Archive for May, 2012

Future of Intellectual Protectionism and not much Innovation Policy

Wednesday, May 23rd, 2012

I read all of the pieces selected for a „Future of copyright” anthology resulting from a contest run by the Modern Poland Foundation (apparently the winner of a small cash prize will be announced tomorrow; I highly recommend all of the pieces below and commend the judges for their selections):

7 are fiction (the 3 exceptions are me, Spitzlinger, and Togi). 5 of these are dystopian (exceptions: Binns, Mansoux), 4 of which (exception: Żyła) involve some kind of fundamental loss of personal control as a result of intellectual protectionism (even more fundamental than drug war style enforcement involves, which Żyła’s does concern). 3 of these (exception: Eddie) involve extrapolations of DRM, 2 of which (exception: Melin) involve DRM implants.

I’d like to see versions of the dystopian stories written as IP propaganda, e.g., recast as RIAA/MPAA pieces from the future (several of the stories have funnily named future enforcement organizations in that vein). Such could be written as satire, apology, or even IP totalist advocacy (utopian rather than dystopian).

Of the dystopian stories, Solís is probably most dystopian, Eddie most humorous, and Betteridge overall best executed. Żyła needs a bit of development — the trend posited is incongruous and unexplained — but maybe due to an unknown factor to be suggested by fictional future freakonomics, or perhaps I just missed it. Melin ends with some hope, but annoys me for contemporary reasons — why would the recipient of a body part artificially grown with “open” methods be constrained in the disposition of that part by a “Creative Commons license” on those methods? Another reason to discourage use of CC licenses for hardware design.

The two non-dystopian stories take the form of a “letter from the future” in which various “open” movements and “models” win (Binns; if I had to bet on a winner of the contest, I’d put my money on this one) and an allegory for the history and projected future of copyright (Mansoux; probably the piece I enjoyed reading most).

Of the 3 non-fiction pieces, Togi is most non-standard — a rant in the form of lemmas — and fun, though briefly goes off the rails in asserting that “those entities which represent the greatest tax gain will be preferred by government.” If that were the case, all that is prohibited would instead be taxed. Statements about “revenue” leave little wiggle room, but I suppose a charitable interpretation would include in “tax gain” all rents to those influencing power, be they bootleggers, baptists, or those directly obtaining tax revenue. Spitzlinger accepts the stories my piece rejects and suggests something like the Creative Commons Attribution-NonCommercial-ShareAlike license be the default for new works, with the possibility of additional temporary restriction (a one-year usufruct, perhaps?).

All of the pieces evince unhappiness with the current direction of information governance. Of those that reveal anything about where they stand on the reform spectrum (admitting that one dimension makes for an impoverished description of reform possibilities; that’s one of the points I hoped to communicate in my piece) I’d place Binns, Melin, and Spitzlinger away from abolition, and me, Mansoux, and Togi toward abolition.

I expect the contest and anthology to be criticized for only representing reform viewpoints. Sadly, no maximalist pieces were submitted. The most moderate reform submission didn’t follow contest rules (not a new piece, no license offered). More than alternate perspective versions of IP dystopias, I’d like to see attempts to imagine future systems which increase private returns to innovation, perhaps looking nothing like today’s copyright, patent, etc., and increase overall social welfare — I’m dubious, but please try.

Update 20120524: The two most fun and non-standard entries won: Mansoux, with an honorable mention to Togi. I now must also congratulate the judges on their good taste. Read those two, or the whole anthology (pdf).

International law should mandate much higher standards for military personnel

Wednesday, May 23rd, 2012

The U.S. army says it will reduce personnel from 570k at the peak of the Iraq occupation and 558k as of March to 490k in 2017 in part by lowering the number of personnel with “moral, medical and criminal” problems.

Way too small a reduction if the U.S. is to stanch its long-term decline resulting from maintaining an empire. But nevermind that. Using criminals as occupiers is an invitation to atrocity — as is using teenagers as occupiers. U.S. policy, indeed that of all nations, ought eliminate any possibility of military employment for criminals and those under 21 years of age. Any other policy ought be a violation of international law.

Too little, too late, perhaps, depending on how quickly human military personnel are replaced by robots.

Perhaps of more longstanding relevance (it could include drone actions) invasion/occupation ethics also ought be a matter of international law.

The market euphemistically known as the community of nations must do a much better job of self-regulating…or else!

Have a good upcoming weekend, including those in places where Memorial Day is observed.

Go! Oakland Warriors!

Tuesday, May 22nd, 2012

A paid entertainment basketball franchise called the Warriors is apparently planning to move from Oakland to San Francisco, supposedly a turnover by Oakland officials, who are in denial.

Instead, these officials, and all Oaklanders, ought celebrate the shipping off of anti-intellectual violent extortionate spectacle. Pity not shipped off further.

Mayor Jean Quan and others ought be ashamed of lusting after paid entertainment franchise owners at all. Quan and company are fond of calling Oakland “a city of the 99%” and thus should reject the appalling wealth transfer to the tiny fraction of the 1% that is pro sports. The team owners claim “no new taxes” for San Franciscans. Yeah, right, and I have a second Bay Bridge to sell you. All of their incentives point to beggaring the municipality forever more.

Previous: Things that bring all the classes and cultures in a community together.

technicaldebt.xsl

Thursday, May 17th, 2012

Former colleague Nathan Yergler has a series of posts on technical debt (1, 2, 3). I’m responsible for some of the debt described in the third posting:

We had the “questions” used for selecting a license modeled as an XSLT transformation (why? don’t remember; wish I knew what we were thinking when we did that)

In 2004, I was thinking:

The idea is to encapsulate the “choose license” process in a file or a few files that can be reused in different environments (e.g., standalone apps) without having those apps reproduce the core language surrounding the process or the rules for translating user answers into a license choice and associated metadata.

Making the “questions” available as XML (questions.xml) and “rules” as XSL (chooselicense.xsl) attempts to maximize accessibility and minimize reimplementation of logic across multiple implementations.

I also thought about XSLT as an interesting mechanism for distributing untrusted code. Probably too complex, or just too unconventional and ill-supported, and driven by bad requirements. I’ll probably say more about the last in a future refutation post.
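To make the questions-as-XML, rules-as-XSLT design concrete, here is a minimal sketch of how a host application might apply such a stylesheet, assuming Python with lxml. Only the file names questions.xml and chooselicense.xsl come from the text above; the answer element names, the embedded stylesheet, and the license mapping are simplified stand-ins for illustration, not the original Creative Commons code.

from lxml import etree

# Hypothetical answers document a standalone app might build from user input;
# element names here are assumptions, not the real questions.xml vocabulary.
answers = etree.XML("""
<answers>
  <allow-commercial>no</allow-commercial>
  <derivatives>sharealike</derivatives>
</answers>
""")

# Simplified stand-in for chooselicense.xsl: the rules mapping answers to a
# license choice live in the stylesheet, not in the host application.
chooselicense = etree.XSLT(etree.XML("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/answers">
    <result>
      <xsl:choose>
        <xsl:when test="allow-commercial='no' and derivatives='sharealike'">
          <license uri="http://creativecommons.org/licenses/by-nc-sa/2.0/"/>
        </xsl:when>
        <xsl:otherwise>
          <license uri="http://creativecommons.org/licenses/by/2.0/"/>
        </xsl:otherwise>
      </xsl:choose>
    </result>
  </xsl:template>
</xsl:stylesheet>
"""))

# Any environment with an XSLT processor can reuse the same rules.
print(etree.tostring(chooselicense(answers), pretty_print=True).decode())

The point, per the rationale quoted above, is that the choosing logic lives in the stylesheet, so any environment with an XSLT processor can reuse it without reimplementing it.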

Anyway, I’m sorry for that bit. I recommend Nathan’s well written series.

Open Source Semiconductor Core Licensing → GPL hardware?

Saturday, May 12th, 2012

In Open Source Semiconductor Core Licensing (pdf; summary) Eli Greenbaum considers when use of semiconductor core designs licensed under the GPL would make designs of chips and devices, and possibly physical objects based on those designs, trigger GPL requirements to distribute the design of a derived work under the GPL.

It depends of course, but overall Greenbaum’s message for proprietary hardware is exactly the same as innumerable commentators’ messages for proprietary software:

  • If you use any GPL work, be extremely careful to isolate that use in ways that minimize the chances one could successfully claim your larger work triggers GPL requirements;
  • Excluding GPL work would be easier; if you want to incorporate open source works, consider only LGPL (I don’t understand why Greenbaum didn’t mention permissive licenses, but typically they’d be encouraged here).

Greenbaum concludes:

The semiconductor industry has been moving further toward the use of independently developed cores to speed the creation of new devices and products. However, the need for robustly maintained and supported cores and the absence of clear rules and licenses appropriate for the industry’s structure and practice have stymied the development of an open source ecosystem, which might otherwise have been a natural outgrowth of the use of independently developed cores. The development of a context-specific open source license may be the simplest way to clarify the applicable legal rules and encourage the commercial use of open source cores.

That’s something like what John Ackermann wanted to show more generally for hardware designs in a paper I’ve written about before. Each leaves me unconvinced:

  • If one wants copyleft terms, whether to protect a community or proprietary licensing revenue, use the GPL, which gives you plenty of room to aggressively enforce as and if you wish;
  • If you don’t want copyleft terms, use a permissive license such as the Apache License 2.0 (some people understand this but still think a version tweaked for hardware is necessary; I’m skeptical of that too).

Greenbaum does mention, in a footnote, Ackermann’s paper and the TAPR license and other “open hardware” licenses I previously discussed:

While “open hardware” licenses do exist, they do not take account of many of the complexities of the semiconductor device manufacturing process. For example, the TAPR Open Hardware License does not address the use of technology libraries, the incorporation of soft cores in a device design, or the use of independent contractors for parts of the design process.

I think this highlights a difference of perspective. “Open hardware” people inclined toward copyleft want licenses which even more clearly than the GPL impose copyleft obligations on entities that build on copylefted designs. Greenbaum doesn’t even sketch what a license he’d consider appropriate for the industry would look like, but I’m doubtful that a license tailored to enabling some open collaboration but protecting revenues in industry-specific ways would be considered free or open by many people, or be used much.

I suspect the reason open hardware has only begun taking off recently (and will be huge soon) and open semiconductor design not yet (though for both broad and narrow categories people have been working on it for well over a decade) has almost nothing to do with the applicability of widely used licenses (which are far from ideal even for software, but network effects rule) and everything to do with design and production technologies that make peer production a useful addition.

Although I think the conclusion is weak (or perhaps merely begs for a follow-up explaining the case), Greenbaum’s paper is well worth reading, in particular section VI. Distribution of Physical Devices, which makes the case that the GPL applies to such devices based on copyright, contract and copyright-like restrictions, and patent. These are all really important issues for info/innovation/commons governance to grapple with going forward. My hope is that existing license stewards take this to heart (e.g., do serious investigations of how GPLv3+ and Apache 2.0 can be best used for designs, and take what is learned and what the relevant communities say when in the fullness of time the next versions of those licenses are developed; the best contribution Creative Commons can probably make is to increase compatibility with software licenses and disrecommend direct use of CC licenses for designs as it has done for software) and that newer communities not operate in an isolated manner when it comes to commons governance.

[e]Book escrow

Thursday, May 10th, 2012

I had no intention of writing yet another post about DRM today. But a new post on Boing Boing, Libraries set out to own their ebooks, has some of the same flavor as some of the posts I quoted yesterday and is a good point of departure (for making a few more points, and then not writing any more about the topic for some time).

Today’s Boing Boing post (note their Day Against DRM post from last week) says a library in Colorado is:

buying eBooks directly from publishers and hosting them on its own platform. That platform is based on the purchase of content at discount; owning—not leasing—a copy of the file; the application of industry-standard DRM on the library’s files; multiple purchases based on demand; and a “click to buy” feature.

I think that’s exactly what Open Library is doing (maybe excepting “click to buy”; not sure what happened to “vending” mentioned when BookServer was announced). A letter to publishers from the library is fairly similar to the Internet Archive’s plea of a few days ago. Excerpt:

  • We will attach DRM when you want it. Again, the Adobe Content Server requires us to receive the file in the ePub format. If the file is “Creative Commons” and you do not require DRM, then we can offer it as a free download to as many people as want it. DRM is the default.
  • We will promote the title. Over 80% of our adult checkouts (and we checked out over 8.2 million items last year) are driven by displays. We will present e-content data (covers and descriptions) on large touch screens, computer catalogs, and a mobile application. These displays may be “built” by staff for special promotions (Westerns, Romances, Travel, etc.), automatically on the basis of use (highlighting popular titles), and automatically through a recommendation engine based on customer use and community reviews.
  • We will promote your company. See a sample press release, attached.

I did not realize libraries were so much like retail (see “driven by displays”). Disturbing, but mostly off-topic.

The letter lists two concerns, both financial. Now: give libraries discounts. Future: allow them to sell used copies. DRM is not a concern now, nor for the future. As I said a couple days ago, I appreciate the rationale for making such a deal. Librarian (and Wikimedian, etc) Phoebe Ayers explained it well almost exactly two years ago: benefit patrons (now). Ok. But this seems to me to fit what ought to be a canonical definition of non-visionary action: choosing to climb a local maximum which will be hard to climb down from, with higher peaks in full view. Sure, the trails are not known, but must exist. This “vision” aspect is one reason Internet Archive’s use of DRM is more puzzling than local libraries’ use.

Regarding “owning—not leasing—a copy of the file”, I now appreciate more a small part of the Internet Archive’s recent plea:

re-format for enduring access, and long term preservation

Are libraries actually getting books from publishers in formats ideal for these tasks? I doubt it, but if they are, that’s a very significant plus.

I dimly recall source code escrow being a hot topic in software around 25 years ago. (At which time I was reading industry rags…at my local library.) I don’t think it has been a hot topic for a long time, and I’d guess because the ability to run the software without a license manager, and to inspect, fix, and share the software right now, on demand, rather than as a failsafe mechanism, is a much, much better solution. Good thing lots of people and institutions over the last decades demanded the better solution.

DRM and BookServer/Internet Archive/Open Library commentary review

Wednesday, May 9th, 2012

After posting DRM and the Churches of Universal Access to All Knowledge’s strategic plans I noticed some other mentions of DRM and BookServer/Internet Archive/Open Library. I’m dropping them here with a little bit of added commentary.

First there’s my microcarping at the launch event (2009-10-29, over 2.5 years ago). Fran Toolan blogged about the event and had a very different reaction:

The last demonstration was not a new one to me, but Raj came back on and he and Brewster demonstrated how using the Adobe ACS4 server technology, digital books can be borrowed, and protected from being over borrowed from libraries everywhere. First Brewster demonstrated the borrowing process, and then Raj tried to borrow the same book but found he couldn’t because it was already checked out. In a tip of the hat to Sony, Brewster then downloaded his borrowed text to his Sony Reader. This model protects the practice of libraries buying copies of books from publishers, and only loaning out what they have to loan. (Contrary to many publishers fears that it’s too easy to “loan” unlimited copies of e-Books from libraries).

As you’ll see (and saw in the screenshot in the last post), a common approach is to state that some Adobe “technology” or “software” is involved, but not to say DRM.

A CNET story covering the announcement doesn’t even hint at DRM, but it does have a quote from Internet Archive founder Brewster Kahle that gives some insight into why they’re taking the approach they have (in line with what I said in the previous post, and see the accompanying picture there):

“We’ve now gotten universal access to free (content),” Kahle added. “Now it’s time to get universal access to all knowledge, and not all of this will be free.”

A report from David Rothman missed the DRM entirely, but understands it lurks at least as an issue:

There’s also the pesky DRM question. Will the master searcher provide detailed rights information, and what if publishers insist on DRM, which is anathema to Brewster? How to handle server-dependent DRM, or will such file be hosted on publisher sites?

Apparently it isn’t, and Adobe technology to the rescue!

Nancy Herther noted DRM:

Kahle and his associates are approaching this from the perspective of creating standards and processes acceptable to all stakeholders-and that includes fair attention to digital rights management issues (DRM). […] IA’s focus is more on developing a neutral platform acceptable to all key parties and less on mapping out the digitization of the world’s books and hoping the DRM issues resolve themselves.

The first chagrined mention of DRM that I could find came over 8 months later from Petter Næss:

Quotable: “I figure libraries are one of the major pillars of civilization, and in almost every case what librarians want is what they should get” (Stewart Brand)

Bit strange to hear Brand waxing so charitable about a system that uses DRM, given his EFF credentials, but so it goes.

2011-01-09 maiki wrote that a book page on the Open Library site claimed that “Adobe ePUB Book Rights” do not permit “reading aloud” (conjure a DRM helmet with full mask to make that literally true). I can’t replicate that screen (capture at the link). Did Open Library provide more up-front information then than it does now?

2011-03-18 waltguy posted the most critical piece I’ve seen, but closes by granting the possibility of good strategy:

It looks very much like the very controlled lending model imposed by publishers on libraries. Not only does the DRM software guard against unauthorized duplication. But the one user at a time restriction means that libraries have to spend more money for additional licences to serve multiple patrons simultaneously. Just like they would have to buy more print copies if they wanted to do that.

[…]

But then why would the Open Library want to adopt such a backward-looking model for their foray into facilitating library lending of ebooks ? They do mention some advantages of scale that may benefit the nostly public libraries that have joined.

[…]

However, even give the restrictions, it may be a very smart attempt to create an open-source motivated presence in the commercial-publisher-dominated field of copyrighted ebooks distribution. Better to be part of the game to be able to influence it’s future direction, even if you look stodgy.

2011-04-15 Nate Hoffelder noted concerning a recent addition to OpenLibrary:

eBooks can be checked out from The Open Library for a period of 2 weeks. Unfortunately, this means that Smashwords eBooks now have DRM. It’s built into the system that the Open Library licensed from Overdrive, the digital library service.

In a comment, George Oates from Open Library clarified:

Hello. We thought it might be worth correcting this statement. We haven’t licensed anything from Overdrive. When you borrow a book from the Open Library lending library, there are 3 ways you can consume the book:

1) Using our BookReader software, right in the browser, nothing to download,
2) As a PDF, which does require installing the Adobe Digital Editions (ADE) software, to manage the loan (and yes, DRM), or
3) As an ePub, which also requires consumption of the book within ADE.

Just wanted to clarify that there is no licensing relationship with Overdrive, though Overdrive also manages loans using ADE. (And, if we don’t have the book available to borrow through Open Library, we link through to the Overdrive system where we know an Overdrive identifier, and so can construct a link into overdrive.com.)

This is the first use of the term “DRM” by an Internet Archive/Open Library person in connection with the service that I’ve seen (though I’d be very surprised if it was actually the first).

2011-05-04 and again 2012-02-05 Sarah Houghton mentions Open Library very favorably in posts lambasting DRM. I agree that DRM is negative and Open Library positive, but find it just a bit odd in such a post to promote a “better model” that…also uses DRM. (Granted, not every post needs to state all relevant caveats.)

2011-06-25 the Internet Archive made an announcement about expanding OpenLibrary book lending:

Any OpenLibrary.org account holder can borrow up to 5 eBooks at a time, for up to 2 weeks. Books can only be borrowed by one person at a time. People can choose to borrow either an in-browser version (viewed using the Internet Archive’s BookReader web application), or a PDF or ePub version, managed by the free Adobe Digital Editions software. This new technology follows the lead of the Google eBookstore, which sells books from many publishers to be read using Google’s books-in-browsers technology. Readers can use laptops, library computers and tablet devices, including the iPad.

Someone blogged about the announcement, using the three characters “DRM”:

The open Library functions in much the same way as OverDrive. Library patrons can check out up to 5 titles at a time for a period of 2 weeks. The ebooks can be read online or on any Device or app that supports Adobe DE DRM.

2011-07-05 a public library in Kentucky posted:

The Open Library is a digital library with an enormous ammount of DRM free digital books. The books are multiple formats, ranging from PDF to plain text for the Dial-up users out there. We hope you check them out!

That’s all true, Open Library does have an enormous amount of DRM-free digital books. And a number of restricted ones.

2011-08-13 Vic Richardson posted an as far as I can tell accurate description for general readers.

Yesterday (2012-05-08) Peter Brantley of the Internet Archive answered a question about how library ebook purchases differ from individual purchases. I’ll just quote the whole thing:

Karen, this is a good question. Because ebooks are digital files, they need to be hosted somewhere in order to be made available to individuals. When you buy from Amazon, they are hosting the file for the publisher, and permit its download when you purchase it. For a library to support borrowing, it has to have the ebook file hosted on its behalf, as most libraries lack deep technical expertise; traditionally this is done by a service provider such as Overdrive. What the Internet Archive, Califa (California public library consortium), and Douglas County, Colorado are trying to do is host those files directly for their patrons. To do that, we need to get the files direct from the publisher or their intermediary distributor — in essence, we are playing the role of Amazon or Barnes & Noble, except that as a library we want people to be able to borrow for free. This sounds complicated, and it is, but then we have to introduce DRM, which is a technical protection measure that a library ebook provider has to implement in order to assure publishers that they are not risking an unacceptable loss of sales. DRM complicates the user experience considerably.

My closing comment-or-so: Keep in mind that it is difficult for libraries to purchase even restricted copies when digesting good news about a publisher planning to drop DRM. The death of DRM would be good news indeed, but is it inevitable (for books)? I doubt it. My sense is that each step forward against DRM has been matched by two (often silent) steps back.

DRM and the Churches of Universal Access to All Knowledge’s strategic plans

Friday, May 4th, 2012

img_1825.jpg

Over 2.5 years ago (2009-10-19) the Internet Archive celebrated its move into a former church (I know it’s a cheap shot, but my immediate reaction was “yay, monument to ignorance made into a monument to knowledge; more like that please (if we must have monuments)!”) and launched BookServer. The latter was described as “like the web, but for books”, illustrated with a slide featuring a cloud in the middle surrounded by icons representing various devices and actors (see the same or similar image at the previous link). I was somewhat perplexed — if a less credible entity had described their project as “like the web, but for Foo” as illustrated by a picture of a cloud labeled “FooServer”, my bullshit alarm would’ve been going crazy.

For the remainder of the event a parade of people associated in some way with books endorsed the project on stage. I only remember a few of them. One was Adam Hyde, who recently drafted a book called A Webpage is a Book. Somewhere in the middle of this parade someone stood out — tall and slick, salesperson slick — and gave a spiel about how Adobe was excited about BookServer and using technology to maximize getting content to consumers. In any case, it was obvious from what the Adobe person said that BookServer, whatever it was, would be using DRM. I nearly fell out of my seat, but I don’t think anyone else noticed — everyone just clapped, same as for all other endorsers — and the crowd was filled with people who ought to have understood and been alarmed.

Over the past couple years I occasionally wondered what became of BookServer and its use of DRM, but was reminded to look by Mako Hill’s post in March concerning how it often isn’t made clear whether a particular offer is made with DRM. I didn’t see anything on the Internet Archive site, but a few days ago Peter Brantley’s writeup of a Digital Public Library of America meeting included:

Kahle announced his desire to broaden access to 20th Century literature, much of it still in copyright, by digitizing library collections and making them available for a 1-copy/1-user borrowing system, such as that provided by the Internet Archive’s Open Library, in concert with State libraries.

Right, OpenLibrary in addition to book metadata (“one web page for every book”; do we obtain recursion if we take Hyde literally? a mere curiosity, as we probably shouldn’t) now offers downloading, reading, and borrowing in various combinations for some books. Downloading includes the obvious formats. Reading is via the excellent web-based Internet Archive BookReader, and is available both for books that may be downloaded and as a borrowing option. In the borrowing case, only one person at a time may read a particular book on the OpenLibrary site. The other digital borrowing option is where DRM comes in — Adobe Digital Editions is required. (This is for books that can be borrowed via OpenLibrary; some may be borrowed digitally from traditional libraries via OverDrive, which probably also uses DRM.)

This and screens leading up to this are clear to me, but I don’t know about most people. That there’s DRM involved is just not deemed to be pertinent; some particular software is needed, that’s all. For myself, the biggest improvement not involving a big policy change would be to split up the current “Show only eBooks” search option. Maybe “Show only downloadable eBooks”.

img_1823.jpg

OpenLibrary is looking to expand its ebook “lending” offerings according to a post made just two days ago, We want to buy your books! Internet Archive Letter to Publishers:

We currently buy, lend, and preserve eBooks from publishers and booksellers, but we have not found many eBooks for sale at any price. The Internet Archive is running standard protection systems to lend eBooks from our servers through our websites, openlibrary.org and archive.org. In this way, we strive to provide a seamless experience for our library patrons that replicates a traditional library check-out model, but now with eReaders and searching.

By buying eBooks from you, we hope to continue the productive relationship between libraries and publishers. By respecting the rights and responsibilities that have evolved in the physical era, we believe we will all know how to act: one patron at a time, restrictions on copying, re-format for enduring access, and long term preservation.

Rather than begging to buy books with restrictions, I’d prefer the Internet Archive, and indeed everyone, to demand books without restrictions, software or legal (of course they’re mixed given current malgovernance — anticircumvention laws). But that’s a different strategy, possibly requiring a lower discount rate. I can appreciate the Internet Archive’s dedication to being a library, and getting its patrons — everyone — access to knowledge, right now.

Still, it would be nice if libraries were to participate (even more, I know many librarians do) in anti-DRM activism, such as a Day Against DRM, which is today. Also see my Day Against DRM post from last year.

Speaking of different strategies, Creative Commons licenses so far include a regulatory clause prohibiting distribution with DRM. Some people have been dissatisfied with this clause since the beginning, and it is again being debated for version 4.0 of the licenses. I still don’t think the effectiveness (in promoting the desired outcome, a more free world; enforcement, enforceability, etc, all ought be subsidiary) of the options has really been discussed, though I did try:

I suspect that anyone who has or will bother to participate in discussions about CC and DRM is a bitter opponent of DRM (I can say this with certainty about most of the participants so far). My guess is that the disagreement comes not from one or the other set of people hating or misunderstanding freedom or accepting DRM, but from different estimations of the outcomes of different strategies.

Keeping or strengthening the DRM prohibition fights DRM by putting DRM-using platforms at a disadvantage (probably not significant now, but could become substantial if more CC-licensed works become culturally central and significant enforcement efforts commence) and by putting CC’s reputation unambiguously against DRM, making the license an expression of the world we aspire to live in, and giving policy advocates a talking point against mandating DRM anywhere (“it breaks this massive pool of content”).

Weakening through parallel distribution or removing altogether the DRM prohibition fights DRM indirectly, by removing a barrier (probably small now, given widespread non-compliance) to CC-licensed works becoming culturally central (ie popular) and thus putting DRM-using platforms at a disadvantage – the defect being useless to gain access to content, thus being merely a defect.

Personally, I find the second more compelling, but I admit it is simply the sort of story that usually appeals to me. Also, I find it congruent with the conventional wisdom a broad “we” tell to people who just don’t get it, supposedly: obscurity is a bigger threat than piracy. But I don’t expect anyone to change their minds as a result. Especially since this is in concept more or less what Evan Prodromou was saying in 2006 http://evan.prodromou.name/Free_content_and_DRM :-)

I do think that expression is important, and whatever gets baked into 4.0, CC could do more in a couple ways:

1. Communicate the DRM prohibition especially on license deeds (where applicable, at least in <=3.0); suggested by Luis Villa in http://lists.ibiblio.org/pipermail/cc-licenses/2012-January/006663.html
2. Make anti-DRM advocacy a bigger part of CC's overall message; a bit at http://creativecommons.org/tag/drm but IIRC something like Day Against DRM has never been featured on the home page.

Day Against DRM is featured on the CC home page today.

The world has summarily discarded vast systems of restrictions on the labor mobility of medieval serfs, slaves, women, South African blacks, indigenous Australians, and a long list of others.

Wednesday, May 2nd, 2012

I highly recommend the paper Economics and Emigration: Trillion-Dollar Bills on the Sidewalk? (pdf, summary) by Michael Clemens as well as companion materials (mp3 interview).

Clemens surveys the small (four studies; I think I’d only heard of one of them) literature that has estimated the gains from removing all barriers to international migration. The estimates range from 67% to 147% of global product! Compare with summing high and low estimates for removing all barriers to international trade and investment: between 0.4% and 5.8% of global product. Yet the amount of attention given to these topics by economists is the inverse, and mostly from the immigration, rather than emigration side of the coin. At best a case of chasing easy precision over oomph (Clemens speculates lack of study could be due to obviousness, mercantilist/nationalist tradition, and lack of data).

I was happy to see mention of historical examples:

Of course, these elasticities could be different at much higher levels of emigration. The literature gives no clear support for such a pattern, however, even under greatly increased migration. In historical cases of large reductions in barriers to labor mobility between high-income and low-income populations or regions, those with high wages have not experienced a large decline. For example, wages of whites in South Africa have not shown important declines since the end of the apartheid regime (Leibbrandt and Levinsohn, 2011), despite the total removal of very large barriers to the physical movement and occupational choice of a poor population that outnumbered the rich population six to one. The recent advent of unlimited labor mobility between some Eastern European countries and Great Britain, though accompanied by large and sudden migration flows, has not caused important declines in British wages (Blanchflower and Shadforth, 2009).

“Brain drain” used as an excuse for apartheid (it’s good for them!) makes me sad, but gladly the literature does not offer support for the effect, as I suspected. There’s a passing mention in the paper, and a bit more in the interview, concerning emigration from Sweden — Clemens says 1/3rd of the population left. The two citations in the linked Wikipedia article claim 20% and 33%, but probably cover different time periods. I’d like to see a comparison of annual emigration rates for various geographies at various times. Clemens also says that one can read anti-immigrant statements in U.S. newspapers 100+ years ago that mirror those of today.

A couple other quotes from the paper:

economists should be open to the possibility that dramatic changes in what is practical can happen over several decades. After all, changes in geographic labor mobility that were unthinkable only a few decades ago have come to pass. Through the 1980s, a Polish national attempting to emigrate to West Germany could be shot by soldiers sealing the Inner German border from the east. Today, Polish jobseekers may move freely throughout Germany. The world has summarily discarded vast systems of restrictions on the labor mobility of medieval serfs, slaves, women, South African blacks, indigenous Australians, and a long list of others.

[…]

These initial results accord well with an entirely separate macroeconomic literature (for example, Hall and Jones 1999) which finds that most of the productivity gap between rich and poor countries is accounted for by place-specific total factor productivity, not by productivity differences inherent to workers. Large differences in location-specific total factor productivity mean that free movement of goods and capital cannot by themselves achieve the global equalization of wages, as they can in the most abstract trade models (O’Rourke and Sinnott, 2004; Freeman, 2006; Kremer, 2006).

Place-specific total factor productivity can increase, and people in all places should strive to increase it (best autonomously) — that’s approximately what “development” is about — but results are very, very mixed. I wonder if various “open” things can’t help more than they do now, and will write about such eventually, but it’d be on the margin. And international apartheid is an abomination that should be eliminated immediately regardless of the long-term substitutability of development and migration.

The economics profession of the 20th century has taken a pass on migration, as it has on IP, with even more tragic results. Please change that! As his interviewer says, Clemens’ paper sketches a research program good for many Ph.D. theses.

2004 Mayday Mayday Mayday

Tuesday, May 1st, 2012

Only one post 8 years ago to the month to refute: noting the announcement of the availability of Creative Commons 2.0 Licenses. In addition to and perhaps in part due to its hastiness, every change introduced in 2.0 was questionable, but I will only bother addressing one here.

ShareAlike 1.0 (SA) was not versioned as a result of all non-Attribution licenses being dropped. Relatively few people chose non-Attribution licenses, and dropping them significantly simplified the license suite, reducing the number of classes of “CC licenses” from 11 to 6 and the number of incompatible pools going forward from 8 to 4 (NC-ND, NC-SA, ND, and SA were each incompatible with any other license), and works under all remaining free licenses published by CC (BY and BY-SA) constituted a single compatible pool (though incompatible with free licenses that existed prior to CC, but that is another line of criticism for another time; the worst that can be said about 2.0 is that it did nothing to address this problem introduced with 1.0).
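As a quick check of those counts, here is a minimal sketch (my own, purely illustrative, assuming Python) that enumerates the 1.0 license classes, i.e., every combination of Attribution, NonCommercial, and one of NoDerivatives/ShareAlike/neither, minus the condition-free combination, and then the classes remaining once Attribution is standard.

from itertools import product

def license_classes(attribution_required):
    # Each license is a combination of optional elements; NoDerivatives and
    # ShareAlike are mutually exclusive, and a license with no conditions at
    # all was never offered.
    by_options = [True] if attribution_required else [True, False]
    classes = []
    for by, nc, deriv in product(by_options, [False, True], ["", "nd", "sa"]):
        if not (by or nc or deriv):
            continue
        parts = ["by" if by else "", "nc" if nc else "", deriv]
        classes.append("-".join(p for p in parts if p))
    return classes

print(len(license_classes(attribution_required=False)))  # 11 classes in 1.0
print(len(license_classes(attribution_required=True)))   # 6 classes from 2.0 on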

The loss of SA has been mourned throughout the past 8 years, not by many people, but by unusually well informed and intentioned people. I’ve defended its loss many times, giving the above reasons, especially the last, and stating that one can waive the attribution condition if one wants to. But:

  • The rationale I’ve emphasized is weak. SA 2.0 simply could’ve permitted adaptations licensed under itself or BY-SA.
  • It isn’t clear how one is supposed to communicate effectively that one has waived the attribution condition.
  • SA was special. To my knowledge, the nearest any copyleft license has come to purely neutralizing copyright, almost sans regulatory conditions.