Post Books

Future of Intellectual Protectionism and not much Innovation Policy

Wednesday, May 23rd, 2012

I read all of the pieces selected for a „Future of copyright” anthology resulting from a contest run by the Modern Poland Foundation (apparently the winner of a small cash prize will be announced tomorrow; I highly recommend all of the pieces below and commend the judges for their selections):

7 are fiction (the 3 exceptions are me, Spitzlinger, and Togi). 5 of these are dystopian (exceptions: Binns, Mansoux), 4 of which (exception: Żyła) involve some kind of fundamental loss of personal control as a result of intellectual protectionism (even more fundamental than drug war style enforcement involves, which Żyła’s does concern). 3 of these (exception: Eddie) involve extrapolations of DRM, 2 of which (exception: Melin) involve DRM implants.

I’d like to see versions of the dystopian stories written as IP propaganda, e.g., recast as RIAA/MPAA pieces from the future (several of the stories have funnily named future enforcement organizations in that vein). Such could be written as satire, apology, or even IP totalist advocacy (utopian rather than dystopian).

Of the dystopian stories, Solís is probably most dystopian, Eddie most humorous, and Betteridge overall best executed. Żyła needs a bit of development — the trend posited is incongruous and unexplained — but maybe due to an unknown factor to be suggested by fictional future freakonomics, or perhaps I just missed it. Melin ends with some hope, but annoys me for contemporary reasons — why would the recipient of a body part artificially grown with “open” methods be constrained in the disposition of that part by a “Creative Commons license” on those methods? Another reason to discourage use of CC licenses for hardware design.

The two non-dystopian stories take the form of a “letter from the future” in which various “open” movements and “models” win (Binns; if I had to bet on a winner of the contest, I’d put my money on this one) and an allegory for the history and projected future of copyright (Mansoux; probably the piece I enjoyed reading most).

Of the 3 non-fiction pieces, Togi is most non-standard — a rant in the form of lemmas — and fun, though briefly goes off the rails in asserting that “those entities which represent the greatest tax gain will be preferred by government.” If that were the case, all that is prohibited would instead be taxed. Statements about “revenue” leave little wiggle room, but I suppose a charitable interpretation would include in “tax gain” all rents to those influencing power, be they bootleggers, baptists, or those directly obtaining tax revenue. Spitzlinger accepts the stories my piece rejects and suggests something like the Creative Commons Attribution-NonCommercial-ShareAlike license be the default for new works, with the possibility of additional temporary restriction (a one-year usufruct, perhaps?).

All of the pieces evince unhappiness with the current direction of information governance. Of those that reveal anything about where they stand on the reform spectrum (admitting that one dimension makes for an impoverished description of reform possibilities; that’s one of the points I hoped to communicate in my piece) I’d place Binns, Melin, and Spitzlinger away from abolition, and me, Mansoux, and Togi toward abolition.

I expect the contest and anthology to be criticized for only representing reform viewpoints. Sadly, no maximalist pieces were submitted. The most moderate reform submission didn’t follow contest rules (not a new piece, no license offered). More than alternate perspective versions of IP dystopias, I’d like to see attempts to imagine future systems which increase private returns to innovation, perhaps looking nothing like today’s copyright, patent, etc., and increase overall social welfare — I’m dubious, but please try.

Update 20120524: The two most fun and non-standard entries won: Mansoux, with an honorable mention to Togi. I now must also congratulate the judges on their good taste. Read those two, or the whole anthology (pdf).

[e]Book escrow

Thursday, May 10th, 2012

I had no intention of writing yet another post about DRM today. But a new post on Boing Boing, Libraries set out to own their ebooks, has some of the same flavor as some of the posts I quoted yesterday and is a good departure (for making a few more points, and not writing any more about the topic for some time).

Today’s Boing Boing post (note their Day Against DRM post from last week) says a library in Colorado is:

buying eBooks directly from publishers and hosting them on its own platform. That platform is based on the purchase of content at discount; owning—not leasing—a copy of the file; the application of industry-standard DRM on the library’s files; multiple purchases based on demand; and a “click to buy” feature.

I think that’s exactly what Open Library is doing (maybe excepting “click to buy”; not sure what happened to “vending” mentioned when BookServer was announced). A letter to publishers from the library is fairly similar to the Internet Archive’s plea of a few days ago. Excerpt:

  • We will attach DRM when you want it. Again, the Adobe Content Server requires us to receive the file in the ePub format. If the file is “Creative Commons” and you do not require DRM, then we can offer it as a free download to as many people as want it. DRM is the default.
  • We will promote the title. Over 80% of our adult checkouts (and we checked out over 8.2 million items last year) are driven by displays. We will present e-content data (covers and descriptions) on large touch screens, computer catalogs, and a mobile application. These displays may be “built” by staff for special promotions (Westerns, Romances, Travel, etc.), automatically on the basis of use (highlighting popular titles), and automatically through a recommendation engine based on customer use and community reviews.
  • We will promote your company. See a sample press release, attached.

I did not realize libraries were so much like retail (see “driven by displays”). Disturbing, but mostly off-topic.

The letter lists two concerns, both financial. Now: give libraries discounts. Future: allow them to sell used copies. DRM is not a concern now, nor for the future. As I said a couple days ago, I appreciate the rationale for making such a deal. Librarian (and Wikimedian, etc) Phoebe Ayers explained it well almost exactly two years ago: benefit patrons (now). Ok. But this seems to me to fit what ought to be a canonical definition of non-visionary action: choosing to climb a local maximum which will be hard to climb down from, with higher peaks in full view. Sure, the trails are not known, but must exist. This “vision” aspect is one reason Internet Archive’s use of DRM is more puzzling than local libraries’ use.

Regarding “owning—not leasing—a copy of the file”, I now appreciate more a small part of the Internet Archive’s recent plea:

re-format for enduring access, and long term preservation

Are libraries actually getting books from publishers in formats ideal for these tasks? I doubt it, but if they are, that’s a very significant plus.

I dimly recall source code escrow being a hot topic in software around 25 years ago. (At which time I was reading industry rags…at my local library.) I don’t think it has been a hot topic for a long time, and I’d guess because the ability to run the software without a license manager, and to inspect, fix, and share the software right now, on demand, rather than as a failsafe mechanism, is a much, much better solution. Good thing lots of people and institutions over the last decades demanded the better solution.

DRM and BookServer/Internet Archive/Open Library commentary review

Wednesday, May 9th, 2012

After posting DRM and the Churches of Universal Access to All Knowledge’s strategic plans I noticed some other mentions of DRM and BookServer/Internet Archive/Open Library. I’m dropping them here with a little bit of added commentary.

First there’s my microcarping at the launch event (2009-10-29, over 2.5 years ago). Fran Toolan blogged about the event and had a very different reaction:

The last demonstration was not a new one to me, but Raj came back on and he and Brewster demonstrated how using the Adobe ACS4 server technology, digital books can be borrowed, and protected from being over borrowed from libraries everywhere. First Brewster demonstrated the borrowing process, and then Raj tried to borrow the same book but found he couldn’t because it was already checked out. In a tip of the hat to Sony, Brewster then downloaded his borrowed text to his Sony Reader. This model protects the practice of libraries buying copies of books from publishers, and only loaning out what they have to loan. (Contrary to many publishers fears that it’s too easy to “loan” unlimited copies of e-Books from libraries).

As you’ll see (and saw in the screenshot last post) a common approach is to state that some Adobe “technology” or “software” is involved, but not say DRM.

A CNET story covering the announcement doesn’t even hint at DRM, but it does have a quote from Internet Archive founder Brewster Kahle that gives some insight into why they’re taking the approach they have (in line with what I said previous post, and see accompanying picture there):

“We’ve now gotten universal access to free (content),” Kahle added. “Now it’s time to get universal access to all knowledge, and not all of this will be free.”

A report from David Rothman missed the DRM entirely, but understands it lurks at least as an issue:

There’s also the pesky DRM question. Will the master searcher provide detailed rights information, and what if publishers insist on DRM, which is anathema to Brewster? How to handle server-dependent DRM, or will such file be hosted on publisher sites?

Apparently it isn’t, and Adobe technology to the rescue!

Nancy Herther noted DRM:

Kahle and his associates are approaching this from the perspective of creating standards and processes acceptable to all stakeholders-and that includes fair attention to digital rights management issues (DRM). […] IA’s focus is more on developing a neutral platform acceptable to all key parties and less on mapping out the digitization of the world’s books and hoping the DRM issues resolve themselves.

The first chagrined mention of DRM that I could find came over 8 months later from Petter Næss:

Quotable: “I figure libraries are one of the major pillars of civilization, and in almost every case what librarians want is what they should get” (Stewart Brand)

Bit strange to hear Brand waxing so charitable about a system that uses DRM, given his EFF credentials, but so it goes.

2011-01-09 maiki wrote that a book page on the Open Library site claimed that “Adobe ePUB Book Rights” do not permit “reading aloud” (conjure a DRM helmet with full mask to make that literally true). I can’t replicate that screen (capture at the link). Did Open Library provide more up-front information then than it does now?

2011-03-18 waltguy posted the most critical piece I’ve seen, but closes granting the possibility of good strategy:

It looks very much like the very controlled lending model imposed by publishers on libraries. Not only does the DRM software guard against unauthorized duplication. But the one user at a time restriction means that libraries have to spend more money for additional licences to serve multiple patrons simultaneously. Just like they would have to buy more print copies if they wanted to do that.

[…]

But then why would the Open Library want to adopt such a backward-looking model for their foray into facilitating library lending of ebooks ? They do mention some advantages of scale that may benefit the nostly public libraries that have joined.

[…]

However, even give the restrictions, it may be a very smart attempt to create an open-source motivated presence in the commercial-publisher-dominated field of copyrighted ebooks distribution. Better to be part of the game to be able to influence it’s future direction, even if you look stodgy.

2011-04-15 Nate Hoffelder noted concerning a recent addition to OpenLibrary:

eBooks can be checked out from The Open Library for a period of 2 weeks. Unfortunately, this means that Smashwords eBooks now have DRM. It’s built into the system that the Open Library licensed from Overdrive, the digital library service.

In a comment, George Oates from Open Library clarified:

Hello. We thought it might be worth correcting this statement. We haven’t licensed anything from Overdrive. When you borrow a book from the Open Library lending library, there are 3 ways you can consume the book:

1) Using our BookReader software, right in the browser, nothing to download,
2) As a PDF, which does require installing the Adobe Digital Editions (ADE) software, to manage the loan (and yes, DRM), or
3) As an ePub, which also requires consumption of the book within ADE.

Just wanted to clarify that there is no licensing relationship with Overdrive, though Overdrive also manages loans using ADE. (And, if we don’t have the book available to borrow through Open Library, we link through to the Overdrive system where we know an Overdrive identifier, and so can construct a link into overdrive.com.)

This is the first use of the term “DRM” by an Internet Archive/Open Library person in connection with the service that I’ve seen (though I’d be very surprised if it was actually the first).

2011-05-04 and again 2012-02-05 Sarah Houghton mentions Open Library very favorably in posts lambasting DRM. I agree that DRM is negative and Open Library positive, but find it just a bit odd in such a post to promote a “better model” that…also uses DRM. (Granted, not every post needs to state all relevant caveats.)

2011-06-25 the Internet Archive made an announcement about expanding OpenLibrary book lending:

Any OpenLibrary.org account holder can borrow up to 5 eBooks at a time, for up to 2 weeks. Books can only be borrowed by one person at a time. People can choose to borrow either an in-browser version (viewed using the Internet Archive’s BookReader web application), or a PDF or ePub version, managed by the free Adobe Digital Editions software. This new technology follows the lead of the Google eBookstore, which sells books from many publishers to be read using Google’s books-in-browsers technology. Readers can use laptops, library computers and tablet devices, including the iPad.

One blogger wrote about the announcement, using the three characters:

The open Library functions in much the same way as OverDrive. Library patrons can check out up to 5 titles at a time for a period of 2 weeks. The ebooks can be read online or on any Device or app that supports Adobe DE DRM.

2011-07-05 a public library in Kentucky posted:

The Open Library is a digital library with an enormous ammount of DRM free digital books. The books are multiple formats, ranging from PDF to plain text for the Dial-up users out there. We hope you check them out!

That’s all true, Open Library does have an enormous amount of DRM-free digital books. And a number of restricted ones.

2011-08-13 Vic Richardson posted an as far as I can tell accurate description for general readers.

Yesterday (2012-05-08) Peter Brantley of the Internet Archive answered a question about how library ebook purchases differ from individual purchases. I’ll just quote the whole thing:

Karen, this is a good question. Because ebooks are digital files, they need to be hosted somewhere in order to be made available to individuals. When you buy from Amazon, they are hosting the file for the publisher, and permit its download when you purchase it. For a library to support borrowing, it has to have the ebook file hosted on its behalf, as most libraries lack deep technical expertise; traditionally this is done by a service provider such as Overdrive. What the Internet Archive, Califa (California public library consortium), and Douglas County, Colorado are trying to do is host those files directly for their patrons. To do that, we need to get the files direct from the publisher or their intermediary distributor — in essence, we are playing the role of Amazon or Barnes & Noble, except that as a library we want people to be able to borrow for free. This sounds complicated, and it is, but then we have to introduce DRM, which is a technical protection measure that a library ebook provider has to implement in order to assure publishers that they are not risking an unacceptable loss of sales. DRM complicates the user experience considerably.

My closing comment-or-so: When digesting good news about a publisher planning to drop DRM, keep in mind that it is difficult for libraries to purchase even restricted copies. The death of DRM would be good news indeed, but is it inevitable (for books)? I doubt it. My sense is that each step forward against DRM has been matched by two (often silent) steps back.

DRM and the Churches of Universal Access to All Knowledge’s strategic plans

Friday, May 4th, 2012

img_1825.jpg

Over 2.5 years ago (2009-10-19) the Internet Archive celebrated its move into a former church (I know it’s a cheap shot, but my immediate reaction was “yay, monument to ignorance made into a monument to knowledge; more like that please (if we must have monuments)!”) and launched BookServer. The latter was described as “like the web, but for books”, illustrated with a slide featuring a cloud in the middle surrounded by icons representing various devices and actors (see the same or similar image at the previous link). I was somewhat perplexed — if a less credible entity had described their project as “like the web, but for Foo”, illustrated by a picture of a cloud labeled “FooServer”, my bullshit alarm would’ve been going crazy.

For the remainder of the event a parade of people associated in some way with books endorsed the project on stage. I only remember a few of them. One was Adam Hyde, who recently drafted a book called A Webpage is a Book. Somewhere in the middle of this parade someone stood out — tall and slick, salesperson slick — and gave a spiel about how Adobe was excited about BookServer and using technology to maximize getting content to consumers. In any case, it was obvious from what the Adobe person said that BookServer, whatever it was, would be using DRM. I nearly fell out of my seat, but I don’t think anyone else noticed — everyone just clapped, same as for all other endorsers — and the crowd was filled with people who ought to have understood and been alarmed.

Over the past couple years I occasionally wondered what became of BookServer and its use of DRM, but was reminded to look by Mako Hill’s post in March concerning how it often isn’t made clear whether a particular offer is made with DRM. I didn’t see anything on the Internet Archive site, but a few days ago Peter Brantley’s writeup of a Digital Public Library of America meeting included:

Kahle announced his desire to broaden access to 20th Century literature, much of it still in copyright, by digitizing library collections and making them available for a 1-copy/1-user borrowing system, such as that provided by the Internet Archive’s Open Library, in concert with State libraries.

Right, OpenLibrary in addition to book metadata (“one web page for every book”; do we obtain recursion if we take Hyde literally? a mere curiosity, as we probably shouldn’t) now offers downloading, reading, and borrowing in various combinations for some books. Downloading includes the obvious formats. Reading is via the excellent web-based Internet Archive BookReader, and is available both for books that may be downloaded and as a borrowing option. In the borrowing case, only one person at a time may read a particular book on the OpenLibrary site. The other digital borrowing option is where DRM comes in — Adobe Digital Editions is required. (This is for books that can be borrowed via OpenLibrary; some may be borrowed digitally from traditional libraries via OverDrive, which probably also uses DRM.)
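The one-person-at-a-time borrowing model described above is simple enough to sketch in code. Below is a toy illustration (all names are my own, not Open Library’s actual implementation), assuming the standard two-week loan period:

```python
from datetime import date, timedelta

LOAN_DAYS = 14  # Open Library loans run for up to two weeks

class LendingLibrary:
    """Toy model of 1-copy/1-user digital lending (illustrative only)."""

    def __init__(self, copies):
        # title -> number of licensed copies owned
        self.copies = copies
        # title -> list of (patron, due_date) for copies currently on loan
        self.loans = {}

    def borrow(self, title, patron, today):
        out = self.loans.setdefault(title, [])
        # Expire overdue loans first, as DRM-enforced returns would
        out[:] = [(p, due) for p, due in out if due > today]
        if len(out) >= self.copies.get(title, 0):
            return None  # every copy is checked out; the patron must wait
        due = today + timedelta(days=LOAN_DAYS)
        out.append((patron, due))
        return due
```

The key restriction, “one user at a time” per purchased copy, is what makes this model mirror print lending: serving more simultaneous patrons requires buying more copies.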

This and screens leading up to this are clear to me, but I don’t know about most people. That there’s DRM involved is just not deemed to be pertinent; some particular software is needed, that’s all. For myself, the biggest improvement not involving a big policy change would be to split up the current “Show only eBooks” search option. Maybe “Show only downloadable eBooks”.

img_1823.jpg

OpenLibrary is looking to expand its ebook “lending” offerings according to a post made just two days ago, We want to buy your books! Internet Archive Letter to Publishers:

We currently buy, lend, and preserve eBooks from publishers and booksellers, but we have not found many eBooks for sale at any price. The Internet Archive is running standard protection systems to lend eBooks from our servers through our websites, openlibrary.org and archive.org. In this way, we strive to provide a seamless experience for our library patrons that replicates a traditional library check-out model, but now with eReaders and searching.

By buying eBooks from you, we hope to continue the productive relationship between libraries and publishers. By respecting the rights and responsibilities that have evolved in the physical era, we believe we will all know how to act: one patron at a time, restrictions on copying, re-format for enduring access, and long term preservation.

Rather than begging to buy books with restrictions, I’d prefer the Internet Archive, and indeed everyone, to demand books without restrictions, software or legal (of course they’re mixed given current malgovernance — anticircumvention laws). But that’s a different strategy, possibly requiring a lower discount rate. I can appreciate the Internet Archive’s dedication to being a library, and getting its patrons — everyone — access to knowledge, right now.

Still, it would be nice if libraries were to participate (even more, I know many librarians do) in anti-DRM activism, such as a Day Against DRM, which is today. Also see my Day Against DRM post from last year.

Speaking of different strategies, Creative Commons licenses so far include a regulatory clause prohibiting distribution with DRM. Some people have been dissatisfied with this clause since the beginning, and it is again being debated for version 4.0 of the licenses. I still don’t think the effectiveness (in promoting the desired outcome, a more free world; enforcement, enforceability, etc, all ought be subsidiary) of the options has really been discussed, though I did try:

I suspect that anyone who has or will bother to participate in discussions about CC and DRM is a bitter opponent of DRM (I can say this with certainty about most of the participants so far). My guess is that the disagreement comes from not one or other set of people hating or misunderstanding freedom or accepting DRM, but from different estimations of the outcomes of different strategies.

Keeping or strengthening the DRM prohibition fights DRM by putting DRM-using platforms at a disadvantage (probably not significant now, but could become substantial if more CC-licensed works become culturally central and significant enforcement efforts commence) and by putting CC’s reputation unambiguously against DRM, making the license an expression of the world we aspire to live in, and giving policy advocates a talking point against mandating DRM anywhere (“it breaks this massive pool of content”).

Weakening through parallel distribution or removing altogether the DRM prohibition fights DRM indirectly, by removing a barrier (probably small now, given widespread non-compliance) to CC-licensed works becoming culturally central (ie popular) and thus putting DRM-using platforms at a disadvantage – the defect being useless to gain access to content, thus being merely a defect.

Personally, I find the second more compelling, but I admit it is simply the sort of story that usually appeals to me. Also, I find it congruent with the conventional wisdom a broad “we” tell to people who just don’t get it, supposedly: obscurity is a bigger threat than piracy. But I don’t expect anyone to change their minds as a result. Especially since this is in concept more or less what Evan Prodromou was saying in 2006 http://evan.prodromou.name/Free_content_and_DRM :-)

I do think that expression is important, and whatever gets baked into 4.0, CC could do more in a couple ways:

1. Communicate the DRM prohibition, especially on license deeds (where applicable, at least in <=3.0); suggested by Luis Villa in http://lists.ibiblio.org/pipermail/cc-licenses/2012-January/006663.html

2. Make anti-DRM advocacy a bigger part of CC's overall message; a bit at http://creativecommons.org/tag/drm but IIRC something like Day Against DRM has never been featured on the home page.

Day Against DRM is featured on the CC home page today.

How to be a democrat

Tuesday, January 3rd, 2012

How to be a dictator isn’t just about politics — or rather it is about politics, everywhere: “It doesn’t matter whether you are a dictator, a democratic leader, head of a charity or a sports organisation, the same things go on.”

The article ends with:

Dictators already know how to be dictators—they are very good at it. We want to point out how they do it so that it’s possible to think about reforms that can actually have meaningful consequences.

I don’t know what if any reforms the authors propose in their book, The Dictator’s Handbook: Why Bad Behavior is Almost Always Good Politics, but good on them encouraging a thinking in terms of meaningful consequences.

I see no hope for consequential progress against dictatorship in the United States. In 2007 I scored Obama and Biden very highly on their responses to a survey on executive power. Despite this, once in power, their administration has been a disaster, as Glenn Greenwald painstakingly and painfully documents.

I haven’t bothered scoring a 2011 candidates survey on executive power. I’m glad the NYT got responses from some of the candidates, but it seemed less interesting than four years ago, perhaps because only the Republican nomination is contested. My quick read: Paul’s answers seem acceptable, all others worship executive power. Huntsman’s answers seem a little more nuanced than the rest, but pointing in the same direction. Romney’s are in the middle of a very tight pack. In addition to evincing power worship, too many of Perry’s answers start with the exact same sentence, reinforcing the impression he’s not smart. Gingrich’s answers are the most brazen.

Other than envious destruction of power (the relevant definition and causes of which being tenuous, making effective action much harder) and gradual construction of alternatives, how can one be a democrat? I suspect more accurate information and more randomness are important — I’ll sometimes express this very specifically as enthusiasm for futarchy and sortition — but I’m also interested in whatever small increases in accurate information and randomness might be feasible, at every scale and granularity — global governance to small organizations, event probabilities to empirically validated practices.

Along the lines of the last, one of the few business books I’ve ever enjoyed is Hard Facts, Dangerous Half-Truths And Total Nonsense: Profiting From Evidence-Based Management, much of which cuts against leadership cult myths. Coincidentally, one of that book’s co-authors recently blogged about evidence that random selection of leaders can enhance group performance.

Collaborative-Futures.org

Thursday, August 26th, 2010

The 2nd edition of Collaborative Futures is now available, and the book has its own site and mailing list–there will be future editions, and you can help write them.

I did a series of posts (also see one on the Creative Commons blog) on the book sprint that produced the 1st edition. The 1st edition was a highly successful experiment, but unpolished. The 2nd edition benefited from contributions by all of the 1st edition’s main collaborators, successfully incorporated new collaborators, and is far more polished. I think the whole team is justifiably proud of the result. Please check it out and subject it to harsh criticism, help with the next edition, or both.

You can also republish verbatim, translated, format-shifted, or modified versions, or incorporate into your own materials (e.g., for a class)–the book and all related assets are released under the Creative Commons Attribution-ShareAlike license — the same as Wikipedia. I don’t think we took advantage of this by incorporating any content from Wikipedia, but as I’m writing this it occurs to me that it would be fairly simple to create a supplement for the book mostly or even entirely consisting of a collection of relevant Wikipedia articles — see examples of such books created using PediaPress; another approach would be to add a feature to Booki (the software used to create Collaborative Futures) to facilitate importing chapters from Wikipedia.
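The Wikipedia-supplement idea above could start from the MediaWiki action API, which can return plain-text extracts of articles. A rough sketch (function names and structure are my own, not Booki’s API):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wikipedia.org/w/api.php"

def extract_url(title):
    # Build a MediaWiki API request for the plain-text extract of one article
    params = {
        "action": "query",
        "prop": "extracts",
        "explaintext": 1,
        "format": "json",
        "titles": title,
    }
    return API + "?" + urlencode(params)

def fetch_chapter(title):
    # Network call; returns the article's plain-text extract
    with urlopen(extract_url(title)) as resp:
        pages = json.load(resp)["query"]["pages"]
        return next(iter(pages.values())).get("extract", "")

def build_supplement(titles):
    # Concatenate fetched articles into one supplement text;
    # like Wikipedia itself, the result would be CC BY-SA licensed
    return "\n\n".join(f"# {t}\n\n{fetch_chapter(t)}" for t in titles)
```

A Booki import feature would presumably map each fetched article to a chapter; PediaPress demonstrates that assembling books this way is practical.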

Here’s a copy of my testimonial currently on the Booki site:

I was involved in the Collaborative Futures book sprint, the first book written using Booki, and the first FLOSS Manuals project that isn’t software documentation. I was amazed by the results materially and socially, and even more so by the just completed 2nd edition of Collaborative Futures, which successfully incorporated several new contributors and benefited from new Booki features.

I am inspired by the potential for book sprints and the Booki software to expand the scope of collaborative production in a wide variety of contexts, especially education. Booki is an exciting new innovative platform that is bringing book production online and is an important new form of free culture / free knowledge production. Platforms that expand the categories of works that can be radically improved through free collaboration (beyond software and encyclopedias) are absolutely essential to building a good future. I enthusiastically endorse Booki and encourage all to use and support it.

Collaborative Futures 5

Saturday, January 23rd, 2010

We finished the text of Collaborative Futures on the book sprint’s fifth day and I added yet another chapter intended for the “future” section. This one may be the oddest in the whole book. You have to remember that I have a bit of an appreciation of leftish verbiage in the service of free software and nearby, and seeing the opportunity to also bundle an against international apartheid rant … I ran with it. Copied below.

I’ll post more about the book’s contents, the sprint, and the Booki software later (but I can’t help noting now that I’m sad about not getting to a chapter on WikiNature). For now no new observations other than that Adam Hyde of FLOSS Manuals put together a really good group of people for the sprint. I enjoyed working with all of them tremendously and hope to do so again in some form. And thanks to Transmediale for hosting. And sad that I couldn’t stay in Berlin longer for Transmediale proper, in particular the Charlemagne Palestine concerts.

Check out Mushon Zer-Aviv’s great sprint finish writeup.

Solidarity

There is no guarantee that networked information technology will lead to the improvements in innovation, freedom, and justice that I suggest are possible. That is a choice we face as a society. The way we develop will, in significant measure, depend on choices we make in the next decade or so.

Yochai Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom

Postnationalism

Catherine Frost, in her 2006 paper Internet Galaxy Meets Postnational Constellation: Prospects for Political Solidarity After the Internet evaluates the prospects for the emergence of postnational solidarities abetted by Internet communications leading to a change in the political order in which the responsibilities of the nation state are joined by other entities. Frost does not enumerate the possible entities, but surely they include supranational, transnational, international, and global in scope and many different forms, not limited to the familiar democratic and corporate.

The verdict? Characteristics such as anonymity, agnosticism to human fatalities and questionable potential for democratic engagement make it improbable that postnational solidarities with political salience will emerge from the Internet — anytime soon. However, Frost acknowledges that we could be looking in the wrong places, such as the dominant English-language web. Marginalized groups could find the Internet a more compelling venue for creating new solidarities. And this:

Yet we know that when things change in a digital age, they change fast. The future for political solidarity is not a simple thing to discern, but it will undoubtedly be an outcome of the practices and experiences we are now developing.

Could the collaboration mechanisms discussed in this book aid the formation of politically salient postnational solidarities? Significant usurpation of responsibilities of the nation state seems unlikely soon. Yet this does not bar the formation of communities that contest with the nation state for intensity of loyalty, in particular when their own collaboration is threatened by a nation state. As an example we can see global responses from free software developers and bloggers to software patents and censorship in single jurisdictions.

If political solidarities could arise from the collaborative work and threats to it, then collaboration might alter the power relations of work. Both globally and between worker and employer — at least incrementally.

Free Labor

Trade in goods between jurisdictions has become less restricted over the last half century — tariff and non-tariff barriers to trade have been greatly reduced. Capital flows have greatly increased.

While travel costs have decreased drastically, in theory giving any worker the ability to work wherever pay (or another desirable quality) is highest, in fact workers are not permitted the freedom that has been given to traders and capitalists. Workers in jurisdictions with less opportunity are as locked into politically institutionalized underemployment and poverty as non-whites were in Apartheid South Africa, while the populations of wealthy jurisdictions are as privileged as whites in the same milieu.

What does this have to do with collaboration? This system of labor is immobilized by politically determined discrimination. It is not likely this system will change without the formation of new postnational orders. However, it is conceivable that as collaboration becomes more economically important — as an increasing share of wealth is created via distributed collaboration — the inequalities of the current system could be mitigated. And that is simply because distributed collaboration does not require physical movement across borders.

Workers in privileged jurisdictions will object — do object — to competition from those born into less privilege. As did white workers to competition from blacks during the consolidation of Apartheid. However, it is also possible that open collaboration could alter relationships between some workers and employers in the workers’ favor both in local and global markets.

Control of the means of production

Open collaboration changes which activities are more efficient inside or outside of a firm. Could the power of workers relative to firms also be altered?

Intellectual property rights prevent mobility of employees insofar as their knowledge is locked into a proprietary standard that is owned by the employer. This factor is all the more important since most of the tools that programmers are working with are available as cheap consumer goods (computers, etc.). The company holds no advantage over the worker in providing these facilities (in comparison to the blue-collar operator referred to above, whose knowledge is bound to the Fordist machine park). When the source code is closed behind copyrights and patents, however, large sums of money are required to access the software tools. In this way, the owner/firm gains the edge back over the labourer/programmer.

This is where the GPL comes in. The free license levels the playing field by ensuring that everyone has equal access to the source code. Or, putting it in Marxist-sounding terms, through free licenses the means of production are handed back to labour. […] By publishing software under free licences, the individual hacker is not merely improving his own reputation and employment prospects, as has been pointed out by Lerner and Tirole. He also contributes to establishing a labour market where the rules of the game are completely different, for him and for everyone else in his trade. It remains to be seen if this translates into better working conditions, higher salaries and other benefits associated with trade unions. At least theoretically, the case is strong that it does. I got the idea from reading Glyn Moody’s study of the FOSS development model, where he states: “Because the ‘product’ is open source, and freely available, businesses must necessarily be based around a different kind of scarcity: the skills of the people who write and service that software.” (Moody, 2001, p.248) In other words, when the source code is made available to everyone under the GPL, the only thing that remains scarce is the skills needed to employ the software tools productively. Hence, the programmer gets an edge over the employer when they are bargaining over salary and working conditions.

It bears stressing that my reasoning needs to be substantiated with empirical data. Comparative research between employed free software programmers and those who work with proprietary software is required. Such a comparison must not focus exclusively on monetary aspects. As important is the subjective side of programming, for instance that hackers report having more fun when participating in free software projects than when they work with proprietary software (Lakhani & Wolf, 2005). Neither do I believe that this is the only explanation of why hackers use the GPL. No less important are the concerns about civil liberties and the anti-authoritarian ethos within the hacker subculture. In sum, hackers are much too heterogeneous a bunch for them all to be included under a single explanation. But I dare to say that the labour perspective deserves more attention than it has been given by popular and scholarly critics of intellectual property till now. Both hackers and academic writers tend to formulate their critique of intellectual property law from a consumer rights horizon and borrow arguments from a liberal, political tradition. There are, of course, noteworthy exceptions. People like Slavoj Zizek and Richard Barbrook have reacted against the liberal ideology implicit in much talk about the Internet by courting the revolutionary rhetoric of the Second International instead. Their ideas are original and eye-catching and often full of insight. Nevertheless, their rhetoric sounds oddly out of place when applied to pragmatic hackers. Perhaps advocates of free software would do better to look for a counter-weight to liberalism in the reformist branch of the labour movement, i.e. in trade unionism. The ideals of free software are congruent with the vision laid down in the “Technology Bill of Rights”, written in 1981 by the International Association of Machinists:

”The new automation technologies and the sciences that underlie them are the product of a world-wide, centuries-long accumulation of knowledge. Accordingly, working people and their communities have a right to share in the decisions about, and the gains from, new technology” (Shaiken, 1986, p.272).

Johan Söderberg, Hackers GNUnited!, CC BY-SA, http://freebeer.fscons.org

Perhaps open collaboration can only be expected to slightly tip the balance of power between workers and employers and change measured wages and working conditions very little. However, it is conceivable, if fanciful, that control of the means of production could lead to a feeling of autonomy that empowers further action outside of the market.

Autonomous individuals and communities

Free Software and related methodologies can give individuals autonomy in their technology environments. They might also give individuals a measure of additional autonomy in the market (or increased ability to stand outside it). This is how Free and Open Source Software is almost always characterized when it is described in terms of freedom or autonomy — giving individual users freedom, or allowing organizations to avoid being held ransom to proprietary licenses.

However, communities that exist outside of the market and state obtain a much greater autonomy. These communities have no need for the freedoms discussed above, even if individual community members do. There have always been such communities, but they did not possess the ability to use open collaboration to produce wealth that significantly competes with, or even supplants, market production. This ability makes these autonomous organizations newly salient.

Furthermore, these autonomous communities (Debian and Wikipedia are the most obvious examples) are pushing new frontiers of governance necessary to scale their collaborative production. Knowledge gained in this process could inform and inspire other communities that could become reinvigorated and more effective through the implementation of open collaboration, including community governance. Such communities could even produce postnational solidarities, especially when attacked.

Do we know how to get from here to there? No. But only through experimentation will we find out. If a more collaborative future is possible, obtaining it depends on the choices we make today.

Collaborative Futures 4

Friday, January 22nd, 2010

Day 4 of the Collaborative Futures book sprint and I added yet another chapter intended for the “future” section, current draft copied below. I’m probably least happy with it, but perhaps I’m just tired. I hope it gets a good edit, but today (day 5) is the final day and we have lots to wrap up!

(Boring permissions note: I’m blogging whole chapter drafts before anyone else touches them, so they’re in the public domain like everything else original here. The book is licensed under CC BY-SA and many of the chapters, particularly in the first half of the book, have had multiple authors pretty much from the start.)

Another observation about the core sprint group of 5 writers, 1 facilitator, and 1 developer: although the sprint is hosted in Berlin, there are no Germans. However, there are three people living in Berlin (from Ireland, Spain, and New Zealand), two living in New York (one from there, another from Israel), one living in and from Croatia, and me, from Illinois and living in California.

I hope to squeeze in a bit of writing about postnationalism and collaboration today — hat tip to Mushon Zer-Aviv. Also see his day 4 post, and Postnational.org, one of his projects.

Beyond Education

Education has a complicated history, including swings between decentralization (e.g., the loose associations of students and teachers typifying some early European universities such as Oxford) and centralized control by the state or church. It’s easy to imagine that in some of these cases teachers had great freedom to collaborate with each other or that learning might be a collaboration among students and teacher, while in others, teachers would be told what to teach, and students would learn that, with little opportunity for collaboration.

Our current and unprecedented wealth has brought near universal literacy and enrollment in primary education in many societies, created impressive research universities, and increased enrollment in university and graduate programs. This apparent success masks that we are in an age of centralized control, driven by standards politically determined at the level of large jurisdictions and a model in which teachers teach to the test and both students and teachers are consumers of educational materials created by large publishers. Current educational structures and practices do not take advantage of the possibilities offered by collaboration tools and methods, and in some cases stand in opposition to the use of such tools.

Much as the disconnect between the technological ability to access and build upon the scientific literature and the political and economic reality of closed access created the Open Access (OA) movement, the disconnect between what is possible and what is practiced in education has created collaborative responses.

Open Educational Resources

The Open Educational Resources (OER) movement encourages the availability of educational materials for free use and remixing — including textbooks and also any materials that facilitate learning. As in the case of OA, there is a strong push for materials to be published under liberal Creative Commons licenses and in formats amenable to reuse in order to maximize opportunities for latent collaboration, and in some cases to form the legal and technical basis for collaboration among large institutions.

OpenCourseWare (OCW) is the best known example of a large institutional collaboration in this space. Begun at MIT, over 200 universities and associated institutions have OCW programs, publishing course content and in many cases translating and reusing material from other OCW programs.

Connexions, hosted by Rice University, is an example of an OER platform facilitating large scale collaborative development and use of granular “course modules” which currently number over 15,000. The Connexions philosophy page is explicit about the role of collaboration in developing OER:

Connexions is an environment for collaboratively developing, freely sharing, and rapidly publishing scholarly content on the Web. Our Content Commons contains educational materials for everyone — from children to college students to professionals — organized in small modules that are easily connected into larger collections or courses. All content is free to use and reuse under the Creative Commons “attribution” license.

Content should be modular and non-linear
Most textbooks are a mass of information in linear format: one topic follows after another. However, our brains are not linear – we learn by making connections between new concepts and things we already know. Connexions mimics this by breaking down content into smaller chunks, called modules, that can be linked together and arranged in different ways. This lets students see the relationships both within and between topics and helps demonstrate that knowledge is naturally interconnected, not isolated into separate classes or books.
Sharing is good
Why re-invent the wheel? When people share their knowledge, they can select from the best ideas to create the most effective learning materials. The knowledge in Connexions can be shared and built upon by all because it is reusable:

  • technologically: we store content in XML, which ensures that it works on multiple computer platforms now and in the future.
  • legally: the Creative Commons open-content licenses make it easy for authors to share their work – allowing others to use and reuse it legally – while still getting recognition and attribution for their efforts.
  • educationally: we encourage authors to write each module to stand on its own so that others can easily use it in different courses and contexts. Connexions also allows instructors to customize content by overlaying their own set of links and annotations. Please take the Connexions Tour and see the many features in Connexions.
Collaboration is encouraged
Just as knowledge is interconnected, people don’t live in a vacuum. Connexions promotes communication between content creators and provides various means of collaboration. Collaboration helps knowledge grow more quickly, advancing the possibilities for new ideas from which we all benefit.

Connexions – Philosophy, CC BY, http://cnx.org/aboutus/

Beyond the institution

OER is not only used in an institutional context — it is especially a boon for self-learning. OCW materials are useful for self-learners, but OCW programs generally do not actively facilitate collaboration with self-learners. A platform like Connexions is more amenable to such collaboration, and wiki-based OER platforms such as Wikiversity and WikiEducator have an even lower barrier to contribution, enabling self-learners (and of course teachers and students in more traditional settings) to collaborate directly on the platform and to participate in the development and repurposing of educational materials.

Self-learning only goes so far. Why not apply the lessons of collaboration directly to the learning process, helping self-learners help each other? This is what a project called Peer 2 Peer University has set out to do:

The mission of P2PU is to leverage the power of the Internet and social software to enable communities of people to support learning for each other. P2PU combines open educational resources, structured courses, and recognition of knowledge/learning in order to offer high-quality low-cost education opportunities. It is run and governed by volunteers.

Scaling educational collaboration

As in the case of science, delivering the full impact of the possibilities of modern collaboration tools requires more than simply using the tools to create more resources. For the widest adoption, collaboratively created and curated materials must meet state-mandated standards and include accompanying assessment mechanisms.

While educational policy changes may be required, perhaps the best way for open education communities to convince policymakers to make these changes is to develop and adopt even more sophisticated collaboration tools, for example reputation systems for collaborators, quality metrics, and collaborative filtering and other discovery mechanisms for educational materials. One example is “lenses” at Connexions (see http://cnx.org/lenses), which allow one to browse resources specifically endorsed by an organization or individual that one trusts.
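To make the “lens” idea concrete, here is a minimal sketch (illustrative only, not Connexions code; the module names, organizations, and trust weights are all invented) of ranking resources by endorsements from sources a user trusts:

```python
# Illustrative sketch: rank OER modules by endorsements from
# organizations the user trusts, a "lens"-like discovery filter.

def rank_modules(endorsements, trust):
    """endorsements: {module: set of endorsing orgs}
    trust: {org: weight in [0, 1]} for orgs the user trusts.
    Returns modules sorted by total trusted endorsement weight."""
    scores = {
        module: sum(trust.get(org, 0.0) for org in orgs)
        for module, orgs in endorsements.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

endorsements = {
    "intro-signals": {"ieee", "rice"},
    "fourier-basics": {"rice"},
    "misc-notes": {"unknown-blog"},
}
trust = {"ieee": 1.0, "rice": 0.8}
print(rank_modules(endorsements, trust))
# → ['intro-signals', 'fourier-basics', 'misc-notes']
```

A real system would of course need identity, spam resistance, and richer metrics, but even this trivially simple filter shows how endorsement data can drive discovery.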

Again, similar to science, clearing the external barriers to adoption of collaboration may result in general breakthroughs in collaboration tools and methods.

Collaborative Futures 3

Thursday, January 21st, 2010

Day 3 of the Collaborative Futures book sprint and we’re close to 20,000 words. I added another chapter intended for the “future” section, current draft copied below. It is very much a scattershot survey based on my paying partial attention for several years. There’s nothing remotely new apart from recording a favorite quote from my colleague John Wilbanks that doesn’t seem to have been written down before.

Continuing a tradition, another observation about the sprint group and its discussions: an obsession with attribution. A current draft says attribution is “not only socially acceptable and morally correct, it is also intelligent.” People love talking about this and glomming all kinds of other issues onto it, including participation and identity. I’m counter-obsessed (which Michael Mandiberg pointed out means I’m still obsessed).

Attribution is only interesting to me insofar as it is a side effect (and thus low cost) and adds non-moralistic value. In the ideal case, it is automated, as in the revision histories of wiki articles and version control systems. In the more common case, adding attribution information is a service to the reader — never mind the author being attributed.

I’m also interested in attribution (and similar) metadata that can easily be copied with a work, making its use closer to automated — Creative Commons provides such metadata if a user choosing a license provides attribution information, and CC license deeds use that metadata to provide copy&pastable attribution HTML, hopefully starting a beneficent cycle.
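Concretely, the deeds emit this metadata as CC REL properties in RDFa. A minimal sketch of what such an attribution snippet can look like (the work, author name, and example.com URLs are placeholders; the license URL and `cc:` properties are the real vocabulary):

```html
<!-- Hypothetical page; the cc: properties follow CC REL. -->
<div xmlns:cc="http://creativecommons.org/ns#"
     about="http://example.com/photo.jpg">
  <a rel="cc:attributionURL" property="cc:attributionName"
     href="http://example.com/alice">Alice</a> /
  <a rel="license"
     href="http://creativecommons.org/licenses/by-sa/3.0/">CC BY-SA 3.0</a>
</div>
```

A tool that understands RDFa can extract the attribution name, attribution URL, and license from this markup and generate correct attribution HTML automatically when the work is reused.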

Admittedly I’ve also said many times that I think attribution, or rather requiring (or merely providing in the case of public domain content) attribution by link specifically, is an undersold term of the Creative Commons licenses — links are the currency of the web, and this is an easy way to say “please use my work and link to me!”

Mushon Zer-Aviv continues his tradition for day 3 of a funny and observant post, but note that he conflates attribution and licensing, perhaps to make a point:

The people in the room have quite strong feelings about concepts of attribution. What is pretty obvious by now is that both those who elevate the importance of proper crediting to the success of collaboration and those who dismiss it all together are both quite equally obsessed about it. The attribution we chose for the book is CC-BY-SA oh and maybe GPL too… Not sure… Actually, I guess I am not the most attribution obsessed guy in the room.

Science 2.0

Science is a prototypical example of collaboration, from closely coupled collaboration within a lab to the very loosely coupled collaboration of the grand scientific enterprise over centuries. However, science has been slow to adopt modern tools and methods for collaboration. Efforts to adopt or translate new tools and methods have been broadly (and loosely) characterized as “Science 2.0” and “Open Science”, very roughly corresponding to “Web 2.0” and “Open Source”.

Open Access (OA) publishing is an effort to remove a major barrier to distributed collaboration in science — the high price of journal articles, effectively limiting access to researchers affiliated with wealthy institutions. Access to Knowledge (A2K) emphasizes the equality and social justice aspects of opening access to the scientific literature.

The OA movement has met with substantial and increasing success recently. The Directory of Open Access Journals (see http://www.doaj.org) lists 4583 journals as of 2010-01-20. The Public Library of Science’s top journals are in the first tier of publications in their fields. Traditional publishers are investing in OA, such as Springer’s acquisition of large OA publisher BioMed Central, or experimenting with OA, for example Nature Precedings.

In the longer term OA may lead to improving the methods of scientific collaboration, e.g., peer review, and to allowing new forms of meta-collaboration. An early example of the former is PLoS ONE, a rethinking of the journal as an electronic publication without a limitation on the number of articles published and with the addition of user rating and commenting. An example of the latter would be machine analysis and indexing of journal articles, potentially allowing all scientific literature to be treated as a database, and therefore queryable — at least all OA literature. These more sophisticated applications of OA often require not just access, but permission to redistribute and manipulate, thus a rapid movement to publication under a Creative Commons license that permits any use with attribution — a practice followed by both PLoS and BioMed Central.
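As a toy illustration of the “literature as a database” idea (the DOIs and abstracts below are invented, and no real service or API is involved), openly licensed article text can be indexed and then queried:

```python
# Toy illustration: if article text and metadata are openly licensed,
# they can be indexed and queried like a database.

from collections import defaultdict

articles = {  # invented DOIs and abstract fragments
    "10.1371/example.0001": "malaria drug resistance in plasmodium",
    "10.1371/example.0002": "gene expression in drosophila development",
    "10.1371/example.0003": "drug targets for malaria treatment",
}

# Build an inverted index: word -> set of article identifiers.
index = defaultdict(set)
for doi, abstract in articles.items():
    for word in abstract.split():
        index[word].add(doi)

def query(*words):
    """Return identifiers of articles mentioning all the given words."""
    sets = [index[w] for w in words]
    return set.intersection(*sets) if sets else set()

print(sorted(query("malaria", "drug")))
# → ['10.1371/example.0001', '10.1371/example.0003']
```

Real machine analysis would work over full text and structured metadata rather than a handful of strings, but the permission to redistribute and manipulate is what makes even this trivial indexing legal at scale.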

Scientists have also adopted web tools to enhance collaboration within a working group as well as to facilitate distributed collaboration. Wikis and blogs have been repurposed as open lab notebooks under the rubric of “Open Notebook Science”. Connotea is a tagging platform (they call it “reference management”) for scientists. These tools help “scale up” and direct the scientific conversation, as explained by Michael Nielsen:

You can think of blogs as a way of scaling up scientific conversation, so that conversations can become widely distributed in both time and space. Instead of just a few people listening as Terry Tao muses aloud in the hall or the seminar room about the Navier-Stokes equations, why not have a few thousand talented people listen in? Why not enable the most insightful to contribute their insights back?

Stepping back, what tools like blogs, open notebooks and their descendants enable is filtered access to new sources of information, and to new conversation. The net result is a restructuring of expert attention. This is important because expert attention is the ultimate scarce resource in scientific research, and the more efficiently it can be allocated, the faster science can progress.

Michael Nielsen, “Doing science online”, http://michaelnielsen.org/blog/doing-science-online/

OA and adoption of web tools are only the first steps toward utilizing digital networks for scientific collaboration. Science is increasingly computational and data-intensive: access to a completed journal article may not contribute much to allowing other researchers to build upon one’s work — that requires publication of all code and data used during the research to produce the paper. Publishing the entire “research compendium” under appropriate terms (e.g., the public domain for data, a free software license for software, and a liberal Creative Commons license for articles and other content) and in open formats has recently been called “reproducible research” — in computational fields, the publication of such a compendium gives other researchers all of the tools they need to build upon one’s work.

Standards are also very important for enabling scientific collaboration, and not just coarse standards like RSS. The Semantic Web and in particular ontologies have sometimes been ridiculed by consumer web developers, but they are necessary for science. How can one treat the world’s scientific literature as a database if it isn’t possible to identify, for example, a specific chemical or gene, and agree on a name for the chemical or gene in question that different programs can use interoperably? The biological sciences have taken a lead in the implementation of semantic technologies, from ontology development and semantic databases to inline web page annotation using RDFa.
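For instance, RDFa can tie the mention of a gene in an article to a shared identifier that different programs can resolve unambiguously. A hypothetical fragment (the surrounding sentence is invented; the Bio2RDF-style URI illustrates the common practice of reusing NCBI gene identifiers):

```html
<!-- Hypothetical article fragment annotated inline with RDFa. -->
<p xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
  Mutations in
  <span about="http://bio2rdf.org/geneid:672"
        property="rdfs:label">BRCA1</span>
  increase susceptibility to breast cancer.
</p>
```

An indexer encountering this markup no longer has to guess which “BRCA1” is meant — the identifier does the disambiguation, which is exactly what querying literature as a database requires.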

Of course all of science, even most of science, isn’t digital. Collaboration may require sharing of physical materials. But just as online stores make shopping easier, digital tools can make sharing of scientific materials easier. One example is the development of standardized Materials Transfer Agreements accompanied by web-based applications and metadata, potentially a vast improvement over the current choice between ad hoc sharing and highly bureaucratized distribution channels.

Somewhere between open science and business (both as in for-profit business and business as usual) is “Open Innovation” which refers to a collection of tools and methods for enabling more collaboration, for example crowdsourcing of research expertise (a company called InnoCentive is a leader here), patent pools, end-user innovation (documented especially by Erik von Hippel in Democratizing Innovation), and wisdom of the crowds methods such as prediction markets.

Reputation is an important question for many forms of collaboration, but particularly in science, where careers are determined primarily by one narrow metric of reputation — publication. If the above phenomena are to reach their full potential, they will have to be aligned with scientific career incentives. This means new reputation systems that take into account, for example, re-use of published data and code, and the impact of granular online contributions, must be developed and adopted.

From the grand scientific enterprise to business enterprise, modern collaboration tools hold great promise for increasing the rate of discovery, which sounds prosaic, but may be our best tool for solving our most vexing problems. John Wilbanks, Vice President for Science at Creative Commons, often makes the point like this: “We don’t have any idea how to solve cancer, so all we can do is increase the rate of discovery so as to increase the probability we’ll make a breakthrough.”

Science 2.0 also holds great promise for allowing the public to access current science, and even in some cases collaborate with professional researchers. The effort to apply modern collaboration tools to science may even increase the rate of discovery of innovations in collaboration!

Collaborative Futures 2

Wednesday, January 20th, 2010

Day 2 of the Collaborative Futures book sprint saw the writing of a number of chapters and the creation of a much more fleshed out table of contents. I spent too much time interrupted by other work and threading together a chapter (feels more like a long blog post) on “Other People’s Computers” from old sources and the theme of supporting collaboration. The current draft is pasted below because that’s easier than extracting links to sources.

Another tangential observation about the group: I noted a fair amount of hostility toward Wikipedia, the Wikimedia Foundation, and Mediawiki on the notion that they have effectively sucked the air out of other potential projects and models of collaboration, even other wiki software. Of course I am a huge fan of Wikipedia — I think its centralization has allowed it to scale in a way not possible otherwise — it has made the community-centric collaboration pie bigger — and we are very fortunate that such a dominant service has gotten so much right, at least from a freedom perspective. However, the underlying criticism is not without merit, and I tried to incorporate a productive and very brief version of it into the draft.

Also see Mushon Zer-Aviv’s entertaining post on day 2.

Other People’s Computers

Partly because they’re location-transparent and web-integrated, browser apps support social interaction more easily than desktop apps.

Kragen Sitaker, “What’s wrong with HTTP”, http://lists.canonical.org/pipermail/kragen-tol/2006-November/000841.html

Much of what we call collaboration occurs on web sites (more generally, software services), particularly collaboration among many distributed users. Direct support for collaboration, and more broadly for social features, is simply easier in a centralized context. It is possible to imagine a decentralized Wikipedia or Facebook, but building such services with sufficient ease of use, features, and robustness to challenge centralized web sites is a very difficult challenge.

Why does this matter? The web is great for collaboration, let’s celebrate that! However, making it relatively easy for people to work together in the specific way offered by a web site owner is a rather impoverished vision of what the web (or more generally, digital networks) could enable, just as merely allowing people to run programs on their computers in the way program authors intended is an impoverished vision of personal computing.

Free software allows users to control their own computing and to help other users by retaining the ability to run, modify, and share software for any purpose. Whether the value of this autonomy is primarily ethical, as often framed by advocates of the term free software, or primarily practical, as often framed by advocates of the term open source, any threat to these freedoms has to be of deep concern to anyone interested in the future of collaboration, both in terms of what collaborations are possible and of what interests control and benefit from those collaborations.

Web sites and special-purpose hardware […] do not give me the same freedoms general-purpose computers do. If the trend were to continue to the extent the pundits project, more and more of what I do today with my computer will be done by special-purpose things and remote servers.

What does freedom of software mean in such an environment? Surely it’s not wrong to run a Web site without offering my software and databases for download. (Even if it were, it might not be feasible for most people to download them. IBM’s patent server has a many-terabyte database behind it.)

I believe that software — open-source software, in particular — has the potential to give individuals significantly more control over their own lives, because it consists of ideas, not people, places, or things. The trend toward special-purpose devices and remote servers could reverse that.

Kragen Sitaker, “people, places, things, and ideas”, http://lists.canonical.org/pipermail/kragen-tol/1999-January/000322.html

What are the prospects and strategies for keeping the benefits of free software in an age of collaboration mediated by software services? One strategy, argued for in “The equivalent of free software for online services” by Kragen Sitaker (see http://lists.canonical.org/pipermail/kragen-tol/2006-July/000818.html), is that centralized services need to be re-implemented as peer-to-peer services that can be run as free software on computers under users’ control. This is an extremely interesting strategy, but a very long-term one, for it is hard: it poses at least both a computer science challenge and a social challenge.

Abstinence from software services may be a naive and losing strategy in both the short and long term. Instead, we can both work on decentralization and attempt to build services that respect users’ autonomy:

Going places I don’t individually control — restaurants, museums, retail stores, public parks — enriches my life immeasurably. A definition of “freedom” where I couldn’t leave my own house because it was the only space I had absolute control over would not feel very free to me at all. At the same time, I think there are some places I just don’t want to go — my freedom and physical well-being wouldn’t be protected or respected there.

Similarly, I think that using network services makes my computing life fuller and more satisfying. I can do more things and be a more effective person by spring-boarding off the software on other peoples’ computers than just with my own. I may not control your email server, but I enjoy sending you email, and I think it makes both of our lives better.

And I think that just as we can define a level of personal autonomy that we expect in places that belong to other people or groups, we should be able to define a level of autonomy that we can expect when using software on other people’s computers. Can we make working on network services more like visiting a friends’ house than like being locked in a jail?

We’ve made a balance between the absolute don’t-use-other-people’s-computers argument and the maybe-it’s-OK-sometimes argument in the Franklin Street Statement. Time will tell whether we can craft a culture around Free Network Services that is respectful of users’ autonomy, such that we can use other computers with some measure of confidence.

Evan Prodromou, “RMS on Cloud Computing: ‘Stupidity’”, CC BY-SA, http://autonomo.us/2008/09/rms-on-cloud-computing-stupidity/

The Franklin Street Statement on Freedom and Network Services is an initial group attempt to distill the actions users, service providers (the “other people” here), and developers should take to retain the benefits of free software in an era of software services:

The current generation of network services or Software as a Service can provide advantages over traditional, locally installed software in ease of deployment, collaboration, and data aggregation. Many users have begun to rely on such services in preference to software provisioned by themselves or their organizations. This move toward centralization has powerful effects on software freedom and user autonomy.

On March 16, 2008, a workgroup convened at the Free Software Foundation to discuss issues of freedom for users given the rise of network services. We considered a number of issues, among them what impacts these services have on user freedom, and how implementers of network services can help or harm users. We believe this will be an ongoing conversation, potentially spanning many years. Our hope is that free software and open source communities will embrace and adopt these values when thinking about user freedom and network services. We hope to work with organizations including the FSF to provide moral and technical leadership on this issue.

We consider network services that are Free Software and which share Free Data as a good starting-point for ensuring users’ freedom. Although we have not yet formally defined what might constitute a ‘Free Service’, we do have suggestions that developers, service providers, and users should consider:

Developers of network service software are encouraged to:

  • Use the GNU Affero GPL, a license designed specifically for network service software, to ensure that users of services have the ability to examine the source or implement their own service.
  • Develop freely-licensed alternatives to existing popular but non-Free network services.
  • Develop software that can replace centralized services and data storage with distributed software and data deployment, giving control back to users.

Service providers are encouraged to:

  • Choose Free Software for their service.
  • Release customizations to their software under a Free Software license.
  • Make data and works of authorship available to their service’s users under legal terms and in formats that enable the users to move and use their data outside of the service. This means:
    • Users should control their private data.
    • Data available to all users of the service should be available under terms approved for Free Cultural Works or Open Knowledge.

Users are encouraged to:

  • Consider carefully whether to use software on someone else’s computer at all. Where it is possible, they should use Free Software equivalents that run on their own computer. Services may have substantial benefits, but they represent a loss of control for users and introduce several problems of freedom.
  • When deciding whether to use a network service, look for services that follow the guidelines listed above, so that, when necessary, they still have the freedom to modify or replicate the service without losing their own data.

Franklin Street Statement on Freedom and Network Services, CC BY-SA, http://autonomo.us/2008/07/franklin-street-statement/

As challenging as the Franklin Street Statement appears, additional issues must be addressed for maximum autonomy, including portable identifiers:

A Free Software Definition for the next decade should focus on the user’s overall autonomy: their ability not just to use and modify a particular piece of software, but their ability to bring their data and identity with them to new, modified software.

Such a definition would need to contain something like the following minimal principles:

  1. data should be available to the users who created it without legal restrictions or technological difficulty.
  2. any data tied to a particular user should be available to that user without technological difficulty, and available for redistribution under legal terms no more restrictive than the original terms.
  3. source code which can meaningfully manipulate the data provided under 1 and 2 should be freely available.
  4. if the service provider intends to cease providing data in a manner compliant with the first three terms, they should notify the user of this intent and provide a mechanism for users to obtain the data.
  5. a user’s identity should be transparent; that is, where the software exposes a user’s identity to other users, the software should allow forwarding to new or replacement identities hosted by other software.

Luis Villa, “Voting With Your Feet and Other Freedoms”, CC BY-SA, http://tieguy.org/blog/2007/12/06/voting-with-your-feet-and-other-freedoms/

Fortunately the oldest and at least until recently most ubiquitous network service — email — accommodates portable identifiers. (Not to mention that email is the lowest common denominator for much collaboration — sending attachments back and forth.) Users of a centralized email service like Gmail can retain a great deal of autonomy if they use an email address at a domain they control and merely route delivery to the service — though of course most users use the centralized provider’s domain.
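To make the routing concrete: pointing one’s own domain at a hosted mail provider is just a matter of DNS MX records, so the address stays portable even when the provider changes. A hypothetical zone fragment (the domain and address are illustrative; the targets shown are the inbound mail hosts Google documents for its hosted email service):

```
; Mail for alice@example.org is delivered to a hosted provider,
; but the identifier remains under the domain owner's control.
; Switching providers means editing these records, not the address.
example.org.  3600  IN  MX  10 aspmx.l.google.com.
example.org.  3600  IN  MX  20 alt1.aspmx.l.google.com.
```

The identifier (alice@example.org) and the service operating it are thus decoupled, which is exactly the portability the Villa principles ask for.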

It is worth noting that the more recent and widely used, if not ubiquitous, instant messaging protocol XMPP, as well as the brand new and little used Wave protocol, are architected similarly to email, though use of non-provider domains seems even less common, and in the case of Wave, Google is currently the only service provider.
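XMPP’s email-like delegation can be sketched the same way: a user’s identifier (JID) lives at their own domain, while DNS SRV records, as specified by the XMPP standards, point clients and federating servers at whichever host actually runs the service. A hypothetical fragment, with all names illustrative:

```
; alice@example.org remains the stable JID even if the hosting
; provider changes. 5222 is the standard XMPP client port and
; 5269 the standard server-to-server (federation) port.
_xmpp-client._tcp.example.org. 3600 IN SRV 5 0 5222 xmpp.provider.example.
_xmpp-server._tcp.example.org. 3600 IN SRV 5 0 5269 xmpp.provider.example.
```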

It may be valuable to assess software services from the perspective of community autonomy as well as user autonomy. The former may explicitly note requirements for the product of collaboration — non-private data, roughly — as well as service governance:

In cases where one accepts a centralized web application, should one demand that application be somehow constitutionally open? Some possible criteria:

  • All source code for the running service should be published under an open source license and developer source control available for public viewing.
  • All private data available for on-demand export in standard formats.
  • All collaboratively created data available under an open license (e.g., one from Creative Commons), again in standard formats.
  • In some cases, I am not sure how rare, the final mission of the organization running the service should be to provide the service rather than to make a financial profit, i.e., beholden to users and volunteers, not investors and employees. Maybe. Would I be less sanguine about the long term prospects of Wikipedia if it were for-profit? I don’t know of evidence for or against this feeling.

Mike Linksvayer, “Constitutionally open services”, CC0, https://gondwanaland.com/mlog/2006/07/06/constitutionally-open-services/

Software services are rapidly developing and subject to much hype — referred to by buzzwords such as cloud computing. However, some of the most potent means of encouraging autonomy may be relatively boring — for example, making it easier to maintain one’s own computer and deploy slightly customized software in a secure and foolproof fashion. Any such development helps traditional users of free software and makes doing computing on one’s own computer (which may be a “personal server” or virtual machine that one controls) more attractive.

Perhaps one of the most hopeful trends is relatively widespread deployment by end users of free software web applications like WordPress and MediaWiki. StatusNet, free software for microblogging, is attempting to replicate this adoption success, but also includes technical support for a form of decentralization (remote subscription) and a legal requirement for service providers to release modifications as free software via the AGPL.

This section barely scratches the surface of the technical and social issues raised by the convergence of so much of our computing, in particular computing that facilitates collaboration, to servers controlled by “other people”, in particular a few large service providers. The challenges of creating autonomy-respecting alternatives should not be understated.

One of those challenges is only indirectly technical: decentralization can make community formation more difficult. To the extent the collaboration we are interested in requires community, this is a challenge. However, a community that is easily formed but inauthentic and controlled will also not produce the kind of collaboration we are interested in.

We should not limit our imagination to the collaboration facilitated by the likes of Facebook, Flickr, Google Docs, Twitter, or other “Web 2.0” services. These are impressive, but then so was AOL two decades ago. We should not accept a future of collaboration mediated by centralized giants now, any more than we should have been, with hindsight, happy to accept information services dominated by AOL and its near peers.

Wikipedia is both held up as an exemplar of collaboration and a free-as-in-freedom service: both the code and the content of the service are accessible under free terms. It is also a huge example of community governance in many respects. And it is undeniably a category-exploding success: vastly bigger and useful in many more ways than any previous encyclopedia. Other software and services enabling autonomous collaboration should set their sights no lower — not to merely replace an old category, but to explode it.

However, Wikipedia (and its MediaWiki software) are not the end of the story. Merely using MediaWiki for a new project, while appropriate in many cases, is not magic pixie dust for enabling collaboration. Affordances for collaboration need to be built into many different types of software and services. Following Wikipedia’s lead in autonomy is a good idea, but many experiments should be encouraged in every other respect. One example could be the young and relatively domain-specific collaboration software that this book is being written with, Booki.

Software services have made “installation” of new software as simple as visiting a web page and adding social features as simple as a click, and they provide an easy ladder of adoption for mass collaboration. They also threaten autonomy at the individual and community level. While there are daunting challenges, meeting them means achieving “world domination” for freedom in the most important means of production — computer-mediated collaboration — something the free software movement failed to approach in the era of desktop office software.