Archive for January, 2010

Collaborative Futures 5

Saturday, January 23rd, 2010

We finished the text of Collaborative Futures on the book sprint’s fifth day and I added yet another chapter intended for the “future” section. This one may be the oddest in the whole book. You have to remember that I have a bit of an appreciation of leftish verbiage in the service of free software and nearby, and seeing the opportunity to also bundle in a rant against international apartheid … I ran with it. Copied below.

I’ll post more about the book’s contents, the sprint, and the Booki software later (but I can’t help noting now that I’m sad about not getting to a chapter on WikiNature). For now no new observations other than that Adam Hyde of FLOSS Manuals put together a really good group of people for the sprint. I enjoyed working with all of them tremendously and hope to do so again in some form. And thanks to Transmediale for hosting. And sad that I couldn’t stay in Berlin longer for Transmediale proper, in particular the Charlemagne Palestine concerts.

Check out Mushon Zer-Aviv’s great sprint finish writeup.

Solidarity

There is no guarantee that networked information technology will lead to the improvements in innovation, freedom, and justice that I suggest are possible. That is a choice we face as a society. The way we develop will, in significant measure, depend on choices we make in the next decade or so.

Yochai Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom

Postnationalism

Catherine Frost, in her 2006 paper Internet Galaxy Meets Postnational Constellation: Prospects for Political Solidarity After the Internet, evaluates the prospects for the emergence of postnational solidarities, abetted by Internet communications, leading to a change in the political order in which the responsibilities of the nation state are joined by other entities. Frost does not enumerate the possible entities, but surely they include entities supranational, transnational, international, and global in scope and of many different forms, not limited to the familiar democratic and corporate.

The verdict? Characteristics such as anonymity, agnosticism to human fatalities, and questionable potential for democratic engagement make it improbable that postnational solidarities with political salience will emerge from the Internet — anytime soon. However, Frost acknowledges that we could be looking in the wrong places, such as the dominant English-language web. Marginalized groups could find the Internet a more compelling venue for creating new solidarities. And this:

Yet we know that when things change in a digital age, they change fast. The future for political solidarity is not a simple thing to discern, but it will undoubtedly be an outcome of the practices and experiences we are now developing.

Could the collaboration mechanisms discussed in this book aid the formation of politically salient postnational solidarities? Significant usurpation of responsibilities of the nation state seems unlikely soon. Yet this does not bar the formation of communities that contest with the nation state for intensity of loyalty, in particular when their own collaboration is threatened by a nation state. As an example we can see global responses from free software developers and bloggers to software patents and censorship in single jurisdictions.

If political solidarities could arise from collaborative work and threats to it, then collaboration might alter the power relations of work, both globally and between worker and employer — at least incrementally.

Free Labor

Trade in goods between jurisdictions has become less restricted over the last half century — tariff and non-tariff barriers to trade have been greatly reduced. Capital flows have greatly increased.

While travel costs have decreased drastically, in theory giving any worker the ability to work wherever pay (or another desirable quality) is highest, in fact workers are not permitted the freedom that has been given to traders and capitalists. Workers in jurisdictions with less opportunity are as locked into politically institutionalized underemployment and poverty as were non-whites in Apartheid South Africa, while the populations of wealthy jurisdictions are as privileged as whites in the same milieu.

What does this have to do with collaboration? This system of labor is immobilized by politically determined discrimination. It is not likely this system will change without the formation of new postnational orders. However, it is conceivable that as collaboration becomes more economically important — as an increasing share of wealth is created via distributed collaboration — the inequalities of the current system could be mitigated. And that is simply because distributed collaboration does not require physical movement across borders.

Workers in privileged jurisdictions will object — do object — to competition from those born into less privilege. As did white workers to competition from blacks during the consolidation of Apartheid. However, it is also possible that open collaboration could alter relationships between some workers and employers in the workers’ favor both in local and global markets.

Control of the means of production

Open collaboration changes which activities are more efficient inside or outside of a firm. Could the power of workers relative to firms also be altered?

Intellectual property rights prevent mobility of employees insofar as their knowledge is locked into a proprietary standard that is owned by the employer. This factor is all the more important since most of the tools that programmers are working with are available as cheap consumer goods (computers, etc.). The company holds no advantage over the worker in providing these facilities (in comparison to the blue-collar operator referred to above whose knowledge is bound to the Fordist machine park). When the source code is closed behind copyrights and patents, however, large sums of money are required to access the software tools. In this way, the owner/firm gains the edge back over the labourer/programmer.

This is where the GPL comes in. The free license levels the playing field by ensuring that everyone has equal access to the source code. Or, putting it in Marxist-sounding terms, through free licenses the means of production are handed back to labour. […] By publishing software under free licences, the individual hacker is not merely improving his own reputation and employment prospects, as has been pointed out by Lerner and Tirole. He also contributes to establishing a labour market where the rules of the game are completely different, for him and for everyone else in his trade. It remains to be seen if this translates into better working conditions, higher salaries and other benefits associated with trade unions. At least theoretically, the case is strong. I got the idea from reading Glyn Moody’s study of the FOSS development model, where he states: “Because the ‘product’ is open source, and freely available, businesses must necessarily be based around a different kind of scarcity: the skills of the people who write and service that software.” (Moody, 2001, p.248) In other words, when the source code is made available to everyone under the GPL, the only thing that remains scarce is the skills needed to employ the software tools productively. Hence, the programmer gets an edge over the employer when they are bargaining over salary and working conditions.

It bears stressing that my reasoning needs to be substantiated with empirical data. Comparative research between employed free software programmers and those who work with proprietary software is required. Such a comparison must not focus exclusively on monetary aspects. As important is the subjective side of programming, for instance that hackers report having more fun when participating in free software projects than when they work with proprietary software (Lakhani & Wolf, 2005). Neither do I believe that this is the only explanation of why hackers use the GPL. No less important are the concerns about civil liberties and the anti-authoritarian ethos within the hacker subculture. In sum, hackers are much too heterogeneous a bunch for them all to be included under a single explanation. But I dare say that the labour perspective deserves more attention than it has been given by popular and scholarly critics of intellectual property till now. Both hackers and academic writers tend to formulate their critique of intellectual property law from a consumer rights horizon and borrow arguments from a liberal, political tradition. There are, of course, noteworthy exceptions. People like Slavoj Zizek and Richard Barbrook have reacted against the liberal ideology implicit in much talk about the Internet by courting the revolutionary rhetoric of the Second International instead. Their ideas are original and eye-catching and often full of insight. Nevertheless, their rhetoric sounds oddly out of place when applied to pragmatic hackers. Perhaps advocates of free software would do better to look for a counter-weight to liberalism in the reformist branch of the labour movement, i.e. in trade unionism. The ideals of free software are congruent with the vision laid down in the “Technology Bill of Rights”, written in 1981 by the International Association of Machinists:

”The new automation technologies and the sciences that underlie them are the product of a world-wide, centuries-long accumulation of knowledge. Accordingly, working people and their communities have a right to share in the decisions about, and the gains from, new technology” (Shaiken, 1986, p.272).

Johan Söderberg, Hackers GNUnited!, CC BY-SA, http://freebeer.fscons.org

Perhaps open collaboration can only be expected to slightly tip the balance of power between workers and employers and change measured wages and working conditions very little. However, it is conceivable, if fanciful, that control of the means of production could lead to a feeling of autonomy that empowers further action outside of the market.

Autonomous individuals and communities

Free Software and related methodologies can give individuals autonomy in their technology environments. They might also give individuals a measure of additional autonomy in the market (or increased ability to stand outside it). This is how Free and Open Source Software is almost always characterized when described in terms of freedom or autonomy: giving individual users freedom, or allowing organizations to avoid being held ransom to proprietary licenses.

However, communities that exist outside of the market and state obtain a much greater autonomy. These communities have no need for the freedoms discussed above, even if individual community members do. There have always been such communities, but they did not possess the ability to use open collaboration to produce wealth that significantly competes with, or even supplants, market production. This ability makes these autonomous organizations newly salient.

Furthermore, these autonomous communities (Debian and Wikipedia are the most obvious examples) are pushing new frontiers of governance necessary to scale their collaborative production. Knowledge gained in this process could inform and inspire other communities that could become reinvigorated and more effective through the implementation of open collaboration, including community governance. Such communities could even produce postnational solidarities, especially when attacked.

Do we know how to get from here to there? No. But only through experimentation will we find out. If a more collaborative future is possible, obtaining it depends on the choices we make today.

Collaborative Futures 4

Friday, January 22nd, 2010

Day 4 of the Collaborative Futures book sprint and I added yet another chapter intended for the “future” section, current draft copied below. I’m probably least happy with it, but perhaps I’m just tired. I hope it gets a good edit, but today (day 5) is the final day and we have lots to wrap up!

(Boring permissions note: I’m blogging whole chapter drafts before anyone else touches them, so they’re in the public domain like everything else original here. The book is licensed under CC BY-SA and many of the chapters, particularly in the first half of the book, have had multiple authors pretty much from the start.)

Another observation about the core sprint group of 5 writers, 1 facilitator, and 1 developer: although the sprint is hosted in Berlin, there are no Germans. However, there are three people living in Berlin (from Ireland, Spain, and New Zealand), two living in New York (one from there, another from Israel), one living in and from Croatia, and me, from Illinois and living in California.

I hope to squeeze in a bit of writing about postnationalism and collaboration today — hat tip to Mushon Zer-Aviv. Also see his day 4 post, and Postnational.org, one of his projects.

Beyond Education

Education has a complicated history, including swings between decentralization (e.g., the loose associations of students and teachers typifying some early European universities such as Oxford) and centralized control by the state or church. It’s easy to imagine that in some of these cases teachers had great freedom to collaborate with each other or that learning might be a collaboration among students and teacher, while in others, teachers would be told what to teach, and students would learn that, with little opportunity for collaboration.

Our current and unprecedented wealth has brought near universal literacy and enrollment in primary education in many societies, created impressive research universities, and increased enrollment in university and graduate programs. This apparent success masks that we are in an age of centralized control, driven by standards politically determined at the level of large jurisdictions and a model in which teachers teach students how to take tests and both students and teachers are consumers of educational materials created by large publishers. Current educational structures and practices do not take advantage of the possibilities offered by collaboration tools and methods, and in some cases are in opposition to the use of such tools.

Much as the disconnect between the technological ability to access and build upon scientific literature and the political and economic reality of closed access created the Open Access (OA) movement, the disconnect between what is possible and what is practiced in education has created collaborative responses.

Open Educational Resources

The Open Educational Resources (OER) movement encourages the availability of educational materials for free use and remixing — including textbooks and also any materials that facilitate learning. As in the case of OA, there is a strong push for materials to be published under liberal Creative Commons licenses and in formats amenable to reuse in order to maximize opportunities for latent collaboration, and in some cases to form the legal and technical basis for collaboration among large institutions.

OpenCourseWare (OCW) is the best known example of a large institutional collaboration in this space. Begun at MIT, the movement now includes over 200 universities and associated institutions with OCW programs, publishing course content and in many cases translating and reusing material from other OCW programs.

Connexions, hosted by Rice University, is an example of an OER platform facilitating large scale collaborative development and use of granular “course modules” which currently number over 15,000. The Connexions philosophy page is explicit about the role of collaboration in developing OER:

Connexions is an environment for collaboratively developing, freely sharing, and rapidly publishing scholarly content on the Web. Our Content Commons contains educational materials for everyone — from children to college students to professionals — organized in small modules that are easily connected into larger collections or courses. All content is free to use and reuse under the Creative Commons “attribution” license.

Content should be modular and non-linear
Most textbooks are a mass of information in linear format: one topic follows after another. However, our brains are not linear – we learn by making connections between new concepts and things we already know. Connexions mimics this by breaking down content into smaller chunks, called modules, that can be linked together and arranged in different ways. This lets students see the relationships both within and between topics and helps demonstrate that knowledge is naturally interconnected, not isolated into separate classes or books.
Sharing is good
Why re-invent the wheel? When people share their knowledge, they can select from the best ideas to create the most effective learning materials. The knowledge in Connexions can be shared and built upon by all because it is reusable:

  • technologically: we store content in XML, which ensures that it works on multiple computer platforms now and in the future.
  • legally: the Creative Commons open-content licenses make it easy for authors to share their work – allowing others to use and reuse it legally – while still getting recognition and attribution for their efforts.
  • educationally: we encourage authors to write each module to stand on its own so that others can easily use it in different courses and contexts. Connexions also allows instructors to customize content by overlaying their own set of links and annotations. Please take the Connexions Tour and see the many features in Connexions.
Collaboration is encouraged
Just as knowledge is interconnected, people don’t live in a vacuum. Connexions promotes communication between content creators and provides various means of collaboration. Collaboration helps knowledge grow more quickly, advancing the possibilities for new ideas from which we all benefit.

Connexions – Philosophy, CC BY, http://cnx.org/aboutus/

Beyond the institution

OER is not only used in an institutional context — it is especially a boon for self-learning. OCW materials are useful for self-learners, but OCW programs generally do not actively facilitate collaboration with self-learners. A platform like Connexions is more amenable to such collaboration, while wiki-based OER platforms such as Wikiversity and WikiEducator have an even lower barrier to contribution, enabling self-learners (and of course teachers and students in more traditional settings) to collaborate directly on the platform and participate in the development and repurposing of educational materials.

Self-learning only goes so far. Why not apply the lessons of collaboration directly to the learning process, helping self-learners help each other? This is what a project called Peer 2 Peer University has set out to do:

The mission of P2PU is to leverage the power of the Internet and social software to enable communities of people to support learning for each other. P2PU combines open educational resources, structured courses, and recognition of knowledge/learning in order to offer high-quality low-cost education opportunities. It is run and governed by volunteers.

Scaling educational collaboration

As in the case of science, delivering the full impact of the possibilities of modern collaboration tools requires more than simply using the tools to create more resources. For the widest adoption, collaboratively created and curated materials must meet state-mandated standards and include accompanying assessment mechanisms.

While educational policy changes may be required, perhaps the best way for open education communities to convince policymakers to make these changes is to develop and adopt even more sophisticated collaboration tools, for example reputation systems for collaborators and quality metrics, collaborative filtering, and other discovery mechanisms for educational materials. One example is “lenses” at Connexions (see http://cnx.org/lenses), which allow one to browse resources specifically endorsed by an organization or individual that one trusts.
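The idea behind lenses can be sketched in a few lines: filter a catalog of modules down to those endorsed by at least one source the reader trusts. The endorsement data, module ids, and organization names below are invented for illustration, not Connexions data.

```python
# Hypothetical endorsement data: which organizations endorse which modules.
endorsements = {
    "m1001": {"IEEE", "OpenStax"},
    "m1002": {"OpenStax"},
    "m1003": set(),  # no endorsements yet
}

def lens(trusted, catalog):
    # Keep only modules endorsed by at least one trusted source,
    # using set intersection to test for overlap.
    return sorted(m for m, endorsers in catalog.items() if endorsers & trusted)

print(lens({"IEEE"}, endorsements))      # ['m1001']
print(lens({"OpenStax"}, endorsements))  # ['m1001', 'm1002']
```

A real system would of course layer identity, provenance, and discovery on top of this, but the core operation is just such a trust-scoped filter.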

Again, similar to science, clearing the external barriers to adoption of collaboration may result in general breakthroughs in collaboration tools and methods.

Collaborative Futures 3

Thursday, January 21st, 2010

Day 3 of the Collaborative Futures book sprint and we’re close to 20,000 words. I added another chapter intended for the “future” section, current draft copied below. It is very much a scattershot survey based on my paying partial attention for several years. There’s nothing remotely new apart from recording a favorite quote from my colleague John Wilbanks that doesn’t seem to have been written down before.

Continuing a tradition, another observation about the sprint group and its discussions: an obsession with attribution. A current draft says attribution is “not only socially acceptable and morally correct, it is also intelligent.” People love talking about this and glomming onto all kinds of other issues, including participation and identity. I’m counter-obsessed (which Michael Mandiberg pointed out means I’m still obsessed).

Attribution is only interesting to me insofar as it is a side effect (and thus low cost) and adds non-moralistic value. In the ideal case, it is automated, as in the revision histories of wiki articles and version control systems. In the more common case, adding attribution information is a service to the reader — never mind the author being attributed.

I’m also interested in attribution (and similar) metadata that can easily be copied with a work, making its use closer to automated — Creative Commons provides such metadata if a user choosing a license provides attribution information and CC license deeds use that metadata to provide copy&pastable attribution HTML, hopefully starting a beneficent cycle.
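A rough sketch of how such machine-readable attribution metadata can be put to work: ccREL expresses attribution with RDFa properties such as cc:attributionName and cc:attributionURL, and a small script can scrape those to rebuild copy&pastable attribution HTML. The sample markup and helper below are illustrative only; real pages vary in how (and whether) they embed this metadata.

```python
from html.parser import HTMLParser

class AttributionParser(HTMLParser):
    # Collects ccREL-style attribution metadata embedded as RDFa attributes.
    def __init__(self):
        super().__init__()
        self.attribution = {}
        self._capturing = False  # True while inside the attribution-name element

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "cc:attributionURL" in (a.get("rel") or ""):
            self.attribution["url"] = a.get("href")
        if a.get("property") == "cc:attributionName":
            self._capturing = True

    def handle_data(self, data):
        if self._capturing:
            self.attribution["name"] = data.strip()
            self._capturing = False

def attribution_html(page_html):
    # Build copy-and-pasteable attribution HTML from the extracted metadata.
    p = AttributionParser()
    p.feed(page_html)
    name = p.attribution.get("name", "Unknown")
    url = p.attribution.get("url", "#")
    return '<a href="%s">%s</a>' % (url, name)

sample = ('<a rel="cc:attributionURL" property="cc:attributionName" '
          'href="https://example.org/alice">Alice</a> / CC BY')
print(attribution_html(sample))
# prints: <a href="https://example.org/alice">Alice</a>
```

This is the shape of the beneficent cycle: a license chooser emits the metadata once, and every downstream tool can regenerate correct attribution from it.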

Admittedly I’ve also said many times that I think attribution, or rather requiring (or merely providing in the case of public domain content) attribution by link specifically, is an undersold term of the Creative Commons licenses — links are the currency of the web, and this is an easy way to say “please use my work and link to me!”

Mushon Zer-Aviv continues his tradition for day 3 of a funny and observant post, but note that he conflates attribution and licensing, perhaps to make a point:

The people in the room have quite strong feelings about concepts of attribution. What is pretty obvious by now is that both those who elevate the importance of proper crediting to the success of collaboration and those who dismiss it altogether are both quite equally obsessed about it. The attribution we chose for the book is CC-BY-SA oh and maybe GPL too… Not sure… Actually, I guess I am not the most attribution obsessed guy in the room.

Science 2.0

Science is a prototypical example of collaboration, from closely coupled collaboration within a lab to the very loosely coupled collaboration of the grand scientific enterprise over centuries. However, science has been slow to adopt modern tools and methods for collaboration. Efforts to adopt or translate new tools and methods have been broadly (and loosely) characterized as “Science 2.0” and “Open Science”, very roughly corresponding to “Web 2.0” and “Open Source”.

Open Access (OA) publishing is an effort to remove a major barrier to distributed collaboration in science — the high price of journal articles, effectively limiting access to researchers affiliated with wealthy institutions. Access to Knowledge (A2K) emphasizes the equality and social justice aspects of opening access to the scientific literature.

The OA movement has met with substantial and increasing success recently. The Directory of Open Access Journals (see http://www.doaj.org) lists 4583 journals as of 2010-01-20. The Public Library of Science’s top journals are in the first tier of publications in their fields. Traditional publishers are investing in OA, such as Springer’s acquisition of large OA publisher BioMed Central, or experimenting with OA, for example Nature Precedings.

In the longer term OA may lead to improving the methods of scientific collaboration, e.g. peer review, and allowing new forms of meta-collaboration. An early example of the former is PLoS ONE, a rethinking of the journal as an electronic publication without a limitation on the number of articles published and with the addition of user rating and commenting. An example of the latter would be machine analysis and indexing of journal articles, potentially allowing all scientific literature to be treated as a database, and therefore queryable — at least all OA literature. These more sophisticated applications of OA often require not just access, but permission to redistribute and manipulate, thus a rapid movement to publication under a Creative Commons license that permits any use with attribution — a practice followed by both PLoS and BioMed Central.
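To make the literature-as-a-database idea concrete, here is a toy sketch: build an inverted index over article titles and query it by keyword. The article records are invented stand-ins for an OA corpus; a real system would index full text and structured metadata, which is exactly why redistribution and manipulation rights matter.

```python
from collections import defaultdict

# Invented article records standing in for an OA corpus.
articles = [
    {"id": "pone.0001", "title": "Gene expression in zebrafish"},
    {"id": "pone.0002", "title": "Protein folding simulations"},
    {"id": "pone.0003", "title": "Zebrafish protein interactions"},
]

def build_index(records):
    # Map each lowercased title word to the ids of articles containing it.
    index = defaultdict(set)
    for rec in records:
        for word in rec["title"].lower().split():
            index[word].add(rec["id"])
    return index

index = build_index(articles)
print(sorted(index["zebrafish"]))  # ['pone.0001', 'pone.0003']
print(sorted(index["protein"]))    # ['pone.0002', 'pone.0003']
```

Even this trivial index answers a question no single journal issue can: which articles, across publishers, touch a given topic.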

Scientists have also adopted web tools to enhance collaboration within a working group as well as to facilitate distributed collaboration. Wikis and blogs have been repurposed as open lab notebooks under the rubric of “Open Notebook Science”. Connotea is a tagging platform (they call it “reference management”) for scientists. These tools help “scale up” and direct the scientific conversation, as explained by Michael Nielsen:

You can think of blogs as a way of scaling up scientific conversation, so that conversations can become widely distributed in both time and space. Instead of just a few people listening as Terry Tao muses aloud in the hall or the seminar room about the Navier-Stokes equations, why not have a few thousand talented people listen in? Why not enable the most insightful to contribute their insights back?

Stepping back, what tools like blogs, open notebooks and their descendants enable is filtered access to new sources of information, and to new conversation. The net result is a restructuring of expert attention. This is important because expert attention is the ultimate scarce resource in scientific research, and the more efficiently it can be allocated, the faster science can progress.

Michael Nielsen, “Doing science online”, http://michaelnielsen.org/blog/doing-science-online/

OA and adoption of web tools are only the first steps toward utilizing digital networks for scientific collaboration. Science is increasingly computational and data-intensive: access to a completed journal article may not contribute much to allowing other researchers to build upon one’s work — that requires publication of all code and data used during the research that produced the paper. Publishing the entire “research compendium” under appropriate terms (e.g. usually public domain for data, a free software license for software, and a liberal Creative Commons license for articles and other content) and in open formats has recently been called “reproducible research” — in computational fields, the publication of such a compendium gives other researchers all of the tools they need to build upon one’s work.

Standards are also very important for enabling scientific collaboration, and not just coarse standards like RSS. The Semantic Web and in particular ontologies have sometimes been ridiculed by consumer web developers, but they are necessary for science. How can one treat the world’s scientific literature as a database if it isn’t possible to identify, for example, a specific chemical or gene, and agree on a name for the chemical or gene in question that different programs can use interoperably? The biological sciences have taken a lead in implementation of semantic technologies, from ontology development and semantic databases to inline web page annotation using RDFa.
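A toy illustration of why shared identifiers matter: map the many names a gene or chemical goes by to one canonical identifier, so that independently written programs agree on what they are talking about. The synonym table and identifier scheme below are invented; real ontologies and databases play this role at scale, with curation and versioning this sketch omits.

```python
# Hypothetical synonym table: many surface names, one canonical identifier.
SYNONYMS = {
    "p53": "GENE:TP53",
    "tp53": "GENE:TP53",
    "tumor protein p53": "GENE:TP53",
    "aspirin": "CHEM:ASA",
    "acetylsalicylic acid": "CHEM:ASA",
}

def canonical_id(name):
    # Normalize case and whitespace, then look up the shared identifier.
    key = " ".join(name.lower().split())
    return SYNONYMS.get(key)

# Two programs using different names still agree on the entity.
assert canonical_id("P53") == canonical_id("Tumor  protein p53")
print(canonical_id("Aspirin"))  # CHEM:ASA
```

Once every tool resolves names through the same table (or ontology), their outputs can be joined like rows in a database, which is precisely what querying the literature requires.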

Of course all of science, even most of science, isn’t digital. Collaboration may require sharing of physical materials. But just as online stores make shopping easier, digital tools can make sharing of scientific materials easier. One example is the development of standardized Materials Transfer Agreements accompanied by web-based applications and metadata, potentially a vast improvement over the current choice between ad hoc sharing and highly bureaucratized distribution channels.

Somewhere between open science and business (both as in for-profit business and business as usual) is “Open Innovation” which refers to a collection of tools and methods for enabling more collaboration, for example crowdsourcing of research expertise (a company called InnoCentive is a leader here), patent pools, end-user innovation (documented especially by Erik von Hippel in Democratizing Innovation), and wisdom of the crowds methods such as prediction markets.

Reputation is an important question for many forms of collaboration, but particularly in science, where careers are determined primarily by one narrow metric of reputation — publication. If the above phenomena are to reach their full potential, they will have to be aligned with scientific career incentives. This means new reputation systems that take into account, for example, re-use of published data and code, and the impact of granular online contributions, must be developed and adopted.

From the grand scientific enterprise to business enterprise, modern collaboration tools hold great promise for increasing the rate of discovery, which sounds prosaic, but may be our best tool for solving our most vexing problems. John Wilbanks, Vice President for Science at Creative Commons, often makes the point like this: “We don’t have any idea how to solve cancer, so all we can do is increase the rate of discovery so as to increase the probability we’ll make a breakthrough.”

Science 2.0 also holds great promise for allowing the public to access current science, and even in some cases collaborate with professional researchers. The effort to apply modern collaboration tools to science may even increase the rate of discovery of innovations in collaboration!

Collaborative Futures 2

Wednesday, January 20th, 2010

Day 2 of the Collaborative Futures book sprint saw the writing of a number of chapters and the creation of a much more fleshed out table of contents. I spent too much time interrupted by other work and threading together a chapter (feels more like a long blog post) on “Other People’s Computers” from old sources and the theme of supporting collaboration. The current draft is pasted below because that’s easier than extracting links to sources.

Another tangential observation about the group: I noted a fair amount of hostility toward Wikipedia, the Wikimedia Foundation, and MediaWiki, rooted in the notion that they have effectively sucked the air out of other potential projects and models of collaboration, even other wiki software. Of course I am a huge fan of Wikipedia — I think its centralization has allowed it to scale in a way not possible otherwise — it has made the community-centric collaboration pie bigger — and we are very fortunate that such a dominant service has gotten so much right, at least from a freedom perspective. However, the underlying criticism is not without merit, and I tried to incorporate a productive and very brief version of it into the draft.

Also see Mushon Zer-Aviv’s entertaining post on day 2.

Other People’s Computers

Partly because they’re location-transparent and web-integrated, browser apps support social interaction more easily than desktop apps.

Kragen Sitaker, “What’s wrong with HTTP”, http://lists.canonical.org/pipermail/kragen-tol/2006-November/000841.html

Much of what we call collaboration occurs on web sites (more generally, software services), particularly collaboration among many distributed users. Direct support for collaboration, and more broadly for social features, is simply easier in a centralized context. It is possible to imagine a decentralized Wikipedia or Facebook, but building such services with sufficient ease of use, features, and robustness to challenge centralized web sites is very difficult.

Why does this matter? The web is great for collaboration, let’s celebrate that! However, making it relatively easy for people to work together in the specific way offered by a web site owner is a rather impoverished vision of what the web (or more generally, digital networks) could enable, just as merely allowing people to run programs on their computers in the way program authors intended is an impoverished vision of personal computing.

Free software allows users to control their own computing and to help other users by retaining the ability to run, modify, and share software for any purpose. Whether the value of this autonomy is primarily ethical, as often framed by advocates of the term free software, or primarily practical, as often framed by advocates of the term open source, any threat to these freedoms has to be of deep concern to anyone interested in the future of collaboration, both in terms of what collaborations are possible and what interests control and benefit from those collaborations.

Web sites and special-purpose hardware […] do not give me the same freedoms general-purpose computers do. If the trend were to continue to the extent the pundits project, more and more of what I do today with my computer will be done by special-purpose things and remote servers.

What does freedom of software mean in such an environment? Surely it’s not wrong to run a Web site without offering my software and databases for download. (Even if it were, it might not be feasible for most people to download them. IBM’s patent server has a many-terabyte database behind it.)

I believe that software — open-source software, in particular — has the potential to give individuals significantly more control over their own lives, because it consists of ideas, not people, places, or things. The trend toward special-purpose devices and remote servers could reverse that.

Kragen Sitaker, “people, places, things, and ideas”, http://lists.canonical.org/pipermail/kragen-tol/1999-January/000322.html

What are the prospects and strategies for keeping the benefits of free software in an age of collaboration mediated by software services? One strategy, argued for in “The equivalent of free software for online services” by Kragen Sitaker (see http://lists.canonical.org/pipermail/kragen-tol/2006-July/000818.html), is that centralized services need to be re-implemented as peer-to-peer services that can be run as free software on computers under users’ control. This is an extremely interesting strategy, but a very long-term one, for it is hard: at once both a computer science and a social challenge.

Abstinence from software services may be a naive and losing strategy in both the short and long term. Instead, we can both work on decentralization and attempt to build services that respect users’ autonomy:

Going places I don’t individually control — restaurants, museums, retail stores, public parks — enriches my life immeasurably. A definition of “freedom” where I couldn’t leave my own house because it was the only space I had absolute control over would not feel very free to me at all. At the same time, I think there are some places I just don’t want to go — my freedom and physical well-being wouldn’t be protected or respected there.

Similarly, I think that using network services makes my computing life fuller and more satisfying. I can do more things and be a more effective person by spring-boarding off the software on other people’s computers than just with my own. I may not control your email server, but I enjoy sending you email, and I think it makes both of our lives better.

And I think that just as we can define a level of personal autonomy that we expect in places that belong to other people or groups, we should be able to define a level of autonomy that we can expect when using software on other people’s computers. Can we make working on network services more like visiting a friend’s house than like being locked in a jail?

We’ve made a balance between the absolute don’t-use-other-people’s-computers argument and the maybe-it’s-OK-sometimes argument in the Franklin Street Statement. Time will tell whether we can craft a culture around Free Network Services that is respectful of users’ autonomy, such that we can use other computers with some measure of confidence.

Evan Prodromou, “RMS on Cloud Computing: “Stupidity””, CC BY-SA, http://autonomo.us/2008/09/rms-on-cloud-computing-stupidity/

The Franklin Street Statement on Freedom and Network Services is an initial group attempt to distill the actions users, service providers (the “other people” here), and developers should take to retain the benefits of free software in an era of software services:

The current generation of network services or Software as a Service can provide advantages over traditional, locally installed software in ease of deployment, collaboration, and data aggregation. Many users have begun to rely on such services in preference to software provisioned by themselves or their organizations. This move toward centralization has powerful effects on software freedom and user autonomy.

On March 16, 2008, a workgroup convened at the Free Software Foundation to discuss issues of freedom for users given the rise of network services. We considered a number of issues, among them what impacts these services have on user freedom, and how implementers of network services can help or harm users. We believe this will be an ongoing conversation, potentially spanning many years. Our hope is that free software and open source communities will embrace and adopt these values when thinking about user freedom and network services. We hope to work with organizations including the FSF to provide moral and technical leadership on this issue.

We consider network services that are Free Software and which share Free Data as a good starting-point for ensuring users’ freedom. Although we have not yet formally defined what might constitute a ‘Free Service’, we do have suggestions that developers, service providers, and users should consider:

Developers of network service software are encouraged to:

  • Use the GNU Affero GPL, a license designed specifically for network service software, to ensure that users of services have the ability to examine the source or implement their own service.
  • Develop freely-licensed alternatives to existing popular but non-Free network services.
  • Develop software that can replace centralized services and data storage with distributed software and data deployment, giving control back to users.

Service providers are encouraged to:

  • Choose Free Software for their service.
  • Release customizations to their software under a Free Software license.
  • Make data and works of authorship available to their service’s users under legal terms and in formats that enable the users to move and use their data outside of the service. This means:
    • Users should control their private data.
    • Data available to all users of the service should be available under terms approved for Free Cultural Works or Open Knowledge.

Users are encouraged to:

  • Consider carefully whether to use software on someone else’s computer at all. Where it is possible, they should use Free Software equivalents that run on their own computer. Services may have substantial benefits, but they represent a loss of control for users and introduce several problems of freedom.
  • When deciding whether to use a network service, look for services that follow the guidelines listed above, so that, when necessary, they still have the freedom to modify or replicate the service without losing their own data.

Franklin Street Statement on Freedom and Network Services, CC BY-SA, http://autonomo.us/2008/07/franklin-street-statement/

As challenging as the Franklin Street Statement appears, additional issues must be addressed for maximum autonomy, including portable identifiers:

A Free Software Definition for the next decade should focus on the user’s overall autonomy: their ability not just to use and modify a particular piece of software, but their ability to bring their data and identity with them to new, modified software.

Such a definition would need to contain something like the following minimal principles:

  1. data should be available to the users who created it without legal restrictions or technological difficulty.
  2. any data tied to a particular user should be available to that user without technological difficulty, and available for redistribution under legal terms no more restrictive than the original terms.
  3. source code which can meaningfully manipulate the data provided under 1 and 2 should be freely available.
  4. if the service provider intends to cease providing data in a manner compliant with the first three terms, they should notify the user of this intent and provide a mechanism for users to obtain the data.
  5. a user’s identity should be transparent; that is, where the software exposes a user’s identity to other users, the software should allow forwarding to new or replacement identities hosted by other software.

Luis Villa, “Voting With Your Feet and Other Freedoms”, CC BY-SA, http://tieguy.org/blog/2007/12/06/voting-with-your-feet-and-other-freedoms/

Fortunately the oldest and, at least until recently, most ubiquitous network service — email — accommodates portable identifiers. (Not to mention that email is the lowest common denominator for much collaboration — sending attachments back and forth.) Users of a centralized email service like Gmail can retain a great deal of autonomy if they use an email address at a domain they control and merely route delivery to the service — though of course most users use the centralized provider’s domain.

It is worth noting that the more recent and widely used, if not ubiquitous, instant messaging protocol XMPP, as well as the brand new and little used Wave protocol, are architected similarly to email, though use of non-provider domains seems even less common, and in the case of Wave, Google is currently the only service provider.
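The domain-based portability described above comes down to DNS configuration. The following zone-file fragment is a sketch only — the domain and provider hostnames are hypothetical, not taken from the post:

```zone
; Illustrative zone fragment for a user-controlled domain, example.com.
; MX routes mail for alice@example.com to a hosted provider; the user keeps
; the address, and repointing MX later moves the mailbox without changing it.
example.com.                   IN MX  10 mail.hosted-provider.example.

; XMPP uses analogous SRV records (per RFC 6120), so alice@example.com can
; also serve as a federated chat identifier hosted by a chosen provider.
_xmpp-client._tcp.example.com. IN SRV 5 0 5222 xmpp.hosted-provider.example.
_xmpp-server._tcp.example.com. IN SRV 5 0 5269 xmpp.hosted-provider.example.
```

In both cases the identifier (the part after the @) stays under the user’s control even though the service runs on someone else’s computers.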

It may be valuable to assess software services from the perspective of community autonomy as well as user autonomy. The former may explicitly note requirements for the product of collaboration — non-private data, roughly — as well as service governance:

In cases where one accepts a centralized web application, should one demand that the application be somehow constitutionally open? Some possible criteria:

  • All source code for the running service should be published under an open source license and developer source control available for public viewing.
  • All private data available for on-demand export in standard formats.
  • All collaboratively created data available under an open license (e.g., one from Creative Commons), again in standard formats.
  • In some cases, I am not sure how rare, the final mission of the organization running the service should be to provide the service rather than to make a financial profit, i.e., beholden to users and volunteers, not investors and employees. Maybe. Would I be less sanguine about the long term prospects of Wikipedia if it were for-profit? I don’t know of evidence for or against this feeling.

Mike Linksvayer, “Constitutionally open services”, CC0, https://gondwanaland.com/mlog/2006/07/06/constitutionally-open-services/

Software services are rapidly developing and subject to much hype — referred to by buzzwords such as cloud computing. However, some of the most potent means of encouraging autonomy may be relatively boring — for example, making it easier to maintain one’s own computer and deploy slightly customized software in a secure and foolproof fashion. Any such development helps traditional users of free software and makes doing computing on one’s own computer (which may be a “personal server” or virtual machine that one controls) more attractive.

Perhaps one of the most hopeful trends is relatively widespread deployment by end users of free software web applications like WordPress and MediaWiki. StatusNet, free software for microblogging, is attempting to replicate this adoption success, but also includes technical support for a form of decentralization (remote subscription) and a legal requirement for service providers to release modifications as free software via the AGPL.

This section barely scratches the surface of the technical and social issues raised by the convergence of so much of our computing, in particular computing that facilitates collaboration, to servers controlled by “other people”, in particular a few large service providers. The challenges of creating autonomy-respecting alternatives should not be underestimated.

One of those challenges is only indirectly technical: decentralization can make community formation more difficult. To the extent the collaboration we are interested in requires community, this is a challenge. However, easily formed but inauthentic and controlled community also will not produce the kind of collaboration we are interested in.

We should not limit our imagination to the collaboration facilitated by the likes of Facebook, Flickr, Google Docs, Twitter, or other “Web 2.0” services. These are impressive, but then so was AOL two decades ago. We should not accept a future of collaboration mediated by centralized giants now, any more than we should have been, with hindsight, happy to accept information services dominated by AOL and its near peers.

Wikipedia is both held up as an exemplar of collaboration and is a free-as-in-freedom service: both the code and the content of the service are accessible under free terms. It is also a huge example of community governance in many respects. And it is undeniably a category-exploding success: vastly bigger and useful in many more ways than any previous encyclopedia. Other software and services enabling autonomous collaboration should set their sights no lower — not to merely replace an old category, but to explode it.

However, Wikipedia (and its MediaWiki software) is not the end of the story. Merely using MediaWiki for a new project, while appropriate in many cases, is not magic pixie dust for enabling collaboration. Affordances for collaboration need to be built into many different types of software and services. Following Wikipedia’s lead in autonomy is a good idea, but many experiments should be encouraged in every other respect. One example could be the young and relatively domain-specific collaboration software that this book is being written with, Booki.

Software services have made “installation” of new software as simple as visiting a web page and social features a click away, and they provide an easy ladder of adoption for mass collaboration. They also threaten autonomy at the individual and community level. While there are daunting challenges, meeting them means achieving “world domination” for freedom in the most important means of production — computer-mediated collaboration — something the free software movement failed to approach in the era of desktop office software.

Collaborative Futures 1

Monday, January 18th, 2010

Day 1 of the Collaborative Futures book sprint was spent with the participants introducing themselves and their relevant projects and thoughts, grouping of points of interest recorded on sticky notes by all during the introduction, and distillation into a high level table of contents.

The other participants had too many interesting things to say to catalog here — check out their sites:

Incidentally, I was fairly pleased to see 5 participants running Linux (counting Adam Hyde, who doesn’t seem to have a blog, and me) and only 2 running OS X. All also are doing interesting Creative Commons licensed projects, not to mention mostly avoiding licenses with the NonCommercial term.

A good portion of the introductory discussion concerned free software and free culture, leading to a discussion of how to include them in the table of contents — the tentative decision is to not include them explicitly, as they would be referenced in various ways throughout. I believe the tentative high level table of contents looks like this:

This doesn’t adequately give an impression of much progress on day 1 — I think we’re in a fairly good position to begin writing chapters tomorrow morning, and we finished right at midnight.

Also see day 0 posts from Michael Mandiberg, Mushon Zer-Aviv, and me.

Collaborative Futures 0

Sunday, January 17th, 2010

FLOSS Manuals has produced numerous excellent free manuals for free software, as the name implies. Now, led by the excellent Adam Hyde, they’re branching out to produce free books on other subjects via their approximately sprint+wiki methodology. Appropriately, and recursively, one of these books, maybe the first, addresses the future of collaboration, to be titled Collaborative Futures.

I’ve arrived in Berlin to help write that book over the next five days — with several others in person and hopefully a significant online contingent. (I understand online participation instructions will be published Tuesday, will link to them here.)

I think I’ve met Adam Hyde a few times before, but first had significant conversations at Wikimania last year (check out his presentation, lots of deep observations about cultural production and freedom). When he later emailed to recruit me to this book sprint, the subject was to be the future of free culture. That would be a fine book, but I’m excited about the change, if only because the future of collaboration may be the most important determinant of how free culture is, as I’ve written for another book project:

Generally culture is much more varied than software, and the success of free culture projects relative to free software projects may reflect this. It seems that free culture is at least a decade behind free software, with at least one major exception—Wikipedia. Notably, Wikipedia to a much greater extent than most cultural works has requirements for mass collaboration and maintenance similar to those of software. Even more notably, Wikipedia has completely transformed a sector in a way that free software has not.

One, perhaps the, key question for free culture advocates is how more cultural production can gain WikiNature—made through wiki-like processes of community curation, or more broadly, peer production. To the extent this can be done, free culture may “win” faster than free software—for consuming free culture does not require installing software with dependencies, in many cases replacing an entire operating system, and contributing often does not require as specialized skills as contributing to free software often does.

However, the import of the future of collaboration for freedom goes well beyond its import for free culture, and indeed, its import goes well beyond freedom. Perhaps nobody other than myself will have noticed the relevance of many of the themes I’ve written about at this blog, but for anyone who has, you may particularly enjoy an interview with Mushon Zer-Aviv, one of the other sprint participants.

Photos and an interview from my only previous visit to Berlin in October, 2007.