
Mozilla $300m/year for freedom

Thursday, December 22nd, 2011

More Mozilla ads by Henrik Moltke / CC BY

Congratulations to Mozilla on their $300m/year deal with Google, which will more than double current annual revenue. I’ve always thought people predicting doom for Mozilla if Google failed to renew were all wrong — others would be happy to pay for the default search position; probably for less, since Microsoft, Yahoo, and others make less than Google per ad view, but it’d still be a very substantial amount — and the linked article hints that a Microsoft bid drove the price up.

There’s always a risk that Mozilla won’t spend the money well, but I’m pretty confident that they will. Firefox is excellent, and in 2011 has gotten more excellent, faster, and I think many of the other projects they’re doing are really important, and on the right track (insofar as I’m qualified to discern, which is not much), for example BrowserID. Even in small and hopelessly annoying things, like licensing, I think Mozilla is doing good. (Bias: Mozilla has donated to my employer.)

I’m no longer enthused about the possibility of huge resources for progress toward Wikimedia’s vision from advertising on Wikipedia. Since I was last on that bandwagon, it has become even less of a possibility in anything but the distant future: Wikimedia’s donation campaigns have gone very well, adequately funding its operating mission, and lack of advertising has become even more part of Wikimedia’s messaging; I’ve also become more concerned (not specifically with regard to Wikimedia) about the institutional corruption risks previously blogged by Peter McCluskey and Timothy B. Lee. (Note these objections don’t apply to Mozilla: its significant revenue has always been advertising-based; very roughly, its revenues are already 10x those of Wikimedia; and it is also building up an individual donor program, which I agree is often the healthiest revenue for a nonprofit.)

But I still very much think freedom needs massive, ongoing resource infusions, in the right institutional framework. I celebrate the tremendous benefits the FLOSS community achieves without massive, concentrated, ongoing resource infusions, but I also admit that the web likely would be much worse, much less webby, and much less free without concentrated resources at Mozilla over the last several years.

Thank you Mozillians, and congratulations. I have very high expectations for your contributions over the next years to the web and society, in particular where more freedom and security are obviously needed such as mobile and software services. Such would be just a start. As computation permeates everything, and digital freedom becomes the most important political issue, the resources of many Mozillas are needed. More on that, soon.

7 FLOSS trends of the past 37.5 months

Monday, September 12th, 2011

A retrospective, from-memory, high-level summary of trends in Free/Libre/Open Source Software over the last 37.5* months:

Design. Many major free software projects have turned in some form to design[er]-driven UX/UI/look-and-feel/etc. Not all (any?) are acclaimed successes yet, but in a few years “open design” may go from paradoxical to hot.

Diversity. Lack thereof has become recognized as a major impediment to FLOSS potential. Existing cultures will change as participation expands and vice versa.

Distributed version control systems. Have changed the way many developers and projects work.

Governance. “Open by rule” governance that treats all participants equally and is run transparently has become recognized as a crucial part of making FLOSS FLOSSY, in addition to (and congruent with) FLOSS licensing. However, some advocate for (usually corporate) controlled projects in order to obtain corporate resources for FLOSS development. GPLv3 has seen strong uptake.

Mobile. Unlike the desktop, a mostly free system (Android) is very popular; but also unlike the desktop, mobile devices running close to 100% FLOSS[Y] software more or less don’t exist. The “appstore” phenomenon is recognized as a threat to FLOSS, but it is totally unclear how that will play out.

Net services. Apart from Wikipedia, none of the dominant services people access through the web are free software. Most are largely powered by free software, and the companies that run the services contribute much to back-end software, but the services users directly access are proprietary and centralized, with no interoperability among them. FLOSS and federated social web services have made strides, but significant technical work remains, and the network effects of the dominant proprietary services are daunting.

Open web. In 2008, Firefox 3 had just been released and Flash was ubiquitous. The “open web” is in much better shape now. It is the platform with the most innovation and is almost entirely based on open standards. As of fairly recently, the prognosis even for open web video looks good. One of the leading browsers, Firefox, is free software, and the free software WebKit renderer powers most others, while IE slowly declines.

I’ve surely missed things, some intentionally (patent threat), some out of partial ignorance (e.g., I don’t have any sense of how much has changed in the last 3 years for FLOSS as a grant, procurement, regulatory, or other “policy” requirement, but know it could be important).


*37.5 months ago Creative Commons held its previous global meeting. Another will be held next week. I’m organizing 2.33** sessions, one of which will touch on movements near Creative Commons (“Where are We?”) and another of which concerns these in depth (or as much as is possible in 80 minutes; “CC’s Role in the Global Commons Movement”).

The CC-specific parts of these sessions will be fairly detailed and for the first one, possibly more interesting to insiders (many of the participants have been involved in CC affiliate projects for most of CC’s history).

I’d also like to convey, perhaps in as little as one slide, the big trends of the last 3 years in a few related areas, without any details. These areas and their trends lead, inform, reinforce, and depend on CC’s work in varying measures, so I think the CC community should understand them at a high level at least, such that the most relevant can be more closely learned from or cooperated with.

**I’m also organizing 1/3 of a session on issues to consider for version 4.0 of the CC license suite; the part I’m organizing concerns the non-commercial clause used by some of those licenses. I promise it will be much more fun than a report on that topic.

Creative Commons hiring CTO

Monday, July 11th, 2011

See my blog post on the CC site for more context.

Also thanks to Nathan Yergler, who held the job for four years. I really miss working with Nathan. His are big shoes to fill, but also his work across operations, applications, standards, and relationships set the foundation for the next CTO to be very successful.

Semantic ref|pingback for re-use notification

Sunday, May 15th, 2011

Going back probably all the way to 2003 (I can’t easily pinpoint, as obvious mail searches turn up lots of hand-wringing about structured data in/for web pages, something which persists to this day) people have suggested using something like trackback to notify that someone has [re]used a work, as encouraged under one of the Creative Commons licenses. Such notification could be helpful, as people often would like to know someone is using their work, and might provide much better coverage than finding out by happenstance or out-of-band (e.g., email) notification and not cost as much as crawling a large portion of the web and performing various medium-specific fuzzy matching algorithms on the web’s contents.

In 2006 (maybe 2005) Victor Stone implemented a re-use notification (and a bit more) protocol he called the Sample Pool API. Several audio remix sites implemented it (including ccMixter, for which Victor developed the API; side note: read his ccMixter memoir!), but it didn’t go beyond that, probably in part because it was tailored to a particular genre of sites, and in another part because it wasn’t clear how to do it correctly and generally, get adoption, sort out dependencies (see hand-wringing above), and resource/prioritize.

I’ve had in mind to blog about re-use notification for years (maybe I already have, and forgot), but right now I’m spurred to do so by skimming Henry Story and Andrei Sambra’s Friending on the Social Web, which is largely about semantic notifications. Like them, I need to understand what the OStatus stack has to say about this. And I need to read their paper closely.

Ignorance thusly stated, I want to proclaim the value of refback. When one follows a link, one’s user agent (browser) often will send with the request for the linked page (or other resource) the referrer (the page with the link one just followed). In some cases, a list of pages linking to one’s resources that might be re-used can be rather valuable if one wants to bother manually looking at referrers for evidence of re-use. For example, Flickr provides a daily report on referrers to one’s photo pages. I look at this report for my account occasionally and have manually populated a set of my re-used photos largely by this method. This is why I recently noted that the (super exciting) MediaGoblin project needs excellent reporting.

Some re-use discovery via refback could be automated. My server (and not just my server, contrary to Friending on the Social Web; could be outsourced via javascript a la Google Analytics and Piwik) could crawl the referrer and look for structured data indicating re-use at the referrer (e.g., my page or a resource on it is subject or object of relevant assertions, e.g., dc:source) and automatically track re-uses discovered thusly.
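The automated refback check described above can be sketched in a few lines. This is only an illustration of the idea, not any standard: the predicate vocabulary, URLs, and class names here are all assumptions, with dc:source as the example predicate from the text.

```python
from html.parser import HTMLParser

# Predicates whose object pointing at one of our URLs suggests re-use.
# The vocabulary here is an assumption; dc:source is one plausible choice.
REUSE_PREDICATES = {"dc:source", "dcterms:source"}

class ReuseScanner(HTMLParser):
    """Scan a referring page's markup for RDFa-style re-use assertions."""

    def __init__(self, our_url):
        super().__init__()
        self.our_url = our_url
        self.reuses = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        predicates = set(((a.get("rel") or "") + " " + (a.get("property") or "")).split())
        target = a.get("href") or a.get("resource") or a.get("about")
        if predicates & REUSE_PREDICATES and target == self.our_url:
            self.reuses.append((tag, target))

def find_reuse(referrer_html, our_url):
    """Return re-use assertions about our_url found in a referrer's HTML."""
    scanner = ReuseScanner(our_url)
    scanner.feed(referrer_html)
    return scanner.reuses

# A hypothetical referring page asserting its content derives from our photo:
page = '<p><a rel="dc:source" href="http://example.org/photo/1">original</a></p>'
print(find_reuse(page, "http://example.org/photo/1"))
# → [('a', 'http://example.org/photo/1')]
```

In a full implementation, one's server (or delegated service) would fetch each referrer from the day's logs and run such a scan over it, recording matches instead of printing them.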

A pingback would tell my server (or service I have delegated to) affirmatively about some re-use. This would be valuable, but requires more from the referring site than merely publishing some structured data. Hopefully re-use pingback could build upon the structured data that would be utilized by re-use refback and web agents generally.
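The established Pingback protocol (used by WordPress, among others) gives a feel for what such an affirmative notification looks like on the wire: a single XML-RPC call, pingback.ping(sourceURI, targetURI). A minimal client-side sketch, with hypothetical URLs; a re-use-specific method would be an extension, so the standard method name is used here:

```python
import xmlrpc.client

def reuse_ping(pingback_endpoint, source_uri, target_uri):
    """Notify target_uri's server that source_uri re-uses it.

    This mirrors the standard Pingback call; the endpoint would be
    discovered from the target page's markup or HTTP headers.
    """
    server = xmlrpc.client.ServerProxy(pingback_endpoint)
    return server.pingback.ping(source_uri, target_uri)

# The XML-RPC request body such a call puts on the wire:
payload = xmlrpc.client.dumps(
    ("http://remixer.example/collage", "http://example.org/photo/1"),
    methodname="pingback.ping",
)
print(payload)
```

The receiving server would then verify the claim — fetch the source, confirm it links to (and ideally makes structured re-use assertions about) the target — before recording it.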

After doing more reading, I think my plan is to file the appropriate feature requests for MediaGoblin, which seems the ideal software to finally progress these ideas. A solution also has obvious utility for oft-mooted [open] data/education/science scenarios.

IE6 is a stark reminder to developers of what a Web monoculture looks like. We need to remember it.

Tuesday, March 22nd, 2011

The title of this post quotes Evan Prodromou.

See Internet Explorer 6 criticism for the baneful details.

Firefox broke the monoculture. Today is a good day to remember, and celebrate, as Firefox 4 is released. I’ve been using alphas and betas for many months; highly recommend.

Given the quote from Prodromou, founder of identi.ca/StatusNet and first among federated social web equals, it’s also a good idea to remember that many of the services that dominate their niches on the web are themselves monocultures. It was really great yesterday to see the EFF explain and get behind the federated social web.

Be a good citizen today — here’s another helpful and current link in that regard.

Federated Social Web Status[Net]

Friday, December 31st, 2010

Evan Prodromou just published his Federated Social Web top 10 stories of 2010. It’s a great list, go read — readers who aren’t already familiar with Prodromou, StatusNet, identi.ca, OStatus, etc. probably will have missed many of the stories — and they’re extremely important for the long-term future of the web, even if there are presently far too few zeros following the currency symbol to make them near-term major news (just like early days of the web, email, and the internet).

I suggest the following additions.

Censorship of dominant non-federated social web sites (e.g., Facebook, Twitter, YouTube) occurred around the world. While totally reprehensible, and surely one of the top social web stories of 2010 by itself, one of its effects makes it a top story for the federated social web — decentralization is one of the ways of “routing around” censorship. I’d love to have mountains more evidence, but perhaps this is happening.

Perhaps Evan did not want to self-promote in his top 10, but I consider the status of his company, its services, the software they run (all called StatusNet), and the community around all three, to be extremely important data points on the status of the federated social web, and thus inherently top stories for 2010 (and they will be again in 2011, even if they completely fail, which would be a sad top story).

I hope that Evan/StatusNet post their own 2010 summary, or the community develops one on the StatusNet wiki, but very briefly: The company obtained another round of funding and from the perspective of an outsider, appears to be progressing nicely on enterprise and premium hosting products. The StatusNet cloud hosts thousands of (premium and gratis) instances, and savvy people are self-hosting, mirroring the well-established wordpress.com/WordPress pattern. The core StatusNet software made great strides (I believe seven 0.9.x releases), obtained an add-ons directory, and early support for non-microblogging features, e.g., social bookmarking and generic social networking (the latter Evan did mention as a non-top-10 story; of course any such features are federated “for free”). By the way, see my post Control Yourself, Follow Evan for the beginning story, way back in 2008.

2010 also saw what I consider disappointments in the federated social web space, each of which I have high hopes will be corrected in the next year — perhaps I’ll even do something to help:

StatusNet lacks full data portability and account migration.

Nobody has yet taken up the mantle of building a federated replacement for Flickr.

It is unclear whether federated social web spam defenses are good enough.

Nobody is doing anything interesting with reputation on the federated social web — no, make that, on the social web. This is a major befuddlement I’ve had since (2002), at least. had an excuse as the first “social network”, (1999) innovated, then nothing. Nothing!

Far too few people are aware of the challenges and opportunities of maintaining and expanding software freedom/user autonomy in the age of networked services, a general problem of which the federated social web is an important case.

Finally, a couple not-yet-stories for the federated social web.

Facebook and Twitter (especially Facebook) seem to have consolidated their dominant positions in nearly every part of the world, having surpassed regional leaders such as Orkut (Brazil and India), Bebo (UK), MySpace (US), and Friendster (Southeast Asia), while would-be competitors have shut down (e.g., Jaiku and Plurk) or are considered disappointing (e.g., Google Buzz). However, it seems there are plenty of relatively new regionally-focused services, some of which may already be huge but under the radar of English-speaking observers. An example is Sina.com’s microblogging service, which I would not have heard of in 2010 had I not seen it in use at Sharism Forum in Shanghai. It’s possible that some of these are advantaged by censorship of global services — see above — and cooperation with local censors. Opportunity? Probably only long-term or opportunistic.

Despite their high cultural relevance and somewhat ambiguous status, I don’t know of many © disputes around tweets, or short messages generally. Part of the reason must be that Twitter and Facebook are primarily silos, and use within those silos is agreed to via their terms of service. I’m very happy that StatusNet has from the beginning taken precautions against copyright interfering with the federated case — notices on StatusNet platforms are released under the permissive Creative Commons Attribution license (all uses permitted in advance, requiring only credit), which clarifies things to the extent copyright restricts, and doesn’t interfere to the extent it doesn’t. (Also note that copyright is a major challenge for the social web in general, even its silos — see YouTube, which ought to be considered part of the social web.)

All the best to Evan Prodromou and other federated social web doers in 2011!

As demonstrated above, I cannot write a short blog post, which puts a crimp on my blogging. Follow me on StatusNet’s identi.ca service for lots of short updates.

Collaborative Futures 2

Wednesday, January 20th, 2010

Day 2 of the Collaborative Futures book sprint saw the writing of a number of chapters and the creation of a much more fleshed out table of contents. I spent too much time interrupted by other work and threading together a chapter (feels more like a long blog post) on “Other People’s Computers” from old sources and the theme of supporting collaboration. The current draft is pasted below because that’s easier than extracting links to sources.

Another tangential observation about the group: I noted a fair amount of hostility toward Wikipedia, the Wikimedia Foundation, and Mediawiki on the notion that they have effectively sucked the air out of other potential projects and models of collaboration, even other wiki software. Of course I am a huge fan of Wikipedia — I think its centralization has allowed it to scale in a way not possible otherwise — it has made the community-centric collaboration pie bigger — and we are very fortunate that such a dominant service has gotten so much right, at least from a freedom perspective. However, the underlying criticism is not without merit, and I tried to incorporate a productive and very brief version of it into the draft.

Also see Mushon Zer-Aviv’s entertaining post on day 2.

Other People’s Computers

Partly because they’re location-transparent and web-integrated, browser apps support social interaction more easily than desktop apps.

Kragen Sitaker, “What’s wrong with HTTP”, http://lists.canonical.org/pipermail/kragen-tol/2006-November/000841.html

Much of what we call collaboration occurs on web sites (more generally, software services), particularly collaboration among many distributed users. Direct support for collaboration, and more broadly for social features, is simply easier in a centralized context. It is possible to imagine a decentralized Wikipedia or Facebook, but building such services with sufficient ease of use, features, and robustness to challenge centralized web sites is a very difficult challenge.

Why does this matter? The web is great for collaboration, let’s celebrate that! However, making it relatively easy for people to work together in the specific way offered by a web site owner is a rather impoverished vision of what the web (or more generally, digital networks) could enable, just as merely allowing people to run programs on their computers in the way program authors intended is an impoverished vision of personal computing.

Free software allows users to control their own computing and to help other users, by retaining the ability to run, modify, and share software for any purpose. Whether the value of this autonomy is primarily ethical, as often framed by advocates of the term free software, or primarily practical, as often framed by advocates of the term open source, any threat to these freedoms has to be of deep concern to anyone interested in the future of collaboration, both in terms of what collaborations are possible and what interests control and benefit from those collaborations.

Web sites and special-purpose hardware […] do not give me the same freedoms general-purpose computers do. If the trend were to continue to the extent the pundits project, more and more of what I do today with my computer will be done by special-purpose things and remote servers.

What does freedom of software mean in such an environment? Surely it’s not wrong to run a Web site without offering my software and databases for download. (Even if it were, it might not be feasible for most people to download them. IBM’s patent server has a many-terabyte database behind it.)

I believe that software — open-source software, in particular — has the potential to give individuals significantly more control over their own lives, because it consists of ideas, not people, places, or things. The trend toward special-purpose devices and remote servers could reverse that.

Kragen Sitaker, “people, places, things, and ideas “, http://lists.canonical.org/pipermail/kragen-tol/1999-January/000322.html

What are the prospects and strategies for keeping the benefits of free software in an age of collaboration mediated by software services? One strategy, argued for in “The equivalent of free software for online services” by Kragen Sitaker (see http://lists.canonical.org/pipermail/kragen-tol/2006-July/000818.html), is that centralized services need to be re-implemented as peer-to-peer services that can be run as free software on computers under users’ control. This is an extremely interesting strategy, but a very long term one, for it is hard, being at least both a computer science and a social challenge.

Abstinence from software services may be a naive and losing strategy in both the short and long term. Instead, we can work on decentralization while also attempting to build services that respect users’ autonomy:

Going places I don’t individually control — restaurants, museums, retail stores, public parks — enriches my life immeasurably. A definition of “freedom” where I couldn’t leave my own house because it was the only space I had absolute control over would not feel very free to me at all. At the same time, I think there are some places I just don’t want to go — my freedom and physical well-being wouldn’t be protected or respected there.

Similarly, I think that using network services makes my computing life fuller and more satisfying. I can do more things and be a more effective person by spring-boarding off the software on other peoples’ computers than just with my own. I may not control your email server, but I enjoy sending you email, and I think it makes both of our lives better.

And I think that just as we can define a level of personal autonomy that we expect in places that belong to other people or groups, we should be able to define a level of autonomy that we can expect when using software on other people’s computers. Can we make working on network services more like visiting a friends’ house than like being locked in a jail?

We’ve made a balance between the absolute don’t-use-other-people’s-computers argument and the maybe-it’s-OK-sometimes argument in the Franklin Street Statement. Time will tell whether we can craft a culture around Free Network Services that is respectful of users’ autonomy, such that we can use other computers with some measure of confidence.

Evan Prodromou, “RMS on Cloud Computing: “Stupidity””, CC BY-SA, http://autonomo.us/2008/09/rms-on-cloud-computing-stupidity/

The Franklin Street Statement on Freedom and Network Services is a beginning group attempt to distill actions users, service providers (the “other people” here), and developers should take to retain the benefits of free software in an era of software services:

The current generation of network services or Software as a Service can provide advantages over traditional, locally installed software in ease of deployment, collaboration, and data aggregation. Many users have begun to rely on such services in preference to software provisioned by themselves or their organizations. This move toward centralization has powerful effects on software freedom and user autonomy.

On March 16, 2008, a workgroup convened at the Free Software Foundation to discuss issues of freedom for users given the rise of network services. We considered a number of issues, among them what impacts these services have on user freedom, and how implementers of network services can help or harm users. We believe this will be an ongoing conversation, potentially spanning many years. Our hope is that free software and open source communities will embrace and adopt these values when thinking about user freedom and network services. We hope to work with organizations including the FSF to provide moral and technical leadership on this issue.

We consider network services that are Free Software and which share Free Data as a good starting-point for ensuring users’ freedom. Although we have not yet formally defined what might constitute a ‘Free Service’, we do have suggestions that developers, service providers, and users should consider:

Developers of network service software are encouraged to:

  • Use the GNU Affero GPL, a license designed specifically for network service software, to ensure that users of services have the ability to examine the source or implement their own service.
  • Develop freely-licensed alternatives to existing popular but non-Free network services.
  • Develop software that can replace centralized services and data storage with distributed software and data deployment, giving control back to users.

Service providers are encouraged to:

  • Choose Free Software for their service.
  • Release customizations to their software under a Free Software license.
  • Make data and works of authorship available to their service’s users under legal terms and in formats that enable the users to move and use their data outside of the service. This means:
    • Users should control their private data.
    • Data available to all users of the service should be available under terms approved for Free Cultural Works or Open Knowledge.

Users are encouraged to:

  • Consider carefully whether to use software on someone else’s computer at all. Where it is possible, they should use Free Software equivalents that run on their own computer. Services may have substantial benefits, but they represent a loss of control for users and introduce several problems of freedom.
  • When deciding whether to use a network service, look for services that follow the guidelines listed above, so that, when necessary, they still have the freedom to modify or replicate the service without losing their own data.

Franklin Street Statement on Freedom and Network Services, CC BY-SA, http://autonomo.us/2008/07/franklin-street-statement/

As challenging as the Franklin Street Statement appears, additional issues must be addressed for maximum autonomy, including portable identifiers:

A Free Software Definition for the next decade should focus on the user’s overall autonomy- their ability not just to use and modify a particular piece of software, but their ability to bring their data and identity with them to new, modified software.

Such a definition would need to contain something like the following minimal principles:

  1. data should be available to the users who created it without legal restrictions or technological difficulty.
  2. any data tied to a particular user should be available to that user without technological difficulty, and available for redistribution under legal terms no more restrictive than the original terms.
  3. source code which can meaningfully manipulate the data provided under 1 and 2 should be freely available.
  4. if the service provider intends to cease providing data in a manner compliant with the first three terms, they should notify the user of this intent and provide a mechanism for users to obtain the data.
  5. a user’s identity should be transparent; that is, where the software exposes a user’s identity to other users, the software should allow forwarding to new or replacement identities hosted by other software.

Luis Villa, “Voting With Your Feet and Other Freedoms”, CC BY-SA, http://tieguy.org/blog/2007/12/06/voting-with-your-feet-and-other-freedoms/

Fortunately the oldest and, at least until recently, most ubiquitous network service — email — accommodates portable identifiers. (Not to mention that email is the lowest common denominator for much collaboration — sending attachments back and forth.) Users of a centralized email service like Gmail can retain a great deal of autonomy if they use an email address at a domain they control and merely route delivery to the service — though of course most users use the centralized provider’s domain.
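Concretely, that routing is just DNS: point one’s own domain’s MX records at the hosted provider. A sketch for a placeholder domain, using the mail exchanger hostnames Google publishes for its hosted mail (the domain and TTL here are arbitrary):

```
; example.com zone fragment: mail for addresses @example.com is
; delivered to Google's servers, but the identifier stays portable —
; re-point these records to change providers without changing address.
example.com.  3600  IN  MX  1   aspmx.l.google.com.
example.com.  3600  IN  MX  5   alt1.aspmx.l.google.com.
example.com.  3600  IN  MX  5   alt2.aspmx.l.google.com.
```

Switching providers then means editing these records, not one’s address — exactly the portability that using the centralized provider’s own domain forecloses.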

It is worth noting that the more recent and widely used if not ubiquitous instant messaging protocol XMPP, as well as the brand new and little used Wave protocol, are architected similarly to email, though use of non-provider domains seems even less common, and in the case of Wave, Google is currently the only service provider.

It may be valuable to assess software services from the perspective of community autonomy as well as user autonomy. The former may explicitly note requirements for the product of collaboration — non-private data, roughly — as well as service governance:

In cases where one accepts a centralized web application, should one demand that the application be somehow constitutionally open? Some possible criteria:

  • All source code for the running service should be published under an open source license and developer source control available for public viewing.
  • All private data available for on-demand export in standard formats.
  • All collaboratively created data available under an open license (e.g., one from Creative Commons), again in standard formats.
  • In some cases, I am not sure how rare, the final mission of the organization running the service should be to provide the service rather than to make a financial profit, i.e., beholden to users and volunteers, not investors and employees. Maybe. Would I be less sanguine about the long term prospects of Wikipedia if it were for-profit? I don’t know of evidence for or against this feeling.

Mike Linksvayer, “Constitutionally open services”, CC0, https://gondwanaland.com/mlog/2006/07/06/constitutionally-open-services/

Software services are rapidly developing and subject to much hype — referred to by buzzwords such as cloud computing. However, some of the most potent means of encouraging autonomy may be relatively boring — for example, making it easier to maintain one’s own computer and deploy slightly customized software in a secure and foolproof fashion. Any such development helps traditional users of free software and makes doing computing on one’s own computer (which may be a “personal server” or virtual machine that one controls) more attractive.

Perhaps one of the most hopeful trends is relatively widespread deployment by end users of free software web applications like WordPress and MediaWiki. StatusNet, free software for microblogging, is attempting to replicate this adoption success, but also includes technical support for a form of decentralization (remote subscription) and a legal requirement for service providers to release modifications as free software via the AGPL.

This section barely scratches the surface of the technical and social issues raised by the convergence of so much of our computing, in particular computing that facilitates collaboration, to servers controlled by “other people”, in particular a few large service providers. The challenges of creating autonomy-respecting alternatives should not be understated.

One of those challenges is only indirectly technical: decentralization can make community formation more difficult. To the extent the collaboration we are interested in requires community, this is a challenge. However, easily formed but inauthentic and controlled community also will not produce the kind of collaboration we are interested in.

We should not limit our imagination to the collaboration faciliated by the likes of Facebook, Flickr, Google Docs, Twitter, or other “Web 2.0” services. These are impressive, but then so was AOL two decades ago. We should not accept a future of collaboration mediated by centralized giants now, any more than we should have been, with hindsight, happy to accept information services dominated by AOL and its near peers. 

Wikipedia is both held up as an exemplar of collaboration and a free-as-in-freedom service: both the code and the content of the service are accessible under free terms. It is also, in many respects, a huge example of community governance. And it is undeniably a category-exploding success: vastly bigger and useful in many more ways than any previous encyclopedia. Other software and services enabling autonomous collaboration should set their sights no lower: not merely to replace an old category, but to explode it.

However, Wikipedia (and its MediaWiki software) is not the end of the story. Merely using MediaWiki for a new project, while appropriate in many cases, is not magic pixie dust for enabling collaboration. Affordances for collaboration need to be built into many different types of software and services. Following Wikipedia’s lead on autonomy is a good idea, but many experiments should be encouraged in every other respect. One example is the young and relatively domain-specific collaboration software with which this book is being written, Booki.

Software services have made “installation” of new software as simple as visiting a web page, made social features a click away, and provided an easy ladder of adoption for mass collaboration. They also threaten autonomy at the individual and community levels. While there are daunting challenges, meeting them means achieving “world domination” for freedom in the most important means of production, computer-mediated collaboration, something the free software movement failed to approach in the era of desktop office software.

Collaborative Futures 1

Monday, January 18th, 2010

Day 1 of the Collaborative Futures book sprint was spent on participants introducing themselves and their relevant projects and thoughts, grouping the points of interest everyone recorded on sticky notes during the introductions, and distilling those into a high-level table of contents.

The other participants had too many interesting things to say to catalog here — check out their sites:

Incidentally, I was fairly pleased to see 5 participants running Linux (counting Adam Hyde, who doesn’t seem to have a blog, and me) and only 2 running OS X. All also are doing interesting Creative Commons licensed projects, not to mention mostly avoiding licenses with the NonCommercial term.

A good portion of the introductory discussion concerned free software and free culture, leading to a discussion of how to include them in the table of contents — the tentative decision is to not include them explicitly, as they would be referenced in various ways throughout. I believe the tentative high level table of contents looks like this:

This summary doesn’t adequately convey the progress made on day 1: I think we’re in a fairly good position to begin writing chapters tomorrow morning, and we finished right at midnight.

Also see day 0 posts from Michael Mandiberg, Mushon Zer-Aviv, and me.

The singularity university is open

Saturday, August 1st, 2009

Tuesday afternoon I visited Singularity University’s graduate studies program to participate in a session on open source with Chris DiBona (Google Open Source Program) and Matt Mullenweg (WordPress). It was pretty interesting, though not in the way I expected: lots about contemporary licensing issues (which DiBona called roughly “the boring yet intellectually interesting part of his job”, a hilarious characterization in my book), not so much about how open source development will impact the future, and nothing about an open source singularity. I sped through slides which include a scattershot of material for people interested in open source, a grand future, and not necessarily familiar with Creative Commons.

Hearing Mullenweg’s commitment to software freedom in person made me feel good about using WordPress. There was some discussion of network services and relatedly the . Mullenweg made a comment along the lines of silos like Facebook being less than ideal (not discussed, but , built on WordPress, as well as , used for SingularityU’s internal social network, are open replacements, though it seems to me that federation a la is needed).

DiBona indicated that Google doesn’t use AGPL’d software internally because it might cause them to share more than they’ve decided to (and they consciously decide to share a lot), while Mullenweg wondered whether complying with the AGPL would be difficult for WordPress deployers, including whether one would need to share configuration files that contain passwords. One could argue that such doubts are very self-serving for Google, and to a lesser extent for WordPress.com, which use tons of free software and aren’t forced to share their improvements (though, as mentioned, they both share lots). However, I hope that AGPL advocates take them as a strong signal that much more information on AGPL compliance is needed: sharing all the source of a complex deployed web application is not often a simple thing. (I count myself among those advocates, with the caveat that I consider the importance of copyleft of whatever strength, relative to release under any open license and to non-licensing factors, an open and understudied question (consider the possibilities for simulation, economics lab experiments, and natural experiments), and I’m happy to change my mind.)
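The mechanics of the password question, at least, are tractable. A sketch of one way a deployer might offer a web application’s own source to its users while keeping deployment secrets out of the bundle; the excluded file names here are illustrative assumptions, and whether this approach actually satisfies the AGPL in a given deployment is exactly the kind of compliance question being raised on the panel.

```python
import io
import os
import tarfile

# File names assumed, for illustration, to hold deployment secrets
# (database passwords and the like) rather than program source.
SECRET_FILES = {"config.php", "settings_local.py", ".env"}

def make_source_tarball(source_dir):
    """Bundle an application's source tree into a gzipped tarball,
    skipping files likely to contain passwords or other secrets.
    A web app could serve the result at a well-known URL (say,
    /source) as one way of offering source to its network users."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name in sorted(os.listdir(source_dir)):
            if name in SECRET_FILES:
                continue  # deployment configuration stays private
            tar.add(os.path.join(source_dir, name), arcname=name)
    return buf.getvalue()
```

The harder part, as the panelists suggest, is not building the tarball but deciding what “corresponding source” means for a heavily customized deployment.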

Not explicitly but much more than tangentially related: probably the single most interesting thing said on the panel was Mullenweg’s remark that any internal WordPress.com developer can push changes to production at any time, that this happens 15-20 times a day, and that he wishes he could do this for other deployments. My longstanding guess (not specific to WordPress) is that making deployment from revision control the preferred means of deployment would result in both more deployments running the latest changes and more deployments sharing their own.
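A minimal sketch of what deployment from revision control might look like, assuming a plain git checkout on the server; the function and its defaults are hypothetical illustrations, not a description of WordPress.com’s actual pipeline.

```python
import subprocess

def deploy(checkout_dir, branch="master"):
    """Bring a server-side checkout up to the latest revision on the
    given branch of its origin remote, and return the deployed commit.
    Recording the deployed revision also makes it trivial to publish
    exactly the source that is running."""
    def run(*args):
        return subprocess.run(
            ["git", *args], cwd=checkout_dir, check=True,
            capture_output=True, text=True)
    run("fetch", "origin")
    run("checkout", branch)
    run("reset", "--hard", "origin/" + branch)
    return run("rev-parse", "HEAD").stdout.strip()
```

In practice a post-receive hook on the server would call something like this, so that pushing to the repository is itself the act of deploying.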

I got a sense from questions asked by the students that the current Singularity University program might be more mainstream than the name implies. I understand that each student, or perhaps group of students, is to write a plan for using emerging technology to positively impact the lives of a billion people in one of the areas of health, climate change, or (I forgot the third area) in the next ten years. In any other context those parameters would sound very aggressive. Of course they could be met by first becoming rationalist jedi masters and then turning all available matter into . Alas, ten years is a hurdle.

If the previous paragraph reads snarkily, it is not meant to; I fully support the maximization of computation and the rationality of the same. In any case congratulations to all involved in SingularityU, in particular Bruce Klein, who I know has been working on the concept for a long time. It was also good to see Salim Ismail and David Orban. I’m especially happy to see that SingularityU is attempting to be as open as possible, not least this.

#identica1

Thursday, July 2nd, 2009