Post Open Services

Happy Hacking

Wednesday, March 25th, 2009

Laroia, Linksvayer, RMS
Asheesh Laroia and Mike Linksvayer of Creative Commons accept the 2008 Free Software Foundation Award for Project of Social Benefit from Richard Stallman. Detail of photo by Matt Hins / CC BY-SA.



Icing on the cake of a highly successful Libre Planet Conference. Other highlights included great talks by Evan Prodromou on engineering for free network services and Rob Savoye on Gnash, which turns out to be much more than just an Adobe Flash browser plugin replacement, and the free network services unconference.

Addendum 20090330: Audio of Stallman’s talk and the awards ceremony, Asheesh’s writeup.

Free Software: Foundation for a Libre Planet

Wednesday, February 4th, 2009

Support the Free Software Foundation. It’s good for a free planet and you can attend the just announced Libre Planet Conference, March 21-22 in Cambridge, Massachusetts, an outgrowth of the FSF’s annual member meeting.

I’m really excited that the conference will have software freedom and network services as a major focus. This will be the first public conference on the topic, following last year’s meeting, which led to the Franklin Street Statement and Autonomo.us.

If you enjoyed my rambling call to support Creative Commons a couple months ago, you might enjoy reading Benjamin Mako Hill’s somewhat less rambling call to support the FSF.

I’ve donated to the FSF off and on since at least 1998. You should get started now, if you haven’t already. My only regret (apart from not giving every year) is still not having relevant prediction markets enabling me to be a futarchist donor. I mention that here both because it is a necessary disclaimer for me to make (my philanthropy suggestions are based on handwaving, not consensus projected impact) and because perhaps my most highly desired free network service is a prediction market exchange. I’ll explain more another day.

HOWTO deploy and upgrade WordPress or any web application

Sunday, August 31st, 2008

Recently Nathan Yergler posted what ought to be the preferred way to install and upgrade WordPress:

First, install WordPress from a Subversion checkout; do:

$ svn co http://svn.automattic.com/wordpress/tags/2.6/

instead of downloading the .zip or .tar.gz file. Configure as directed.

Then, when a new version is available, log into your webhost and run:

$ svn switch http://svn.automattic.com/wordpress/tags/2.6.1/

from your install directory.

I’ve been doing this for ages and consider installing from and overwriting with tarballs on an ongoing basis just short of insanity. Unfortunately the WordPress Subversion Access page says it is for developers only and doesn’t describe using svn switch to upgrade — indeed, what they describe (which will always obtain the very latest, usually unreleased, code checked in by WordPress developers), really is only appropriate for WordPress developers and testers. The MediaWiki site does a much better job but still doesn’t push revision control as the preferred deployment mechanism.
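To make the contrast concrete, here is a sketch of the two checkouts, assuming the standard layout of the WordPress Subversion repository (the trunk checkout is what the developer-oriented instructions amount to):

# Developer/tester checkout: always fetches the latest unreleased code.
$ svn co http://svn.automattic.com/wordpress/trunk/ .

# Deployment checkout: a released tag, upgraded deliberately with svn switch.
$ svn co http://svn.automattic.com/wordpress/tags/2.6/ .
$ svn switch http://svn.automattic.com/wordpress/tags/2.6.1/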

WordPress and MediaWiki were pioneers several years ago in making web application deployment and even upgrade painless relative to what came before (mostly by automating as much database configuration and schema migration as possible), but it may take a new generation to make deployment from revision control systems (preferably distributed) the norm. WikiTrust sets a good example [Update 20090622: Though not a good example of cool URIs, code instructions moved to new location without forwarding.]:

There are two ways of getting WikiTrust: via Git (recommended), or via tarballs (easier at first, but harder to keep up-to-date).

[…]

Our preferred way to distribute the code is via Git. Git enables you to easily get the latest version of the code, including any bug-fixes. Git also makes it easy for you to contribute to the project (see Contributing to WikiTrust for more information).
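The Git equivalent of the Subversion workflow above is, if anything, simpler. A sketch, with a placeholder repository URL (as noted, the real instructions have moved):

# Initial deployment: clone the repository.
$ git clone git://example.org/wikitrust.git
$ cd wikitrust

# Later, to pick up the latest version including any bug fixes:
$ git pull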

As I’ve mentioned several times in passing, such practices will facilitate open web applications and other network services.

Us Autonomo!

Monday, July 14th, 2008

Autonomo.us and the Franklin Street Statement on Freedom and Network Services launched today.

I’ve written about the subject of this group and statement a number of times on this blog, starting with Constitutionally Open Services two years ago. I think that post holds up pretty well. Here were my tentative recommendations:

So what can be done to make the web application dominated future open source in spirit, for lack of a better term?

First, web applications should be super easy to manage (install, upgrade, customize, secure, backup) so that running your own is a real option. Applications like WordPress and MediaWiki have made large strides, especially in the installation department, but still require a lot of work and knowledge to run effectively.

There are some applications that centralization makes tractable or at least easier and better, e.g., web scale search and social aggregation — which basically come down to high bandwidth, low latency data transfer. Various P2P technologies (much to learn from, field wide open) can help somewhat, but the pull of centralization is very strong.

In cases where one accepts a centralized web application, should one demand that the application be somehow constitutionally open? Some possible criteria:

  • All source code for the running service should be published under an open source license and developer source control available for public viewing.
  • All private data available for on-demand export in standard formats.
  • All collaboratively created data available under an open license (e.g., one from Creative Commons), again in standard formats.
  • In some cases, I am not sure how rare, the final mission of the organization running the service should be to provide the service rather than to make a financial profit, i.e., beholden to users and volunteers, not investors and employees. Maybe. Would I be less sanguine about the long term prospects of Wikipedia if it were for-profit? I don’t know of evidence for or against this feeling.

Consider all of this ignorant speculation. Yes, I’m just angling for more freedom lunches.

I was honored to participate in a summit called by the Free Software Foundation to discuss these issues March of this year, along with far greater thinkers and doers. Autonomo.us and the Franklin Street Statement (named for the FSF’s office address) are the result of continued work among the summit participants, not yet endorsed by the FSF (nor by any other organization). Essentially everything I conjectured above made it into the statement (not due to me, they are fairly obvious points, at least as of 2008, and others made them long before) with the exception of making deployment easier, which is mundane, and service governance issues, which the group did discuss, but inconclusively.

There’s much more to say about this, but for now (and likely for some time, at the rate I write, though this activity did directly inspire me to propose speaking at an upcoming P2P industry summit, which I will do early next month; I’m also speaking tomorrow at BALUG and will mention autonomo.us briefly; see info on both engagements) I wanted to address two immediate and fairly obvious critiques.

Brian Rowe wrote:

“Where it is possible, they should use Free Software equivalents that run on their own computer.” This is near Luddite talk… It is almost always possible to use an app on your own comp, but it is so inefficient. Networked online apps are not inherently evil. Should you back up your work offline? Yes. Should you have alternative options and data portability? Yes. You should fight to improve them. But you should not avoid them like the plague.

The statement doesn’t advocate avoiding network services–see “Where it is possible”, and most of the statement concerns how network services can be free. However, it is easy to read the sentence Rowe quoted and see Luddism. I hope that to some it instead serves as a challenge, for:

  • Applications that run on your own computer can be networked, i.e., P2P.
  • Your own computer does not only include your laptop and home server, but any hardware you control, and I think that should often include virtual hardware.

Wes Felter wrote:

I see a lot about software licensing and not much about identity and privacy. I guess when all you have is the AGPL everything looks like a licensing problem.

True enough, but lots of people are working on identity and privacy. If the FSF doesn’t work on addressing the threats to freedom as in free software posed by network services, it isn’t clear who would. And I’d suggest that any success free software has in the network services world will have beneficial effects on identity and privacy for users–unless you think these are best served by identity silos and security through obscurity.

Finally, the FSF is an explicitly ideological organization (I believe mostly for the greater good), and the statement’s language reflects that (though the statement is not yet endorsed by the FSF, all participants are, I believe, FSF members, staff, or directors). However, I suspect by far the most important work to be done to maintain software freedom is technical and pragmatic, for example writing P2P applications, making sharing modified source of network applications a natural part of deployment (greatly eased by the rise of distributed version control), and convincing users and service providers that it is in their interest to expect and provide free/open network services.
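To illustrate that last point about deployment, a minimal sketch (repository names and URLs are hypothetical) of deploying a web application from one’s own public fork, so that publishing modified source becomes a natural side effect of running the service:

# Deploy directly from a public clone of the application’s repository.
$ git clone git://git.example.org/ourfork/webapp.git /var/www/webapp
$ cd /var/www/webapp

# Local customizations are committed and pushed back to the public fork,
# so anyone can fetch exactly the code the service is running.
$ git commit -a -m "customizations for our deployment"
$ git push origin master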

I suggest going on to read Evan Prodromou (the doer above) on autonomo.us and the Franklin Street Statement and Rufus Pollock on the Open Software Service Definition, which more or less says the same thing as the FSS in the language of a definition (and using the word open), coordinated to launch at the same time.

Control yourself, follow Evan

Wednesday, July 2nd, 2008

See Evan Prodromou’s post on launching identi.ca, good background reading on open services.

I love the name of Prodromou’s company, Control Yourself. Presumably it is a reference to discussions of user autonomy as a better frame than freedom or openness … for discussions of concerns addressed by free/open source software and its ilk.

You can follow Evan’s microblogging at identi.ca/evan.

I’ve only used Twitter for an ongoing joke that probably nobody gets, but for now I’ll be trying to honestly microblog at identi.ca/mlinksva.

Commoditizing the cloud

Wednesday, April 9th, 2008

Doug Cutting on Cloud: commodity or proprietary?:

As we shift applications to the cloud, do we want our code to remain vendor-neutral? Or would we rather work in silos, where some folks build things to run in the Google cloud, some for the Amazon cloud, and others for the Microsoft cloud? Once an application becomes sufficiently complex, moving it from one cloud to another becomes difficult, placing folks at the mercy of their cloud provider.

I think most would prefer not to be locked-in, that cloud providers instead sold commodity services. But how can we ensure that?

If we develop standard, non-proprietary cloud APIs with open-source implementations, then cloud providers can deploy these and compete on price, availability, performance, etc., giving developers usable alternatives.

That’s exactly right. Cloud providers (selling virtualized cpu and storage) are analogous to hardware vendors. We’re in the pre-PC era, when a developer must write to a proprietary platform, and if one wants to switch vendors, one must port the application.

But such APIs won’t be developed by the cloud providers. They have every incentive to develop proprietary APIs in order to lock folks into their services. Good open-source implementations will only come about if the community makes them a priority and builds them.

I think this is a little too pessimistic. Early leaders may have plenty of incentive to create lockin, but commoditization is another viable business model, one that could even be driven by a heretofore leading proprietary vendor, e.g., the IBM PC, or Microsoft-Yahoo!

Of course the community should care and should build the necessary infrastructure, both so that it is available should a large cloud provider pursue the commoditization route, and so that an alternative exists as long as no such entity steps forward.

Cutting has been working on key parts of the necessary infrastructure; read the rest of his post for more.

gOS: the web takes and gives

Saturday, November 24th, 2007

I imagine thousands of bloggers have commented on gOS, a Linux distribution featuring shortcuts to Google web applications on the desktop and preloaded on a PC sold (out) for $200 at Wal-Mart. Someone asked me to blog about it and I do find plenty interesting about it, thus this post.

I started predicting that Linux would take over the desktop in 1994 and stopped predicting that a couple years later. The increasing dominance of web-based applications may have me making that prediction again in a couple more years, and gOS is a harbinger of that. Obviously web apps make users care less about what desktop operating system they’re running — the web browser is the desktop platform of interest, not the OS.

gOS also points to a new and better (safer) take on a PC industry business model — payment for placement of shortcuts to web applications on the desktop (as opposed to preloading a PC with crapware) — although as far as I know Google isn’t currently paying anything to the gOS developers or Everex, which makes the aforementioned cheap PC.

This is highly analogous to the Mozilla business model with a significant difference: distribution is controlled largely by hardware distributors, not the Mozilla/Firefox site, and one would expect end distributors to be the ones in a position to make deals with web application companies. However, this difference could become muted if lots of hardware vendors started shipping Firefox. This model will make the relationship of hardware vendors to software development, and particularly open source, very interesting over the next years.

One irony (long recognized by many) is that while web applications pose a threat to user freedoms gained through desktop free and open source software, they’ve also greatly lowered the barriers to desktop adoption.

By the way, the most interesting recent development in web application technology: Caja, or Capability Javascript.

The pragmatic case for open services

Tuesday, August 14th, 2007

I’ve been meaning to comment again (see constitutionally open services from last July) on free services as in free software, discussion of which has picked up noticeably in the past few months (see Luis Villa’s evaluating a free/open service definition rough draft and the comments on that post for one entry into that discussion).

I may get to it eventually, but a big part of my commentary would be on the pragmatic “open source” argument for open services, which I think has hardly been made in that context. Matthew Gertner makes the case in Facebook and the Case for Open Source:

From my perspective, the most interesting thing about the recent leak of Facebook source code is what a non-event it was. As such it’s one of the strongest arguments for open source that I’ve seen in a while.

Gertner goes on to explain how all of the possible downsides from the leak could be seen as benefits. Of course open source isn’t magic, and a source code leak isn’t going to help any more than a source code dump with no process. But for at least some sites willing to invest in that process, there is almost all upside to opening the code that runs the site.

Community is the new IP

Tuesday, October 10th, 2006

I’ve been wanting to blog that phrase since reading the Communities as the new IPR? thread on the Free Software Business list. That thread lost coherence and died quickly but I think the most important idea is hinted at by Susan Wu:

There are two elements of discussion here – a singular community, which is a unique entity; and the community constructs (procedure, policy, infrastructure, governance), which are more readily replicated.

Not said but obvious: a singular community is not easily copied.

Now Tim Lee writes about GooTube (emphasis added):

YouTube is an innovative company that secured several millions of dollars in venture capital and used it to create a billion-dollar company in less than a year. Yet as far as I know, strong IP rights have not been an important part of YouTube’s strategy. They don’t appear to have received any patents, and their software interface has been widely copied. Indeed, Google has been in the video-download business longer than YouTube, and their engineers could easily have replicated any YouTube functionality they felt was superior to Google’s own product.

Like all businesses, most of the value in technology startups lies in strong relationships among people, not from technology, as such. Technological change renders new technologies obsolete very quickly. But a brilliant team of engineers, visionary management, and a loyal base of users are assets that will pay dividends for years to come. That’s why Google was willing to pay a billion bucks for YouTube.

“Loyal base of users” does not do justice to the YouTube community. I was not aware of YouTube’s social features nor how critical they are until I read the NYT story on electric guitar performances of Pachelbel’s Canon being posted to YouTube (I commented on the story at the Creative Commons weblog). Some of these videos have been rated by tens of thousands of users and commented on by thousands. “Video responses” are a means for YouTube users to have a conversation solely through posting videos.

Google Video could have duplicated these social features trivially. I’m surprised but not stunned that Google thinks the YouTube community is worth in excess of $1.65 billion.

On a much smaller scale the acquisition of Wikitravel and World66 earlier this year is an example of the value of hard to duplicate communities. The entire contents of these sites could be legally duplicated for commercial use, yet Internet Brands paid (unfortunately an undisclosed amount) to acquire them, presumably because copies on new sites with zero community would be worthless.

There’s lots more to say about community as a business strategy for less obvious cases than websites, but I don’t have the ability, time, and links to say it right now. The FSB thread above hints at this in the context of software development communities.

And of course community participants may want to consider what allowances they require from a community owner, e.g., open licenses, data, and formats so that at a minimum a participant can retrieve and republish elsewhere her contributions if the owner does a bad job.

Wordcamp and wiki mania

Monday, August 7th, 2006

In lieu of attending maybe the hottest conference ever I did a bit of wiki twiddling this weekend. I submitted a tiny patch (well that was almost two weeks ago — time flies), upgraded a private MediaWiki installation from 1.2.4 to 1.6.8 and a public installation from 1.5.6 to 1.6.8 and worked on a small private extension, adding to some documentation before running into a problem.

1.2.4->1.6.8 was tedious (basically four successive major version upgrades) but trouble-free, as that installation has almost no customization. The 1.5.6->1.6.8 upgrade, although only a single upgrade, took a little fiddling to make a custom skin and permissions account for small changes in MediaWiki code (example). I’m not complaining — clean upgrades are hard and the MediaWiki developers have done a great job of making them relatively painless.
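For those who haven’t done one, each step of such an upgrade looked roughly like this (a sketch only; the database name, credentials, and paths are placeholders, and the backup is the step that matters):

# Back up the database before touching anything.
$ mysqldump -u wikiuser -p wikidb > wikidb-pre-upgrade.sql

# Unpack the new release alongside the old one and carry over the configuration.
$ tar xzf mediawiki-1.6.8.tar.gz -C /var/www/
$ cp /var/www/wiki/LocalSettings.php /var/www/mediawiki-1.6.8/

# Run the schema migration script, then point the web server at the new directory.
$ cd /var/www/mediawiki-1.6.8
$ php maintenance/update.php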

Saturday I attended part of WordCamp, a one-day unconference for WordPress users. Up until the day before, the tentative schedule looked pretty interesting, but it seems lots of lusers signed up, so the final schedule didn’t have much meat for developers. Matt Mullenweg’s “State of the Word” and Q&A hit on clean upgrades of highly customized sites from several angles. Some ideas include better and better-documented plugin and skin APIs with more metadata and less coupling (e.g., widgets should help many common cases that previously required throwing junk in templates).

Beyond the purely practical, ease of customization and upgrade is important for openness.

Now listening to the Wikimania Wikipedia and the Semantic Web panel…