Post Open Source

Community is the new IP

Tuesday, October 10th, 2006

I’ve been wanting to blog that phrase since reading the Communities as the new IPR? thread on the Free Software Business list. That thread lost coherence and died quickly but I think the most important idea is hinted at by Susan Wu:

There are two elements of discussion here – a singular community, which is a unique entity; and the community constructs (procedure, policy, infrastructure, governance), which are more readily replicated.

Not said but obvious: a singular community is not easily copied.

Now Tim Lee writes about GooTube (emphasis added):

YouTube is an innovative company that secured several millions of dollars in venture capital and used it to create a billion-dollar company in less than a year. Yet as far as I know, strong IP rights have not been an important part of YouTube’s strategy. They don’t appear to have received any patents, and their software interface has been widely copied. Indeed, Google has been in the video-download business longer than YouTube, and their engineers could easily have replicated any YouTube functionality they felt was superior to Google’s own product.

Like all businesses, most of the value in technology startups lies in strong relationships among people, not from technology, as such. Technological change renders new technologies obsolete very quickly. But a brilliant team of engineers, visionary management, and a loyal base of users are assets that will pay dividends for years to come. That’s why Google was willing to pay a billion bucks for YouTube.

“Loyal base of users” does not do justice to the YouTube community. I was not aware of YouTube’s social features nor how critical they are until I read the NYT story on electric guitar performances of Pachelbel’s Canon being posted to YouTube (I commented on the story at the Creative Commons weblog). Some of these videos have been rated by tens of thousands of users and commented on by thousands. “Video responses” are a means for YouTube users to have a conversation solely through posting videos.

Google Video could have duplicated these social features trivially. I’m surprised but not stunned that Google thinks the YouTube community is worth in excess of $1.65 billion.

On a much smaller scale the acquisition of Wikitravel and World66 earlier this year is an example of the value of hard to duplicate communities. The entire contents of these sites could be legally duplicated for commercial use, yet Internet Brands paid (unfortunately an undisclosed amount) to acquire them, presumably because copies on new sites with zero community would be worthless.

There’s lots more to say about community as a business strategy for less obvious cases than websites, but I don’t have the ability, time, or links to say it right now. The FSB thread above hints at this in the context of software development communities.

And of course community participants may want to consider what allowances they require from a community owner, e.g., open licenses, data, and formats, so that at a minimum a participant can retrieve her contributions and republish them elsewhere if the owner does a bad job.

Free software needs hardware entrepreneurs

Saturday, September 23rd, 2006

Luis Villa:

I’m boggled that Fedora, OpenSuse, and Ubuntu, all of whom have open or semi-open build systems now, are not actively seeking out Emperor and companies like Emperor, and helping them ship distros that are as close to upstream- and hence most supportable- for everyone. Obviously it is in RH, Canonical, and Novell’s interests to actively pursue Big Enterprise Fish like HP and Dell. But I’m really surprised that the communities around these distros haven’t sought out the smaller, and potentially growing, companies that are offering computers with Linux pre-installed.

Sounds exactly right to me. I’ve been thinking something similar for a while, but as the post title suggests, focused on hardware vendors. Tons of them compete to sell Linux servers at the very low and very high ends and everything in between, but if you want a pre-installed Linux laptop you need to pay a hefty premium for slightly out of date hardware from someone like Emperor Linux. It seems like there’s an opportunity for a hardware vendor to sell a line of Linux laptops that aren’t merely repurposed Windows machines. It has seemed like this for something like a decade though, and as far as I know HP and a couple others have only tentatively and temporarily offered a few lame configurations.

So I’d like to see a hardware startup (or division of an existing company) sell a line of laptops designed for Linux, where everything “just works” just as it does on Macs, and for the same reasons — limited set of hardware to support, work on the software until it “just works” on that hardware. There’s probably even some opportunity for Apple-like proprietary control over some aspects of the hardware. Which reminds me, what legal barriers, if any, would someone who wants to manufacture the OLPC design face? There is discussion of a commercial subsidiary for the project:

The idea is that a commercial subsidiary could manufacture and sell a variation of the OLPC in the developed world. These units would be marked up so that there would be a significant profit which can be plowed into providing more units in countries who cannot afford the full cost of one million machines.

The discussions around this have talked about a retail price of 3× the cost price of the units.

In any case Villa is right: distributions should be jumping to support hardware vendors, both the mundane and innovative sorts. Which Red Hat/Fedora is doing in the case of OLPC.

Update 20060926: In comments below Villa points out system76, which approaches what I want, except that their prices are mediocre and they don’t offer high resolution displays, which I will never do without again. David points out olpcnews.com, which looks like reasonable independent reporting on OLPC. I asked on the OLPC wiki about other manufacturers’ use of the design.

LinuxWorld San Francisco

Monday, August 21st, 2006

Brief thoughts on last week’s LinuxWorld Conference and Expo San Francisco.

Lawrence Lessig’s opening keynote pleased the crowd and me. A few points of interest:

  • Free speech is a strong aspect of free culture; Lessig at least implicitly pushed for a liberal interpretation of fair use, saying that the ability to understand, reinterpret, and remake video and other multimedia is “the new literacy” and important to the flourishing of democracy.
  • The “read/write Internet”, if allowed to flourish, is a much bigger market than the “read only Internet.”
  • Support free standards and free software for media, including Ogg.
  • In 1995 only crazies thought it possible to build a viable free software operating system (exaggeration from this writer’s perspective); now only crazies think wireless can solve the last mile competition problem. Go build free wireless networks and prove the telcos and pro-regulation lawyers (including the speaker) wrong.
  • One of the silly video mashups Lessig played was Jesus Will Survive, featuring an adult Jesus in diapers hit by a bus. A few people left the auditorium at this point.

I’ve at least visited the exhibition space of almost every LWCE SF (the first one, actually in San Jose, was the most fun — Linus was a rock star and revolution was in the air). This year’s seemed bigger and more diverse, with most vendors pushing business “solutions” as opposed to hardware.

By far the most interesting exhibition booth to me was Cleversafe, an open source dispersed storage project that announced a Linux filesystem interface at the conference and was written up in today’s New York Times and Slashdot. I’ve been waiting for something like this for a long time, particularly since Allmydata is not open source and does not support Linux.
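
I haven’t studied how Cleversafe’s dispersal actually works, so purely to illustrate the general idea (slice data across machines so that any k of n slices can rebuild it), here is a toy 2-of-3 sketch in Python using a single XOR parity slice. The scheme and names are mine, not Cleversafe’s.

    # Toy "dispersed storage" sketch: a 2-of-3 scheme with one XOR parity slice.
    # Any two of the three slices suffice to reconstruct the original data.
    # Illustration of the general idea only, not Cleversafe's algorithm.

    def disperse(data):
        """Split data into two halves plus an XOR parity slice."""
        if len(data) % 2:
            data += b"\x00"  # pad to an even length for simplicity
        half = len(data) // 2
        a, b = data[:half], data[half:]
        parity = bytes(x ^ y for x, y in zip(a, b))
        return [a, b, parity]

    def reconstruct(slices):
        """Rebuild the original from any two surviving slices (None = lost)."""
        a, b, parity = slices
        if a is None:
            a = bytes(x ^ y for x, y in zip(b, parity))
        if b is None:
            b = bytes(x ^ y for x, y in zip(a, parity))
        return (a + b).rstrip(b"\x00")

    original = b"dispersed storage demo"
    slices = disperse(original)
    slices[1] = None  # pretend one storage node is unreachable
    assert reconstruct(slices) == original

Real systems use proper erasure codes and integrity checks, of course; the point is just that no single machine holds, or can withhold, the whole file.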

Also, Creative Commons won a silly “Best Open Source Solution” show award.

Addendum 20080422: If you’re arriving from an unhinged RedState blog post, see Lessig’s response.

No ultimate outcomes

Tuesday, August 8th, 2006

Tim Lee responds to my AOLternative history. I agree with the gist of almost everything he says with a few quibbles, for example:

Likely, something akin to a robots.txt file would have been invented that would provide electronic evidence of permission to link, and it would have been bundled by default into Apache. Sure, some commercial web sites would have refused to allow linking, but that would have simply lowered their profile within the web community, the same way the NYT’s columnists have become less prominent post-paywall.

In a fairly bad scenario it doesn’t matter what Apache does, as the web is a backwater, or Apache never happens. And in a fairly bad scenario lower profile in the web community hardly matters — all the exciting stuff would be behind AOL and similar subscription network walls. But I agree that workarounds and an eventually thriving web would probably have occurred. Perhaps lawyers did not really notice search engines and linking until after the web had already reached critical mass. Clearly they’re trying to avoid making that mistake again.
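
Incidentally, the mechanism Lee imagines would be about as heavyweight as robots.txt, which is to say not very. A hypothetical “linking.txt” (my made-up name, not any real convention) could even reuse the robots.txt syntax and the parser in Python’s standard library; a sketch:

    # Hypothetical sketch: a robots.txt-style "permission to link" check.
    # There is no real linking.txt convention; this reuses robots.txt syntax
    # and Python's stock parser to show how cheap such a mechanism would be.
    from urllib.robotparser import RobotFileParser

    def may_link(site, path, agent="*"):
        parser = RobotFileParser()
        parser.set_url(site + "/linking.txt")  # imaginary analogue of robots.txt
        try:
            parser.read()  # fetch and parse the policy file
        except OSError:
            return True  # no policy published: assume linking is permitted
        return parser.can_fetch(agent, site + path)

    print(may_link("http://example.com", "/articles/1"))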

Lee’s closing:

So I stand by the words “relentless” and “inevitable” to describe the triumph of open over closed systems. I’ll add the concession that the process sometimes takes a while (and obviously, this makes my claim non-falsifiable, since I can always say it hasn’t happened yet), but I think legal restrictions just slow down the growth of open platforms, they don’t change the ultimate outcome.

Slowing down progress is pretty important, in a bad way. Furthermore, I’d make a wild guess that the future is highly dependent on initial conditions, no outcomes are inevitable by a long shot, and there is no such thing as an ultimate outcome, only a new set of initial conditions.

That’s my peeve for the day.

Grandiose example: did Communism just delay the relentless march of Russian society toward freedom and wealth?

Wordcamp and wiki mania

Monday, August 7th, 2006

In lieu of attending maybe the hottest conference ever I did a bit of wiki twiddling this weekend. I submitted a tiny patch (well that was almost two weeks ago — time flies), upgraded a private MediaWiki installation from 1.2.4 to 1.6.8 and a public installation from 1.5.6 to 1.6.8 and worked on a small private extension, adding to some documentation before running into a problem.

1.2.4->1.6.8 was tedious (basically four successive major version upgrades) but trouble-free, as that installation has almost no customization. The 1.5.6->1.6.8 upgrade, although only a single upgrade, took a little fiddling to make a custom skin and permissions account for small changes in MediaWiki code (example). I’m not complaining — clean upgrades are hard and the MediaWiki developers have done a great job of making them relatively painless.

Saturday I attended part of WordCamp, a one day unconference for WordPress users. Up until the day before, the tentative schedule looked pretty interesting, but it seems lots of lusers signed up, so the final schedule didn’t have much meat for developers. Matt Mullenweg’s “State of the Word” and Q&A hit on clean upgrade of highly customized sites from several angles. Some ideas include better and better documented plugin and skin APIs with more metadata and less coupling (e.g., widgets should help many common cases that previously required throwing junk in templates).

Beyond the purely practical, ease of customization and upgrade is important for openness.

Now listening to the Wikimania Wikipedia and the Semantic Web panel…

Open Data

Sunday, July 30th, 2006

Tim Bray has a very nice summary of open data:

I think any online service can call itself “Open” if it makes, and lives up to, this commitment: Any data that you give us, we’ll let you take away again, without withholding anything, or encoding it in a proprietary format, or claiming any intellectual-property rights whatsoever.

For extra credit, a service could also say: We acknowledge your interest in any value-added information we distill from what you give us, and will share it back with you to the extent we can do so while preserving the privacy of others.

So, do we need some sort of Open Service analogue of the Open Source Definition? It couldn’t hurt.

I don’t know if this goes far enough for “open services” — certainly not far enough for the service equivalent of free software. However, it might be nice if “open” meant something substantially different from “free” or “libre” for services, cf. open source software and free software.
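
To make Bray’s first commitment concrete: the export half of an “open” service could be as unglamorous as handing back everything a user gave it, in a documented, non-proprietary format. A minimal sketch in Python, where the store and field names are stand-ins I invented, not anyone’s actual API:

    # Minimal sketch of the "take your data away again" commitment: return
    # everything the user contributed, in a documented, non-proprietary
    # format (JSON here), with nothing withheld.
    import json
    from datetime import datetime, timezone

    # Stand-in for whatever the service actually stores per user.
    STORE = {
        "alice": {
            "profile": {"display_name": "Alice"},
            "posts": [{"title": "Hello", "body": "First post", "tags": ["meta"]}],
        }
    }

    def export_user_data(user_id):
        """Serialize every piece of data the user gave us, plus provenance."""
        record = {
            "user": user_id,
            "exported_at": datetime.now(timezone.utc).isoformat(),
            "format": "application/json",  # documented, non-proprietary
            "data": STORE[user_id],        # nothing withheld
        }
        return json.dumps(record, indent=2)

    print(export_user_data("alice"))

The “extra credit” half, sharing back value-added distillations while preserving others’ privacy, is the genuinely hard part.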

Tim also says:

I suspect that if we can get the basic idea across, then we’re in old-fashioned consumer-advocacy territory; and I suspect that it will only take a small number of painful experiences for consumers to understand the issue at a pretty deep level.

I have noticed, just in the past six months I think, lots of people with no obvious predisposition (e.g., proprietary software background or just regular users) suddenly “getting” the importance of open formats. Promising and pleasantly surprising.

Free software needs P2P

Friday, July 28th, 2006

Luis Villa on my constitutionally open services post:

It needs a catchier name, but his thinking is dead on- we almost definitely need a server/service-oriented list of freedoms which complement and extend the traditional FSF Four Freedoms and help us think more clearly about what services are and aren’t good to use.

I wasn’t attempting to invent a name, but Villa is right about my aim — I decided not to mention the four freedoms because I felt my thinking too muddled to be dignified with such a mention.

Kragen Sitaker doesn’t bother with catchy names in his just posted draft essay The equivalent of free software for online services. I highly recommend reading the entire essay, which is as incisive as it is historically informed, but I’ve pulled out the problem:

So far, all this echoes the “open standards” and “open formats” discussion from the days when we had to take proprietary software for granted. In those days, we spent enormous amounts of effort trying to make sure our software kept our data in well-documented formats that were supported by other programs, and choosing proprietary software that conformed to well-documented interfaces (POSIX, SQL, SMTP, whatever) rather than the proprietary software that worked best for our purposes.

Ultimately, it was a losing game, because of the inherent conflict of interest between software author and software user.

And the solution:

I think there is only one solution: build these services as decentralized free-software peer-to-peer applications, pieces of which run on the computers of each user. As long as there’s a single point of failure in the system somewhere outside your control, its owner is in a position to deny service to you; such systems are not trustworthy in the way that free software is.

This is what has excited me about decentralized systems since long before P2P filesharing.

Luis Villa also briefly mentioned P2P in relation to the services platforms of Amazon, eBay, Google, Microsoft and Yahoo!:

What is free software’s answer to that? Obviously the ’spend billions on centralized servers’ approach won’t work for us; we likely need something P2P and/or semantic-web based.

Wes Felter commented on the control of pointers to data:

I care not just about my data, but the names (URLs) by which my data is known. The only URLs that I control are those that live under a domain name that I control (for some loose value of control as defined by ICANN).

I hesitated to include this point because I hesitate to recommend that most people host services under a domain name they control. What is the half-life of http://blog.john.smith.name vs. http://johnsmith.blogspot.com or js@john.smith.name vs. johnsmith@gmail.com? Wouldn’t it suck to be John Smith if everything in his life pointed at john.smith.name and the domain was hijacked? I think Wes and I discussed exactly this outside CodeCon earlier this year. Certainly it is preferable for a service to allow hosting under one’s own domain (as Blogger and several others do), but I wish I felt a little more certain of the long-term survivability of my own [domain] names.

This post could be titled “freedom needs P2P” but for the heck of it I wanted to mirror “free culture needs free software.”

Constitutionally open services

Thursday, July 6th, 2006

Luis Villa provokes, in a good way:

Someone who I respect a lot told me at GUADEC ‘open source is doomed’. He believed that the small-ish apps we tend to do pretty well will migrate to the web, increasing the capital costs of delivering good software and giving next-gen proprietary companies like Google even greater advantages than current-gen proprietary companies like MS.

Furthermore:

Seeing so many of us using proprietary software for some of our most treasured possessions (our pictures, in flickr) has bugged me deeply this week.

These things have long bugged me, too.

I think Villa has even understated the advantage of web applications — no mention of security — and overstated the advantage of desktop applications, which amounts to low latency, high bandwidth data transfer — let’s see, video, including video editing, is the hottest thing on the web. Low quality video, but still. The two things client applications still excel at are very high bandwidth, very low latency data input and output, such as rendering web pages as pixels. :)

There are many things that can be done to make client development and deployment easier, more secure, more web-like and client applications more collaboration-enabled. Fortunately they’ve all been tried before (e.g., , , , others of varying relevance), so there’s much to learn from, yet the field is wide open. Somehow it seems I’d be remiss to not mention , so there it is. Web applications on the client are also a possibility, though these typically only address ease of development and not manageability at all.

The ascendancy of web applications does not make the desktop unimportant any more than GUIs made filesystems unimportant. Another layer has been added to the stack, but I am still very happy to see any move of lower layers in the direction of freedom.

My ideal application would be available locally and over the network (usually that means on the web), but I’ll prefer the latter if I have to choose, and I can’t think of many applications that don’t require this choice (fortunately is one of them, or close enough).

So what can be done to make the web-application-dominated future open source in spirit, for lack of a better term?

First, web applications should be super easy to manage (install, upgrade, customize, secure, backup) so that running your own is a real option. Applications like and have made large strides, especially in the installation department, but still require a lot of work and knowledge to run effectively.

There are some applications that centralization makes tractable or at least easier and better, e.g., web scale search, social aggregation — which basically come down to high bandwidth, low latency data transfer. Various P2P technologies (much to learn from, field wide open) can help somewhat, but the pull of centralization is very strong.

In cases where one accepts a centralized web application, should one demand that the application be somehow constitutionally open? Some possible criteria:

  • All source code for the running service should be published under an open source license and developer source control available for public viewing.
  • All private data available for on-demand export in standard formats.
  • All collaboratively created data available under an open license (e.g., one from Creative Commons), again in standard formats.
  • In some cases, I am not sure how rare, the final mission of the organization running the service should be to provide the service rather than to make a financial profit, i.e., beholden to users and volunteers, not investors and employees. Maybe. Would I be less sanguine about the long term prospects of Wikipedia if it were for-profit? I don’t know of evidence for or against this feeling.

Consider all of this ignorant speculation. Yes, I’m just angling for more freedom lunches.

Freedom Lunches

Monday, June 19th, 2006

Another excellent post from Tim Lee (two of many, just subscribe to TLF):

The oft-repeated (especially by libertarians) view that there’s no such thing as a free lunch is actually nonsense. Civilization abounds in free lunches. Social cooperation produces immense surpluses that have allowed us to become as wealthy as we are. Craigslist is just an extreme example of this phenomenon, because it allows social cooperation on a much greater scale at radically reduced cost. Craigslist creates an enormous amount of surplus value (that is, the benefits to users vastly exceed the infrastructure costs of providing the service). For whatever reason, Craigslist itself has chosen to appropriate only a small portion of that value, leaving the vast majority to its users.

As a political slogan I think of it as applying only to transfers, though perhaps others apply it overbroadly. Regardless, the free lunches of which Lee writes are vastly underappreciated.

The strategy has another advantage too: charging people money for things is expensive. A significant fraction of the cost of a classified ad is the labor required to sell the ads. Even if you could automate that process, it’s still relatively expensive to process a credit card transaction. The same is true of ads. Which means that not only is Craigslist letting its users keep more of the surplus, but its surplus is actually bigger, too!

Charging money also enables taxation and encourages regulation. Replacement of financial transaction mediated production with peer production is a libertarian (of any stripe — substitute exploitation for taxation and regulation if desired) dream come true.

Put another way, that which does not require money is hard to control. I see advocacy of free software, free culture and similar as flowing directly from my desire for free speech and freedom and individual autonomy in general.

In the long run, then, I think sites that pursue a Craigslist-like strategy will come to dominate their categories, because they simply undercut their competition. That sucks if you’re the competitor, but it’s great for the rest of us!

Amen, though Craigslist, Wikipedia and similar do far more than merely undercut their competition.

Apple for dummies

Thursday, June 15th, 2006

Apple’s penetration of the geek market over the last five years or so has bugged me … for that long. It has been far longer than that since I’ve read a comp.*.advocacy flamewar, so stumbling upon Mark Pilgrim’s post on dumping Apple and its heated responses made me feel good and nostalgic.

Tim Bray (who does not b.s.) answers Time to Switch? affirmatively.

I hope this is the visible beginning of a trend and that in a few years most people who ought to know better will have replaced laptops sporting an annoying glowing corporate logo with ones sporting Ubuntu stickers.