Post Open Source

CC6+

Wednesday, December 17th, 2008

December 16 marked six years since the release of the first Creative Commons licenses. Most of the celebrations around the world have already taken place or are going on right now, though San Francisco’s is on December 18. (For CC history before 2002-12-16, see video of a panel recorded a few days ago featuring two of CC’s founding board members and first executive director or read the book Viral Spiral, available early next year, though my favorite is this email.)

I’ve worked for CC since April 2003, though as I say in the header of this blog, I don’t represent any organization here. However, I will use this space to ask for your support of my and others’ work at CC. We’re nearing the end of our fourth annual fall public fundraising campaign and are about halfway to our goal of raising US$500,000. We really need your support — past campaigns have closed out with large corporate contributions, though one has to be less optimistic about those given the financial meltdown and widespread cutbacks. Over the longer term we need to steadily decrease reliance on large grants from visionary foundations, which still contribute the majority of our funding.

Sadly I have nothing to satisfy a futarchist donor, but take my sticking around as a small indicator that investing in Creative Commons is a highly leveraged way to create a good future. A few concrete examples follow.

RDFa became a W3C Recommendation on October 14, the culmination of a 4+ year effort to integrate the Semantic Web and the Web that everyone uses. There were several important contributors, but I’m certain that it would have taken much longer (possibly never happened) or produced a much less useful result without CC’s leadership (our motivation was first to describe CC-licensed works on the web, but we’re also now using RDFa as infrastructure for building decoupled web applications and as part of a strategy to make all scientific research available and queryable as a giant database). For a pop version (barely mentioning any specific technology) of why making the web semantic is significant, watch Kevin Kelly on the next 5,000 days of the web.

Wikipedia seems to be on a path to migrating to the CC BY-SA license, clearing up a major legal interoperability problem resulting from Wikipedia starting before CC launched, when there was no really appropriate license for the project. The GNU FDL, which is now Wikipedia’s (and most other Wikimedia Foundation projects’) primary license, and CC BY-SA are both copyleft licenses (altered works must be published under the same copyleft license, except when not restricted by copyright), and incompatible widely used copyleft licenses are kryptonite to the efficacy of copyleft. If this migration happens, it will increase the impact of Wikipedia, Creative Commons, free culture, and the larger movement for free-as-in-freedom on the world and on each other, all for the good. While this has basically been a six-year effort on the part of CC, FSF, and the Wikimedia Foundation, there’s a good chance that without CC, a worse (fragmented, at least) copyleft landscape for creative works would result. Perhaps not so coincidentally, I like to point out that since CC launched, there has been negative license proliferation in the creative works space, the opposite of the case in the software world.

Retroactive copyright extension cripples the public domain, but there are relatively unexplored options for increasing the effective size of the public domain — instruments to increase certainty and findability of works in the public domain, to enable works not in the public domain to be, effectively, as close to it as possible, and to keep facts in the public domain. CC is pursuing all three projects, worldwide. I don’t think any other organization is as well placed to tackle all of these thorny problems comprehensively. The public domain is not only tremendously important for culture and science, but the only aesthetically pleasing concept in the realm of intellectual protectionism (because it isn’t protectionism) — sorry, copyleft and other public licensing concepts are just necessary hacks. (I already said I’m giving my opinion here, right?)

CC is doing much more, but the above are a few examples where it is fairly easy to see its delta. CC’s Science Commons and ccLearn divisions provide several more.

I would see CC as a wild success if all it ever accomplished was to provide a counterexample to be used by those who fight against efforts to cripple digital technologies in the interest of protecting ice delivery jobs, because such crippling harms science and education (against these massive drivers of human improvement, it’s hard to care about marginal cultural production at all), but I think we’re on the way to accomplishing much more, which is rather amazing.

More abstractly, I think the role of creating “commons” (what CC does and free/open source software are examples) in nudging the future in a good direction (both discouraging bad outcomes and encouraging good ones) is horribly underappreciated. There are a bunch of angles to explore this from, a few of which I’ve sketched.

While CC has some pretty compelling and visible accomplishments, my guess is that most of the direct benefits of its projects (legal, technical, and otherwise) may be thought of in terms of lowering transaction costs. My guess is those benefits are huge, but almost never perceived. So it would be smart and good to engage in a visible transaction — contribute to CC’s annual fundraising campaign.

October and beyond

Thursday, October 9th, 2008

Friday (tomorrow) I’m attending the first Seasteading conference in Burlingame. I blogged about seasteading four years ago. Although the originators of the seastead idea are politically motivated, I’d assign a very low probability to them becoming significantly more politically impactful than some of their inspirations (e.g., micronations and offshore pirate radio, i.e., very marginal). To begin with, the seasteading concept has huge engineering and business hurdles to clear before it could make any impact whatsoever. If the efforts of would-be seasteaders lead to the creation of lots more wealth (or even just a new weird culture), any marginal political impact is just gravy. In other words, seasteading is another example of political desires sublimated into useful creation. That’s a very good thing, and I expect the conference to be interesting and fun.

Saturday I’ll be at the Students for Free Culture Conference in Berkeley. You don’t have to be a student to attend. Free culture is a somewhat amorphous concept, but I think an important one. I suspect debates about what free culture means and how to develop and exploit it will be evident at the conference. Some of those are in part about the extent to which political desires should be sublimated into useful creation (I should expand on that in a future post).

October 20-26 I’ll participate in three free culture related conferences back to back.

First in Amsterdam for the 3rd COMMUNIA Workshop (Marking the public domain: relinquishment & certification), where I’ll be helping talk about some of Creative Commons’ (I work for, do not represent here, etc.) public domain and related initiatives.

Second in Stockholm for the Nordic Cultural Commons Conference, where I’ll give a talk on free culture and the future of cultural production.

Finally in Gothenburg for FSCONS, where I’ll give an updated version of a talk on where free culture stands relative to free software.

In December at MIT, Creative Commons will hold its second technology summit. Nathan Yergler and colleagues have been making the semantic rubber hit the web road pretty hard lately, and will have lots to show. If you’re doing interesting [S|s]emantic Web or open content related development (even better, both), take a look at the CFP.

More than likely I’ll identicate rather than blog all of these.

25 years of GNU

Tuesday, September 2nd, 2008

The GNU project turns 25 on September 27. Not much to add beyond what I wrote on the Creative Commons blog. Watch the Freedom Fry video.

I do have some meta commentary…

The video, featuring British humorist Stephen Fry, is very British. That is, Americans might wonder if there is any humor in it at all. I’m fine with that.

It’s great that the video is posted in Ogg Theora format and works seamlessly in my browser via Cortado, and download links are provided. However, HTML to copy & paste for direct inclusion in a blog post or other web page should also be provided, as is typical for sharing video. I haven’t tried making such yet, though I should and might.

Finally, there’s a hidden jab at some in the free software movement in my CC blog post:

One of the movements and projects directly inspired by GNU is Creative Commons. We’re still learning from the free software movement. On a practical level, all servers run by Creative Commons are powered by GNU/Linux and all of the software we develop is free software.

So please join us in wishing the GNU project a happy 25th birthday by spreading a happy birthday video from comedian Stephen Fry. The video, Freedom Fry, is released under a CC Attribution-NoDerivatives license.

Emphasis added. The free culture/open content world lags the free software/open source world in many respects, one of those being an understanding of what freedoms are necessary. Some from the free software world have pushed Creative Commons to recognize that in many cases culture requires freedoms equivalent to those expected for free software/open source (that’s the first bolded link above), while some in the free software world (not necessarily the exact same people, but at least people associated with the same organizations) publish documents and videos under terms that do not grant those same freedoms (that’s the second bolded link above).

The Free Software Foundation has been publishing documents under terms roughly equivalent to CC BY-ND since probably before CC existed. Currently the footer of fsf.org says:

Verbatim copying and distribution of this entire article are permitted worldwide, without royalty, in any medium, provided this notice is preserved.

Does the FSF really want to reserve the right to use copyright to censor people who might publish derived versions of their texts? They probably are concerned that someone will alter their message so as to be misleading. Perhaps there was some rationale for this pre-web and pre-CC, but now there is not:

  • People can easily see canonical versions by going to fsf.org. (DNS should obsolete much of trademark as well, but that’s for another post.)
  • CC licenses that permit derivatives include the following (see 3(b), 4(a), 4(b), and 4(c) for the actual language):
    • Licensor can specify a link to provide for attribution
    • Derivative works must state how they are altered
    • Licensor can demand that credit be removed from the derivative
    • Unfortunately, in some jurisdictions licensor could press “moral rights” to censor a derivative considered derogatory

So one can pre-clear the right to make adaptations and retain some legal mechanisms to club creators of adaptations (ordered from best practice to distasteful, according to me).

The Software Freedom Law Center does worse, publishing its website (also, see the SFLC post on 25 years of GNU) under CC BY-NC-ND. Do they really want to prohibit commercial use? SFLC (a super excellent organization, as is the FSF!) is dedicated to software freedom, but still it seems silly for them to publish non-software works under terms antithetical to the spirit of free software.

On a brighter note, the FSF is publishing promotional images for Freedom Fry under a free as in free software as applied to cultural works license (CC BY-SA), one of which has already been taken under those terms for use on Stephen Fry’s Wikipedia article. Ah, the power of free cultural works. :)

Do wish GNU a happy 25th birthday — watch and spread the video!

Google Chrome Comix PDF

Monday, September 1st, 2008

Google Chrome looks really interesting. Given that the web is the interesting platform, more web client innovation is welcome, especially in open source web clients (but let’s not forget the servers).

The way Google apparently has announced the project is also interesting. As of this writing www.google.com/chrome is not live, but printed comics drawn by Scott McCloud describing the project have been mailed to journalists.

Philipp Lenssen scanned the comic book and posted it as a series of 38 images, each with its own page. Google had the foresight to give permission for this in advance by releasing it under a Creative Commons Attribution-NonCommercial-NoDerivatives license, noted on the comic book’s back cover.

This is one rare case in which I find reading a PDF easier than web pages (page down has lower latency and requires less movement than clicking ‘next page’) so using sam2p and pdftk I made a PDF version of the comic book.
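
In case anyone wants to repeat the exercise, the commands run roughly as follows. This is only a sketch; the scan filenames are hypothetical and assume zero-padded numbering so that the shell wildcards keep the pages in order:

$ # convert each scanned image to a single-page PDF
$ for f in scan-*.png; do sam2p "$f" "${f%.png}.pdf"; done
$ # concatenate the pages into one document
$ pdftk scan-*.pdf cat output chrome-comic.pdf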

Note that although Creative Commons licenses containing the ‘No Derivatives’ term do not allow altering the licensed work, they do allow moving the otherwise unaltered work to a new format. (Ideally Google would have released the work under a more permissive license, but we’ll take what we can get.) Lenssen’s scanning and my PDFing are examples of such format shifting.

HOWTO deploy and upgrade WordPress or any web application

Sunday, August 31st, 2008

Recently Nathan Yergler posted what ought to be the preferred way to install and upgrade WordPress:

First, install WordPress from a Subversion checkout; do:

$ svn co http://svn.automattic.com/wordpress/tags/2.6/

instead of downloading the .zip or .tar.gz file. Configure as directed.

Then, when a new version is available, log into your webhost and run:

$ svn switch http://svn.automattic.com/wordpress/tags/2.6.1/

from your install directory.

I’ve been doing this for ages and consider installing from and overwriting with tarballs on an ongoing basis just short of insanity. Unfortunately the WordPress Subversion Access page says it is for developers only and doesn’t describe using svn switch to upgrade — indeed, what they describe (which will always obtain the very latest, usually unreleased, code checked in by WordPress developers) really is only appropriate for WordPress developers and testers. The MediaWiki site does a much better job but still doesn’t push revision control as the preferred deployment mechanism.
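
If you go the Subversion route, two more commands are handy before switching; a sketch, run from the install directory and using the tags URL quoted above:

$ svn info | grep ^URL    # shows which release tag this install currently tracks
$ svn ls http://svn.automattic.com/wordpress/tags/    # lists released versions available to switch to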

WordPress and MediaWiki were pioneers several years ago in making web application deployment and even upgrade painless relative to what came before (mostly by automating as much database configuration and schema migration as possible), but it may take a new generation to make deployment from revision control systems (preferably distributed) the norm. WikiTrust sets a good example [Update 20090622: Though not a good example of cool URIs, code instructions moved to new location without forwarding.]:

There are two ways of getting WikiTrust: via Git (recommended), or via tarballs (easier at first, but harder to keep up-to-date).

[…]

Our preferred way to distribute the code is via Git. Git enables you to easily get the latest version of the code, including any bug-fixes. Git also makes it easy for you to contribute to the project (see Contributing to WikiTrust for more information).
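
For comparison, a deploy-and-update cycle from Git looks roughly like the following (repository URL hypothetical, not WikiTrust’s actual address):

$ git clone git://example.org/wikitrust.git    # initial deployment
$ cd wikitrust
$ git pull    # later: pick up bug fixes and new releases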

As I’ve mentioned several times in passing, such practices will facilitate open web applications and other network services.

Free (and gratis) software vs. 25,000 cops

Sunday, July 20th, 2008

I’ve mentioned before that free software and its ilk decreases opportunity for taxation and regulation. Tim Lee wrote on the same topic a couple months ago. So I’m slightly pleased to see the argument endorsed by the Business Software Alliance, as told by Russell McOrmond (emphasis added to all quotes below):

The claims in the recent press release included the following:

Software piracy also has ripple effects in local communities.  The lost revenues to the wider group of software distributors and service providers ($11.4 billion) would have been enough to hire 54,000 high tech industry workers, while the lost state and local tax revenues ($1.7 billion) would have been enough to build 100 middle schools or 10,800 affordable housing units, or hire nearly 25,000 experienced police officers.

Of course the BSA’s concern for tax revenues is disingenuous, in a totally unsurprising fashion:

I guess any money not paid to BSA members just disappears and is not spent on other things in the economy that also involve jobs and taxes. In the real world we know that money not spent on software will more likely be spent on other things which are taxed the same — or even higher, given how BSA likes to also lobby to get software taxed at a lower rate than other products or services.

McOrmond also makes a slightly surprising claim about the BSA’s studies that I’d love to have verification of:

I know that people choosing legally lower cost software such as FLOSS are included as “piracy” in these studies. I guess my supporting FLOSS (both commercially and as an individual) could be blamed for their not being enough money to adequately equip the Canadian military in Afghanistan. I guess this makes me a terrorist sympathizer, by the BSA “logic”.

Regardless of whether FLOSS is counted as “piracy” in these studies, the logic that it doesn’t directly facilitate the collection of taxes to fund the military (or state schools, housing, or police) is pretty unassailable. Of course it could reduce costs and increase quality for each of these functions, as for anyone else.

Us Autonomo!

Monday, July 14th, 2008

Autonomo.us and the Franklin Street Statement on Freedom and Network Services launched today.

I’ve written about the subject of this group and statement a number of times on this blog, starting with Constitutionally Open Services two years ago. I think that post holds up pretty well. Here were my tentative recommendations:

So what can be done to make the web application dominated future open source in spirit, for lack of a better term?

First, web applications should be super easy to manage (install, upgrade, customize, secure, backup) so that running your own is a real option. Applications like WordPress and MediaWiki have made large strides, especially in the installation department, but still require a lot of work and knowledge to run effectively.

There are some applications that centralization makes tractable or at least easier and better, e.g., web scale search, social aggregation — which basically come down to high bandwidth, low latency data transfer. Various P2P technologies (much to learn from, field wide open) can help somewhat, but the pull of centralization is very strong.

In cases where one accepts a centralized web application, should one demand that application be somehow constitutionally open? Some possible criteria:

  • All source code for the running service should be published under an open source license and developer source control available for public viewing.
  • All private data available for on-demand export in standard formats.
  • All collaboratively created data available under an open license (e.g., one from Creative Commons), again in standard formats.
  • In some cases, I am not sure how rare, the final mission of the organization running the service should be to provide the service rather than to make a financial profit, i.e., beholden to users and volunteers, not investors and employees. Maybe. Would I be less sanguine about the long term prospects of Wikipedia if it were for-profit? I don’t know of evidence for or against this feeling.

Consider all of this ignorant speculation. Yes, I’m just angling for more freedom lunches.

I was honored to participate in a summit called by the Free Software Foundation to discuss these issues March of this year, along with far greater thinkers and doers. Autonomo.us and the Franklin Street Statement (named for the FSF’s office address) are the result of continued work among the summit participants, not yet endorsed by the FSF (nor by any other organization). Essentially everything I conjectured above made it into the statement (not due to me, they are fairly obvious points, at least as of 2008, and others made them long before) with the exception of making deployment easier, which is mundane, and service governance issues, which the group did discuss, but inconclusively.

There’s much more to say about this, but for now (and likely for some time, at the rate I write, though this activity did directly inspire me to propose speaking at an upcoming P2P industry summit, which I will do early next month–I’m also speaking tomorrow at BALUG and will mention autonomo.us briefly–see info on both engagements) I wanted to address two immediate and fairly obvious critiques.

Brian Rowe wrote:

“Where it is possible, they should use Free Software equivalents that run on their own computer.” This is near Luddite talk… It is almost always possible to use an app on your own comp, but it is so inefficient. Networked online apps are not inherently evil, should you back up your work offline, yes. Should you have alternative options and data portability, yes. You should fight to impove them. But you should not avoid them like the plauge.

The statement doesn’t advocate avoiding network services–see “Where it is possible”, and most of the statement concerns how network services can be free. However, it is easy to read the sentence Rowe quoted and see Luddism. I hope that to some it instead serves as a challenge, for:

  • Applications that run on your own computer can be networked, i.e., P2P.
  • Your own computer does not only include your laptop and home server, but any hardware you control, and I think that should often include virtual hardware.

Wes Felter wrote:

I see a lot about software licensing and not much about identity and privacy. I guess when all you have is the AGPL everything looks like a licensing problem.

True enough, but lots of people are working on identity and privacy. If the FSF doesn’t work on addressing the threats to freedom as in free software posed by network services, it isn’t clear who would. And I’d suggest that any success free software has in the network services world will have beneficial effects on identity and privacy for users–unless you think these are best served by identity silos and security through obscurity.

Finally, the FSF is an explicitly ideological organization (I believe mostly for the greater good), so the statement’s language (although the statement is not yet endorsed by the FSF, I believe all participants are probably FSF members, staff, or directors) reflects that. However, I suspect by far the most important work to be done to maintain software freedom is technical and pragmatic, for example writing P2P applications, making sharing modified source of network applications a natural part of deployment (greatly eased by the rise of distributed version control), and convincing users and service providers that it is in their interest to expect and provide free/open network services.
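
To make that concrete, here is a hedged sketch of how a service operator might publish the exact modified source they run, using Git (remote and repository names are hypothetical):

$ # one-time setup: add a public mirror as a remote
$ git remote add public-mirror git@example.org:myservice-public.git
$ # after making local changes to the deployed code
$ git commit -a -m "local deployment tweaks"
$ git push public-mirror master    # publish the source actually being run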

I suggest going on to read Evan Prodromou (the doer above) on autonomo.us and the Franklin Street Statement and Rufus Pollock on the Open Software Service Definition, which more or less says the same thing as the FSS in the language of a definition (and using the word open), coordinated to launch at the same time.

Control yourself, follow Evan

Wednesday, July 2nd, 2008

See Evan Prodromou’s post on launching identi.ca, good background reading on open services.

I love the name of Prodromou’s company, Control Yourself. Presumably it is a reference to discussions of user autonomy as a better frame than freedom or openness … for discussions of concerns addressed by free/open source software and its ilk.

You can follow Evan’s microblogging at identi.ca/evan.

I’ve only used Twitter for an ongoing joke that probably nobody gets, but for now I’ll be trying to honestly microblog at identi.ca/mlinksva.

Table selection, HSA, LugRadio, Music, Photographers, New Media

Monday, April 21st, 2008

A few observations and things learned from the last eight days.

Go to a page with a table, for example this one (sorry, semi-nsfw). Hold down the control key and select cells. How could I not have known about this!? Unfortunately, copy & paste seems to produce tab-separated values in a single row even when pasting from multiple rows in the HTML table (tried with Firefox and Epiphany). It’s still really useful when you only want to copy one column of a table, but if you want all of the columns, don’t hold down the control key; then row boundaries get newlines, as they should, rather than tabs. (Thanks Asheesh.)

I feel really stupid about this one. I’ve assumed that a (US) Health Savings Account was a spend-it-within-the-year-or-lose-your-contributions arrangement, but that’s what a Flexible Spending Account is (I have no predictable medical expenses, so such an account makes no sense for me). An HSA is an investment account much like an IRA, except you can spend from it without penalty upon incurring medical expenses rather than old age. You can only contribute to an HSA while enrolled in a high-deductible health insurance plan, which I’ll try to switch to next year. (Thanks Ahrash.)

I saw a few presentations at LugRadio Live USA, in addition to giving one. Miguel de Icaza’s talk (content roughly corresponding to this post) and Ian Murdock’s were both in part about software packaging. Taken together, they make a good case for open source facilitating cross-pollination of ideas and code across operating system platforms.

Aaron Bockover and Gabriel Burt did a presentation/demo on Banshee, showing off some cool track selection/playlist features and talking about more coming. I may have to try switching back to Banshee as my main audio player (from Rhythmbox, with occasional use of Songbird for web-heavy listening or checking on how the program is coming along). Banshee runs on Mono, and both are funded by Novell, which also (though I don’t know how their overall investment compares) has an .

John Buckman gave an entertaining talk on open source and open content (including the slide at right). My talk probably was not entertaining at all, but used the question ‘how far behind [free/open source software] is free/open culture?’ to string together selected observations about the field.

Benjamin Mako Hill did a presentation on Revealing Errors (meant both ways). I found myself wanting to be skeptical of the power of technical errors to expose political/power relationships, but I imagine the concept could use a little hype — there’s definitely something there. The talk made me more sensitive to errors in any case. For example, when I transferred funds from a money market account to checking to pay taxes, an email notice included this (emphasis in original):

Your confirmation number is 0.

Zero? Really? The transaction did go through.

Tuesday I attended the Media Web Meetup V: The Gulf Between NorCal and SoCal, is it so big?, the idea being (in this context pushed by Songbird founder Rob Lord; I presented at the first Media Web Meetup and have attended a few others) that in Northern California entrepreneurs are trying to build new services around music, nearly all stymied by protectionist copyright holders in Southern California. I really did not need to listen to yet another panel asking how the heck is the music recording distribution industry going to use technology to make money, but this was a pretty good one as those go. One of the panelists kept urging technologists to “fix [music] metadata” as if doing so were the key to enabling profit from digital music. I suppressed the urge to sound a skeptical note, as investing more in metadata is one of the least harmful things the industry might do. Not that I don’t think metadata is great or anything.

Thursday evening I was on a ‘Copyright 2.0’ panel put on by the American Society of Media Photographers Northern California. I thought my photo selection for my first slide was pretty clever. No, copyright expansion is not always good for the interests of professional photographers. The other panelists and the audience were actually more open minded (both meanings) than I expected, and certainly realistic. The photographer on the panel even stated the obvious (my paraphrase from memory): new technology has allowed lots of people to explore their photography talents who would otherwise have been unable to, and maybe some professional photographers just aren’t that good and should find other work. My main takeaway from the panel is that it is very difficult for an independent photographer to successfully pursue unauthorized users in court. With the occasional exception of one panelist, the others all strongly advised photographers to avoid going to court except as a last resort, and even then, first doing a rational calculation of what the effort is likely to cost and gain. The best advice was probably to try to turn unauthorized users into clients.

Friday evening I went to San Jose to be on a panel about New Media Artists and the Law. Unlike Thursday’s panel, this one was mostly about how to use and re-use rather than how to prevent use. This (and some nostalgia) made me miss living in Silicon Valley — I lived in Sunnyvale two years (2003-2005) and San Jose (2005-2006) before moving back to San Francisco. Nothing really new came up, but I did enjoy the enthusiasm of the other panelists and the audience (as I did the previous day).

Saturday I went to Ubuntu Restaurant in Napa, which apparently does vegetable cuisine but does not market itself as vegetarian. I think that’s a good idea. The food was pretty good.

I’ve been listening to Hazard Records 59 and 60: Calida Construccio by various and Unhazardous Songs by Xmarx. Lovely Hell (mp3) from the latter is rather poppy.

Red Hat’s awesome desktop Linux work

Thursday, April 17th, 2008

Red Hat on What’s Going On With Red Hat Desktop Systems? An Update (emphasis added):

we have no plans to create a traditional desktop product for the consumer market in the foreseeable future

Somehow Slashdot reads this as Red Hat Avoids Desktop Linux, Says Too Tough.

Obviously not true, as the Red Hat post goes on to say they have an enterprise desktop product, a community supported desktop distribution, and an upcoming desktop product for emerging markets.

More importantly:

Other desktop related projects where Red Hat has been the primary developer, or a major contributor, include:

  • X Revitalization effort (kernel modesetting, randr, dri2)
  • Screen size control panel
  • PolicyKit & ConsoleKit
  • Gnome (screensaver, gvfs/gio, GtkPrint, etc)
  • Liberation Fonts (with sponsorship of the Harfbuzz font shaper project)
  • Theora encoder improvements
  • Sponsorship of Ogg Ghost (successor to Ogg Vorbis)
  • NetworkManager and Network driver work – developed by Red Hat
  • OpenOffice.org 64-bit port
  • OpenOffice.org integration into the rest of GNOME: Port to cairo, dictionary unification, print/file dialogs
  • PulseAudio
  • Bluetooth file sharing
  • Ongoing hal maintenance and revitalization
  • DBus and DBus activation
  • Multiple power management activities:
    • Tickless kernel
    • Gnome power manager and the quirks list
    • Suspend/resume enhancements
    • Laptop backlight intensity autocontrol
    • www.lesswatts.org project support (such as Powertop)
    • CPUfreq
    • AMD PowerNow!
  • and of course, lots and lots of bugfixes!

Although I think 2001-2002 is the only time I’ve primarily used a Red Hat desktop (before that I used Slackware then Debian, since then Mandrake then Ubuntu), I’m certain that many of the things that make using a free software desktop (any distribution) so nice today have been built by engineers at Red Hat. Thanks!