Post Creative Commons

Scientology of sharing

Tuesday, October 17th, 2006

Last month I watched The Bridge, a scientology docudrama, after hearing about it on Boing Boing. It is a pretty well done and low key film, considering the nuttiness of scientology.

Copyright is one of the weapons scientology uses to hide the hilarious absurdity of its beliefs, so it is no surprise that The Bridge has been taken down (at least some of the copies) from YouTube, Google, and the Internet Archive.

I remember that it was published to the Archive under a Creative Commons Attribution-NonCommercial-NoDerivs license. Sadly http://www.archive.org/details/BrettHanoverTheBridge is not in the Wayback Machine nor WebCite, so I can’t demonstrate this. If I am correct, the filmmaker has no cause to stop non-commercial distribution, as CC licenses are irrevocable.

If you can’t find the film on the lightnet, fire up a filesharing client (I recommend ) and click on the link below to start your P2P search and download.

Scientology-The_Bridge.mp4

Play the web

Saturday, October 14th, 2006

I finally tried out Songbird (I noticed that it is now available for Linux and that the developers were throwing a party, which I attended). The killer feature is web integration. Browse (Songbird is built on the same platform as Firefox) to a page that links to music or video files or a podcast feed, and Songbird displays all available media and allows you to play, subscribe, or add it to your media library immediately.

It feels as if there’s no distinction between files on your computer and those on the web. In fact the only gripe I have is that once a file is added to your library from the web, there’s no facility for getting back to the web page you obtained the file from.
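For the curious, here is a toy sketch in Python of roughly what that kind of page scanning involves. It is not Songbird’s actual code, and it assumes media files are linked with plain <a href> tags; it also keeps the source page URL next to each media URL, which is exactly the bit I wish Songbird retained.

    # Toy sketch (not Songbird's code): fetch a page, scan it for links to
    # media files, and list them so they could be played or added to a library.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    MEDIA_EXTENSIONS = ('.mp3', '.ogg', '.m4a', '.mov', '.mp4')

    class MediaLinkParser(HTMLParser):
        def __init__(self, base_url):
            super().__init__()
            self.base_url = base_url
            self.media_links = []

        def handle_starttag(self, tag, attrs):
            if tag != 'a':
                return
            for name, value in attrs:
                if name == 'href' and value and value.lower().endswith(MEDIA_EXTENSIONS):
                    # Remember the page each media file was found on.
                    self.media_links.append((urljoin(self.base_url, value), self.base_url))

    def find_media(url):
        parser = MediaLinkParser(url)
        parser.feed(urlopen(url).read().decode('utf-8', errors='replace'))
        return parser.media_links

    if __name__ == '__main__':
        for media_url, source_page in find_media('http://ccmixter.org/'):
            print(media_url, '(found on ' + source_page + ')')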

Check out the , which does a good job of demonstrating Songbird’s web features (Songbird is also a good all-around media player).


Screenshot of Songbird 0.2rc3/Linux browsing ccMixter.

Nathan, I see a for Songbird in the future. :)

Community is the new IP

Tuesday, October 10th, 2006

I’ve been wanting to blog that phrase since reading the “Communities as the new IPR?” thread on the Free Software Business list. That thread lost coherence and died quickly, but I think the most important idea is hinted at by Susan Wu:

There are two elements of discussion here – a singular community, which is a unique entity; and the community constructs (procedure, policy, infrastructure, governance), which are more readily replicated.

Not said but obvious: a singular community is not easily copied.

Now Tim Lee writes about GooTube (emphasis added):

YouTube is an innovative company that secured several millions of dollars in venture capital and used it to create a billion-dollar company in less than a year. Yet as far as I know, strong IP rights have not been an important part of YouTube’s strategy. They don’t appear to have received any patents, and their software interface has been widely copied. Indeed, Google has been in the video-download business longer than YouTube, and their engineers could easily have replicated any YouTube functionality they felt was superior to Google’s own product.

Like all businesses, most of the value in technology startups lies in strong relationships among people, not from technology, as such. Technological change renders new technologies obsolete very quickly. But a brilliant team of engineers, visionary management, and a loyal base of users are assets that will pay dividends for years to come. That’s why Google was willing to pay a billion bucks for YouTube.

“Loyal base of users” does not do justice to the YouTube community. I was not aware of YouTube’s social features nor how critical they are until I read the NYT story on electric guitar performances of Pachelbel’s Canon being posted to YouTube (I commented on the story at the Creative Commons weblog). Some of these videos have been rated by tens of thousands of users and commented on by thousands. “Video responses” are a means for YouTube users to have a conversation solely through posting videos.

Google Video could have duplicated these social features trivially. I’m surprised but not stunned that Google thinks the YouTube community is worth in excess of $1.65 billion.

On a much smaller scale, the acquisition of Wikitravel and World66 earlier this year is an example of the value of hard-to-duplicate communities. The entire contents of these sites could be legally duplicated for commercial use, yet Internet Brands paid (unfortunately an undisclosed amount) to acquire them, presumably because copies on new sites with zero community would be worthless.

There’s lots more to say about community as a business strategy for less obvious cases than websites, but I don’t have the ability, time, and links to say it right now. The FSB thread above hints at this in the context of software development communities.

And of course community participants may want to consider what allowances they require from a community owner, e.g., open licenses, data, and formats so that at a minimum a participant can retrieve and republish elsewhere her contributions if the owner does a bad job.

Day against DRM

Tuesday, October 3rd, 2006

Today is Defective by Design’s "day against DRM." Defectivebydesign.org says:

There is no more important cause for electronic freedoms and privacy than the call for action to stop DRM from crippling our digital future. The time is now.

The first bit is hyperbole, but you could cover fighting DRM and several related causes by taking the opportunity to join the EFF or FSF.

I don’t have anything new to say about DRM, so see Digital Rent-a-Center Management from June.

Download DRM-free music.

Friends don’t let friends click spam

Thursday, September 7th, 2006

Doc Searls unfortunately decided the other day that offering his blog under a relatively restrictive Creative Commons NonCommercial license instead of placing its contents in the public domain is chemo for splogs (spam blogs). I doubt that, strongly. Spam bloggers don’t care about copyright. They’ll take “all rights reserved” material, that which only limits commercial use, and stuff in the public domain equally. Often they combine tiny snippets from many sources, probably triggering copyright for none of them.

A couple examples found while looking at people who had mentioned Searls’ post: all rights reserved material splogged, and a commenter here who says “My blog has been licensed with the CC BY-NC-SA 2.5 for a while now, and sploggers repost my content all the time.” A couple anecdotes prove nothing, but I’d be surprised to find that sploggers are, for example, using CC-enabled search to find content they can legally re-splog. I hope someone tries to figure out what characteristics make blog content more likely to be used in splogs and whether licensing is one of them. I’d get some satisfaction from either answer.

Though Searls’ license change was motivated by a desire “to come up with new forms of treatment. Ones that don’t just come from Google and Yahoo. Ones that come from us”, I do think blog spam is primarily the search engines’ problem to solve. Search results that don’t contain splogs are more valuable to searchers than spam-ridden results. Sites that cannot be found through search effectively don’t exist. That’s almost all there is to it.

Google in particular may have mixed incentives (they want people to click on their syndicated ads wherever the ads appear), but others don’t (Technorati, Microsoft, Ask, etc. — Yahoo! wishes it had Google’s mixed incentives). At least once, where spam content seriously impacted the quality of search results, Google seems to have solved the problem — at some point in the last year or so I stopped seeing Wikipedia content reposted with ads (an entirely legal practice) in Google search results.

What can people outside the search engines do to fight blog and other spam? Don’t click on it. It seems crazy, but clickfraud aside, real live idiots clicking on and even buying stuff via spam is what keeps spammers in business. Your uncle is probably buying pills from a spammer right now. Educate him.

On a broader scale, why isn’t the , or the blogger equivalent, running an educational campaign teaching people to avoid spam and malware? Some public figure should throw in “dag gammit, don’t click on spam” along with “don’t do drugs.” Ministers too.

Finally, if spam is so easy for (aware) humans to detect (I certainly have a second sense about it), why isn’t human-augmented computation being leveraged? Opportunities abound…

Google whenever

Sunday, September 3rd, 2006

For years I’ve heard speculation that Google is building a web archive. Now there are domain name purchases to fuel the speculation. The Internet Archive has been providing an invaluable service with the Wayback Machine and has set up mirrors in multiple jurisdictions, but recording the web is too important to rely on any single organization, no matter how good or robust. So I hope Google and others are maintaining web archives and will make them available to the public.

Via Tim Finin, who also notes an interesting paper about using article and user history to assign trust levels to Wikipedia article fragments and a Semantic Web archive.

Archives are important for establishing provenance in many situations, though one I’m particularly interested in is citing that a particular work was offered under a Creative Commons license at a particular time. This and other uses (e.g., citation in general, which is often of the form “http://example.com accessed 2005-03-10”, though who knows if a copy of the content as it existed on that date exists) would be enhanced if on-demand archiving were available. The Internet Archive does offer Archive-It.org, but this service is for institutional use and uses periodic crawls rather than immediate archiving of individual pages.
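To make “on-demand archiving” a bit more concrete, here is a toy sketch in Python. It is not how the Internet Archive or WebCite work internally; it just fetches a page right now, stores a timestamped local copy (the directory name is made up), and emits a citation of the “URL accessed date” form.

    # Toy sketch of on-demand archiving for citation purposes. Not how any real
    # archive works; it only illustrates the idea: snapshot now, cite the date.
    import hashlib
    import os
    from datetime import date
    from urllib.request import urlopen

    ARCHIVE_DIR = 'my_web_archive'  # hypothetical local archive directory

    def archive(url):
        content = urlopen(url).read()
        accessed = date.today().isoformat()
        # Name the copy by URL hash plus date so repeated snapshots coexist.
        name = hashlib.sha1(url.encode('utf-8')).hexdigest() + '-' + accessed + '.html'
        os.makedirs(ARCHIVE_DIR, exist_ok=True)
        path = os.path.join(ARCHIVE_DIR, name)
        with open(path, 'wb') as f:
            f.write(content)
        return '%s accessed %s (local copy: %s)' % (url, accessed, path)

    if __name__ == '__main__':
        print(archive('http://example.com/'))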

Update, 2 minutes later: I should read a bit more before posting: WebCite does exactly what I want. However, I hate that it uses opaque identifiers, and as such is nearly as evil as TinyURL.

LinuxWorld San Francisco

Monday, August 21st, 2006

Brief thoughts on last week’s LinuxWorld Conference and Expo San Francisco.

Lawrence Lessig’s opening keynote pleased the crowd and me. A few points of interest:

  • Free speech is a strong aspect of free culture; Lessig at least implicitly pushed for a liberal interpretation of fair use, saying that the ability to understand, reinterpret, and remake video and other multimedia is “the new literacy” and important to the flourishing of democracy.
  • The “read/write Internet”, if allowed to flourish, is a much bigger market than the “read only Internet.”
  • Support free standards and free software for media, including Ogg and .
  • In 1995 only crazies thought it possible to build a viable free software operating system (exaggeration from this writer’s perspective); now only crazies think wireless can solve the last mile competition problem. Go build free wireless networks and prove the telcos and pro-regulation lawyers (including the speaker) wrong.
  • One of the silly video mashups Lessig played was Jesus Will Survive, featuring an adult Jesus in diapers hit by a bus. A few people left the auditorium at this point.

I’ve at least visited the exhibition space of almost every LWCE SF (the first one, actually in San Jose, was the most fun — Linus was a rock star and revolution was in the air). This year’s seemed bigger and more diverse, with most vendors pushing business “solutions” as opposed to hardware.

By far the most interesting exhibition booth to me was Cleversafe, an open source dispersed storage project that announced a Linux filesystem interface at the conference and was written up in today’s New York Times and Slashdot. I’ve been waiting for something like this for a long time, particularly since Allmydata is not open source and does not support Linux.

Also, Creative Commons won a silly “Best Open Source Solution” show award.

Addendum 20080422: If you’re arriving from an unhinged RedState blog post, see Lessig’s response.

Wordcamp and wiki mania

Monday, August 7th, 2006

In lieu of attending maybe the hottest conference ever I did a bit of wiki twiddling this weekend. I submitted a tiny patch (well that was almost two weeks ago — time flies), upgraded a private MediaWiki installation from 1.2.4 to 1.6.8 and a public installation from 1.5.6 to 1.6.8 and worked on a small private extension, adding to some documentation before running into a problem.

1.2.4->1.6.8 was tedious (basically four successive major version upgrades) but trouble-free, as that installation has almost no customization. The 1.5.6->1.6.8 upgrade, although only a single upgrade, took a little fiddling to make a custom skin and permissions account for small changes in MediaWiki code (example). I’m not complaining — clean upgrades are hard and the MediaWiki developers have done a great job of making them relatively painless.

Saturday I attended part of WordCamp, a one day unconference for WordPress users. Up until the day before, the tentative schedule looked pretty interesting, but it seems lots of lusers signed up, so the final schedule didn’t have much meat for developers. Matt Mullenweg’s “State of the Word” and Q&A hit on clean upgrade of highly customized sites from several angles. Some ideas include better and better documented plugin and skin APIs with more metadata and less coupling (e.g., widgets should help many common cases that previously required throwing junk in templates).

Beyond the purely practical, ease of customization and upgrade is important for openness.

Now listening to the Wikimania Wikipedia and the Semantic Web panel…

Pig assembler

Friday, July 21st, 2006

The story of The Pig and the Box touches on many near and dear themes:

  • The children’s fable is about DRM and digital copying, without mentioning either.
  • The author is raising money through Fundable, pledging to release the work under a more liberal license if $2000 is raised.
  • The author was dissuaded from using the sampling license (a very narrow peeve of mine, please ignore).
  • I don’t know if the author intended it, but anyone inclined to science fiction or nanotech will see a cartoon assembler.
  • The last page of the story is Hansonian.

Read it.

This was dugg and Boing Boing’d though I’m slow and only noticed on Crosbie Fitch‘s low-volume blog. None of the many commentators noted the sf/nano/upload angle as far as I can tell.

Constitutionally open services

Thursday, July 6th, 2006

Luis Villa provokes, in a good way:

Someone who I respect a lot told me at GUADEC ‘open source is doomed’. He believed that the small-ish apps we tend to do pretty well will migrate to the web, increasing the capital costs of delivering good software and giving next-gen proprietary companies like Google even greater advantages than current-gen proprietary companies like MS.

Furthermore:

Seeing so many of us using proprietary software for some of our most treasured possessions (our pictures, in flickr) has bugged me deeply this week.

These things have long bugged me, too.

I think Villa has even understated the advantage of web applications — no mention of security — and overstated the advantage of desktop applications, which amounts to low latency, high bandwidth data transfer — let’s see, video, including video editing, is the hottest thing on the web. Low quality video, but still. The two things client applications still excel at are very high bandwidth, very low latency data input and output, such as rendering web pages as pixels. :)

There are many things that can be done to make client development and deployment easier, more secure, more web-like and client applications more collaboration-enabled. Fortunately they’ve all been tried before (e.g., , , , others of varying relevance), so there’s much to learn from, yet the field is wide open. Somehow it seems I’d be remiss to not mention , so there it is. Web applications on the client are also a possibility, though they typically only address ease of development and not manageability at all.

The ascendancy of web applications does not make the desktop unimportant any more than GUIs made filesystems unimportant. Another layer has been added to the stack, but I am still very happy to see any move of lower layers in the direction of freedom.

My ideal application would be available locally and over the network (usually that means on the web), but I’ll prefer the latter if I have to choose, and I can’t think of many applications that don’t require this choice (fortunately is one of them, or close enough).

So what can be done to make the web-application-dominated future open source in spirit, for lack of a better term?

First, web applications should be super easy to manage (install, upgrade, customize, secure, backup) so that running your own is a real option. Applications like and have made large strides, especially in the installation department, but still require a lot of work and knowledge to run effectively.

There are some applications that centralization makes tractable or at least easier and better, e.g., web scale search and social aggregation — which basically come down to high bandwidth, low latency data transfer. Various P2P technologies (much to learn from, field wide open) can help somewhat, but the pull of centralization is very strong.

In cases where one accepts a centralized web application, should one demand that the application be somehow constitutionally open? Some possible criteria:

  • All source code for the running service should be published under an open source license and developer source control available for public viewing.
  • All private data available for on-demand export in standard formats (a minimal sketch of what this could look like follows this list).
  • All collaboratively created data available under an open license (e.g., one from Creative Commons), again in standard formats.
  • In some cases, I am not sure how rare, the final mission of the organization running the service should be to provide the service rather than to make a financial profit, i.e., it should be beholden to users and volunteers, not investors and employees. Maybe. Would I be less sanguine about the long term prospects of Wikipedia if it were for-profit? I don’t know of evidence for or against this feeling.
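To make the export criterion a bit more concrete, here is a minimal sketch in Python of what “on-demand export in standard formats” could look like from a user’s point of view; the data store, fields, and function are entirely hypothetical.

    # Hypothetical sketch of on-demand data export: hand a user everything the
    # service holds about her, in a standard format (JSON), so she can leave
    # and republish elsewhere. All names and fields are made up.
    import json

    # Stand-in for the service's data store.
    USER_DATA = {
        'alice': {
            'photos': [{'title': 'Sunset', 'license': 'CC BY-SA 2.5'}],
            'comments': ['Nice shot!'],
        },
    }

    def export_user_data(username):
        """Return everything the service holds for this user as JSON."""
        return json.dumps({'user': username, 'data': USER_DATA.get(username, {})}, indent=2)

    if __name__ == '__main__':
        print(export_user_data('alice'))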

Consider all of this ignorant speculation. Yes, I’m just angling for more freedom lunches.