Post Programming

creativecommons.opportunities

Monday, March 19th, 2007

If working on a new project at a startup-like nonprofit in San Francisco involving [open] education, [copyright] law, and [semantic web] technology sounds appealing, perhaps you should look into applying for Executive Director of CC Learn. I could imagine an education, legal, or technology person with some expertise in one of those areas and much passion for the other two working out.

Student programmers, Creative Commons is participating in Google Summer of Code as a mentoring organization.

It is too late to apply for a summer technology or “free culture” internship, but keep CC in mind for next summer and (possibly) this fall.

Update 20070409: There are three open positions in addition to the CC Learn ED position above.

SXSW: JavaScript everywhere

Sunday, March 11th, 2007

The Future of JavaScript ran through almost all of the new features in JavaScript 1.7, all of which are nice for programmers but probably won’t be widely used on the public web for a long time (until use of browsers that don’t support JavaScript 1.7 is negligible).
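For a taste of what those features look like, here is a minimal sketch of the headline items (generators, destructuring assignment, block-scoped let), written in the standardized syntax these ideas eventually took in ECMAScript 2015 rather than Mozilla's original JavaScript 1.7 spellings (e.g., JS 1.7 generators were ordinary functions containing yield):

    // Generators: lazily produce values on demand.
    function* fibonacci() {
      let [a, b] = [0, 1];           // destructuring assignment
      while (true) {
        yield a;
        [a, b] = [b, a + b];
      }
    }

    // let is scoped to the block, unlike var.
    const fib = fibonacci();
    for (let i = 0; i < 5; i++) {
      console.log(fib.next().value); // 0 1 1 2 3
    }

    // Destructuring in function parameters.
    function describe({ name, version }) {
      return name + " " + version;
    }
    console.log(describe({ name: "JavaScript", version: 1.7 }));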

However, JavaScript is being used in lots of places apart from web browsers now, and giving JS the features programmers expect positions it well to be the default glue and application language of the next decade on the web, the desktop, and the server. Where is server side JavaScript? from July 2005 continues to be one of the most viewed posts on this blog. Many people are thinking along these lines, including the first Q&A for this session.

SXSW: Why XSLT is Hello World

Saturday, March 10th, 2007

Arrived about half an hour into Why XSLT is sexy to see “Hello World” on the projector. What the heck were they talking about for the previous half hour? Left.

I have long wondered about using XSLT as an (untrusted) code distribution mechanism (e.g., acquire and run XSLT as an alternative to invoking a web service), but I suppose performance and functionality constraints make it a really niche case.
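For concreteness, a minimal browser sketch of that idea, using the standard DOMParser and XSLTProcessor APIs (the URLs passed in are hypothetical placeholders):

    // Instead of invoking a web service, fetch an (untrusted) XSLT
    // stylesheet and run it locally against your data.
    async function runRemoteTransform(dataUrl, xsltUrl) {
      const parse = (text) =>
        new DOMParser().parseFromString(text, "application/xml");

      const [dataDoc, xsltDoc] = await Promise.all([
        fetch(dataUrl).then((r) => r.text()).then(parse),
        fetch(xsltUrl).then((r) => r.text()).then(parse),
      ]);

      // XSLTProcessor only transforms the document it is handed, which
      // is what makes shipping XSLT around as "code" plausibly safe.
      const processor = new XSLTProcessor();
      processor.importStylesheet(xsltDoc);
      return processor.transformToDocument(dataDoc);
    }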

GPL Java

Monday, November 13th, 2006

Sun announced today that it is releasing all of the critical pieces of the Java platform under the GPL. This is fantastic news, as a huge number of important and exciting projects are built on the Java platform and now they can be completely free as in free software. Read Tim Bray on the announcement and lots more blog commentary via Tailrank.

This should have happened years ago, but it happened sooner than I expected as of yesterday. I set up a play money prediction market on Inkling (the first of two) asking whether Java would be open sourced by the end of this year. The price slowly declined from 60 in May to 20 in late October, then spiked to 70, with a last trade at 81.76 this morning.

I judged the contract at 100, but probably shouldn’t have — much of the code won’t be released until early next year. Oops. Good thing Inkling markets are play money and zero oversight, or Chris F. Masse would rightly castigate me.

Copyright turns us into technology idiots

Saturday, October 28th, 2006

Or do copyright enforcement technologies attract people who would be kooks anyway?

Obvious case in point: DRM.

Now this from Paul Hoffert, apparently associated with “Noank Media”, commenting on Rob Kaye’s blog:

The Noank counting system is unique. We count usage by ALL players. Players can be time-based, such as iTunes, Windows Media, open source, our own Noank player, or your own favorite. They can be Microsoft Word, Acrobat Reader, Photoshop, or any other application program. The Noank client reports consumption of all content within our catalog on Windows, Mac, Unix, or recent cell phone devices.

Rob’s response is too polite:

This is nothing but empty hand-waving, I’m sorry. If you were to hire me to implement this system, I would have to politely tell you that this is impossible. I could not code such a thing and I have over a decade of client application programming experience. Please do elaborate on how you’re going to do this. If you’ve solved this I assume that you’ve already filed for some patents, right? What are your patent application numbers? I’d like to look up these exciting details — this is got to be amazing stuff you’re working on!

To which Hoffert responds:

Our tracking system is operational now and we are scaling it for large numbers of users.

Uh huh.

Voluntary collective licensing may have a role to play but I’m afraid I’m going to have to completely write off “Noank Media” before they even have a website.

Copyright mania has the side effect of reducing perpetual motion research, who knew?

Addendum 20061031: Lucas Gonze writes that collective licensing will never happen. I think I buy his argument:

Users and businesses are moving away from filesharing networks and to the web, where DMCA safe harbor allows many disputes to be resolved peacefully. User-created content has become a substantial part of the media ecosystem over the last few years, and it doesn’t need collective licensing to exist.

Update 20071126: Noank does have a website now and a how it works page that leaves out lots of details but is not implausible. When more details are available I hope to post a retraction. Hoffert’s language was just too easy to make fun of, and that urge turned me into a technology idiot!

Wordcamp and wiki mania

Monday, August 7th, 2006

In lieu of attending maybe the hottest conference ever, I did a bit of wiki twiddling this weekend. I submitted a tiny patch (well, that was almost two weeks ago — time flies), upgraded a private MediaWiki installation from 1.2.4 to 1.6.8 and a public installation from 1.5.6 to 1.6.8, and worked on a small private extension, adding to some documentation before running into a problem.

1.2.4->1.6.8 was tedious (basically four successive major version upgrades) but trouble-free, as that installation has almost no customization. The 1.5.6->1.6.8 upgrade, although only a single upgrade, took a little fiddling to make a custom skin and permissions account for small changes in MediaWiki code (example). I’m not complaining — clean upgrades are hard and the MediaWiki developers have done a great job of making them relatively painless.

Saturday I attended part of WordCamp, a one day unconference for WordPress users. Up until the day before, the tentative schedule looked pretty interesting, but it seems lots of lusers signed up, so the final schedule didn’t have much meat for developers. Matt Mullenweg’s “State of the Word” and Q&A hit on clean upgrades of highly customized sites from several angles. Some ideas include better and better-documented plugin and skin APIs with more metadata and less coupling (e.g., widgets should help many common cases that previously required throwing junk in templates).

Beyond the purely practical, ease of customization and upgrade is important for openness.

Now listening to the Wikimania “Wikipedia and the Semantic Web” panel…

Constitutionally open services

Thursday, July 6th, 2006

Luis Villa provokes, in a good way:

Someone who I respect a lot told me at GUADEC ‘open source is doomed’. He believed that the small-ish apps we tend to do pretty well will migrate to the web, increasing the capital costs of delivering good software and giving next-gen proprietary companies like Google even greater advantages than current-gen proprietary companies like MS.

Furthermore:

Seeing so many of us using proprietary software for some of our most treasured possessions (our pictures, in flickr) has bugged me deeply this week.

These things have long bugged me, too.

I think Villa has even understated the advantage of web applications — no mention of security — and overstated the advantage of desktop applications, which amounts to low latency, high bandwidth data transfer — let’s see, video, including video editing, is the hottest thing on the web. Low quality video, but still. The two things client applications still excel at are very high bandwidth, very low latency data input and output, such as rendering web pages as pixels. :)

There are many things that can be done to make client development and deployment easier, more secure, more web-like, and client applications more collaboration-enabled. Fortunately they’ve all been tried before by earlier efforts of varying relevance, so there’s much to learn from, yet the field is wide open. Somehow it seems I’d be remiss not to mention one more, so there it is. Web applications on the client are also a possibility, though typically they only address ease of development and not manageability at all.

The ascendancy of web applications does not make the desktop unimportant any more than GUIs made filesystems unimportant. Another layer has been added to the stack, but I am still very happy to see any move of lower layers in the direction of freedom.

My ideal application would be available locally and over the network (usually that means on the web), but I’ll prefer the latter if I have to choose, and I can’t think of many applications that don’t require this choice (fortunately there is at least one of them, or close enough).

So what can be done to make the web-application-dominated future open source in spirit, for lack of a better term?

First, web applications should be super easy to manage (install, upgrade, customize, secure, back up) so that running your own is a real option. Applications like WordPress and MediaWiki have made large strides, especially in the installation department, but still require a lot of work and knowledge to run effectively.

There are some applications that centralization makes tractable, or at least easier and better, e.g., web scale search and social aggregation — which basically come down to high bandwidth, low latency data transfer. Various P2P technologies (much to learn from, field wide open) can help somewhat, but the pull of centralization is very strong.

In cases where one accepts a centralized web application, should one demand that the application be somehow constitutionally open? Some possible criteria:

  • All source code for the running service should be published under an open source license and developer source control available for public viewing.
  • All private data available for on-demand export in standard formats.
  • All collaboratively created data available under an open license (e.g., one from Creative Commons), again in standard formats.
  • In some cases, I am not sure how rare, the final mission of the organization running the service should be to provide the service rather than to make a financial profit, i.e., beholden to users and volunteers, not investors and employees. Maybe. Would I be less sanguine about the long term prospects of Wikipedia if it were for-profit? I don’t know of evidence for or against this feeling.

Consider all of this ignorant speculation. Yes, I’m just angling for more freedom lunches.

Long tail of metadata

Monday, May 29th, 2006

Ben Adida notes that people are writing about RDFa, which is great, and envisioning conflict with microformats, which is not. As Ben says:

Microformats are useful for expressing a few, common, well-defined vocabularies. RDFa is useful for letting publishers mix and match any vocabularies they choose. Both are useful.

In other words, RDFa is a technology for the long tail of metadata.

Evan Prodromou thinks the future is bleak without cooperation. I like his proposed way forward (strikeout added for obvious reasons):

  1. RDFa gets acknowledged and embraced by microformats.org as the future of semantic-data-in-XHTML
  2. The RDFa group makes an effort to encompass existing microformats with a minimum of changes
  3. microformats.org leaders join in on the RDFa authorship process
  4. microformats.org becomes a focus for developing real-world RDFa vocabularies

I see little chance of points one and three occurring. However, I don’t see this as a particularly bad thing. Point two will occur almost by default: the simplest and most widely deployed microformats (e.g., rel-tag, rel-nofollow, and rel-license) are also valid RDFa — the predicate (e.g., tag, nofollow, license) appearing in the default namespace to an RDFa application. More complex microformats may be handled by hGRDDL, which is no big deal, as a microformat-aware application needs to parse each microformat it cares about anyway. From an RDF perspective any well-crafted metadata is a plus (and the microformats group does very careful work), as RDF’s killer app is integrating heterogeneous data sources.
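To illustrate the overlap, here is a toy extractor; treating each rel value on a link as a predicate in the default vocabulary reads the same markup either as an elemental microformat or as RDFa (an illustrative sketch, not a conforming RDFa parser):

    // Treat each <a rel=... href=...> as a (page, predicate, object) triple.
    function extractElementalTriples(doc) {
      const triples = [];
      for (const a of doc.querySelectorAll("a[rel][href]")) {
        for (const predicate of a.rel.trim().split(/\s+/)) {
          triples.push({
            subject: doc.location ? doc.location.href : "",
            predicate,           // e.g. "tag", "nofollow", "license"
            object: a.href,
          });
        }
      }
      return triples;
    }

    // <a rel="license" href="http://creativecommons.org/licenses/by/2.5/">
    // yields { subject: <this page>, predicate: "license", object: <license URL> }
    // under either reading.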

From a microformats perspective RDFa might well be ignored. While transformation of any microformat to RDF is relatively straightforward, transformation of RDF (which is a model, not a format) to microformats is nonsensical (well, I suppose the endpoint of such a transformation could be RDFa, though I’m not sure what the point would be). Microformats, probably wisely, is not reinventing RDF (as many do, usually badly).

So why would RDFa be of interest to developers? In a word, laziness. There is no process to follow for developing an RDF vocabulary (ironic); you can freely reuse existing vocabularies and tools, avoid writing your own parsers, and trust that really smart people are figuring out the hard stuff for you (I believe the formal background of the Semantic Web is a long-term win). Or you might just want to, as Ben says, “express metadata about other documents (embedded images)”, which is trivial for RDF as images have URIs.

Addendum 20060601: The “simplest” microformats mentioned above have a name: elemental microformats.

Wikiforms

Thursday, May 11th, 2006

Brad Templeton writes about overly structured forms, one of my top UI peeves. The inability to copy and paste an IP address into a form with four separate fields has annoyed me, oh, probably hundreds of times. Date widgets annoy me slightly less. Listen to Brad when designing your next form, on the web or off.
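The fix is old and simple; a minimal sketch of a single paste-friendly field with validation after the fact, rather than four constrained boxes:

    // Accept an IPv4 address from one freeform text field.
    function parseIPv4(text) {
      const parts = text.trim().split(".");
      if (parts.length !== 4) return null;
      if (!parts.every((p) => /^\d{1,3}$/.test(p))) return null;
      const octets = parts.map(Number);
      return octets.every((n) => n <= 255) ? octets : null;
    }

    console.log(parseIPv4("192.168.0.1")); // [192, 168, 0, 1]
    console.log(parseIPv4("192.168.0"));   // null: validate, don't constrain typing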

The opposite of overly structured forms would be a freeform editing widget populated with unconstrained fields, blank or filled with example data, or even a completely empty editing widget with suggested structure documented next to the widget — a wiki editing form. This isn’t as strange as it seems — many forms are distributed as word processor or plain text documents that recipients are expected to fill in by editing directly and return.

I don’t think “wikiforms” are appropriate for many of the cases where structured forms are used, but it’s useful to think of opposites, and I imagine the niche for wikiforms and hybrids could increase — think a “rich” wiki editor with autocompletion; I haven’t really thought this through, but I imagine it is déjà vu for anyone who has used mainframe-style data entry applications.

Ironically, the current number one use of the term wiki forms denotes adding structured forms to wikis!

On a marginally related note the Semantic MediaWiki appears to be making good progress.

Lazyweb: guess source and target languages for translation

Monday, April 24th, 2006

I use AltaVista Babel Fish and Google Translate fairly often and am annoyed that both require me to specify both the source (text to be translated) and destination languages. The former could be guessed from the input text and the latter trivially obtained from browser settings (Google at least defaults to an English destination at google.com and Spanish at google.es).

Lazyweb, failing AltaVista and Google fixing this, someone should write a script that does.

Comments at this article point to various language detection techniques.
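For flavor, a toy browser sketch of the simplest such technique, scoring the input against per-language lists of very frequent words, with the destination taken from the browser’s own setting; the word lists here are tiny illustrative samples, not a real model:

    const PROFILES = {
      en: ["the", "and", "of", "to", "is"],
      es: ["el", "la", "de", "que", "es"],
      fr: ["le", "la", "de", "et", "est"],
    };

    // Guess the source language by counting common-word hits.
    function guessSourceLanguage(text) {
      const words = text.toLowerCase().split(/\W+/);
      let best = null, bestScore = 0;
      for (const [lang, common] of Object.entries(PROFILES)) {
        const score = words.filter((w) => common.includes(w)).length;
        if (score > bestScore) [best, bestScore] = [lang, score];
      }
      return best; // null if nothing matched
    }

    // Destination language from browser settings, e.g. "en-US" -> "en".
    const destination = navigator.language.split("-")[0];
    console.log(guessSourceLanguage("la casa es de mi madre"), "->", destination);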