Post Computers

Free software needs P2P

Friday, July 28th, 2006

Luis Villa on my constitutionally open services post:

It needs a catchier name, but his thinking is dead on- we almost definitely need a server/service-oriented list of freedoms which complement and extend the traditional FSF Four Freedoms and help us think more clearly about what services are and aren’t good to use.

I wasn’t attempting to invent a name, but Villa is right about my aim — I decided not to mention the four freedoms because I felt my thinking was too muddled to be dignified with such a mention.

Kragen Sitaker doesn’t bother with catchy names in his just-posted draft essay The equivalent of free software for online services. I highly recommend reading the entire essay, which is as incisive as it is historically informed, but I’ve pulled out his statement of the problem:

So far, all this echoes the “open standards” and “open formats” discussion from the days when we had to take proprietary software for granted. In those days, we spent enormous amounts of effort trying to make sure our software kept our data in well-documented formats that were supported by other programs, and choosing proprietary software that conformed to well-documented interfaces (POSIX, SQL, SMTP, whatever) rather than the proprietary software that worked best for our purposes.

Ultimately, it was a losing game, because of the inherent conflict of interest between software author and software user.

And the solution:

I think there is only one solution: build these services as decentralized free-software peer-to-peer applications, pieces of which run on the computers of each user. As long as there’s a single point of failure in the system somewhere outside your control, its owner is in a position to deny service to you; such systems are not trustworthy in the way that free software is.

This is what has excited me about decentralized systems since long before P2P filesharing.

Luis Villa also briefly mentioned P2P in relation to the services platforms of Amazon, eBay, Google, Microsoft and Yahoo!:

What is free software’s answer to that? Obviously the ’spend billions on centralized servers’ approach won’t work for us; we likely need something P2P and/or semantic-web based.

Wes Felter commented on the control of pointers to data:

I care not just about my data, but the names (URLs) by which my data is known. The only URLs that I control are those that live under a domain name that I control (for some loose value of control as defined by ICANN).

I hesitated to include this point because I hesitate to recommend that most people host services under a domain name they control. What is the half-life of http://blog.john.smith.name vs. http://johnsmith.blogspot.com or js@john.smith.name vs. johnsmith@gmail.com? Wouldn’t it suck to be John Smith if everything in his life pointed at john.smith.name and the domain was hijacked? I think Wes and I discussed exactly this outside CodeCon earlier this year. Certainly it is preferable for a service to allow hosting under one’s own domain (as Blogger and several others do), but I wish I felt a little more certain of the long-term survivability of my own [domain] names.

This post could be titled “freedom needs P2P” but for the heck of it I wanted to mirror “free culture needs free software.”

Constitutionally open services

Thursday, July 6th, 2006

Luis Villa provokes, in a good way:

Someone who I respect a lot told me at GUADEC ‘open source is doomed’. He believed that the small-ish apps we tend to do pretty well will migrate to the web, increasing the capital costs of delivering good software and giving next-gen proprietary companies like Google even greater advantages than current-gen proprietary companies like MS.

Furthermore:

Seeing so many of us using proprietary software for some of our most treasured possessions (our pictures, in flickr) has bugged me deeply this week.

These things have long bugged me, too.

I think Villa has even understated the advantage of web applications — no mention of security — and overstated the advantage of desktop applications, which amounts to low latency, high bandwidth data transfer — let’s see, video, including video editing, is the hottest thing on the web. Low quality video, but still. The two things client applications still excel at are very high bandwidth, very low latency data input and output, such as rendering web pages as pixels. :)

There are many things that can be done to make client development and deployment easier, more secure, more web-like, and client applications more collaboration-enabled. Fortunately they’ve all been tried before (e.g., , , , others of varying relevance), so there’s much to learn from, yet the field is wide open. Somehow it seems I’d be remiss to not mention , so there it is. Web applications on the client are also a possibility, though these typically only address ease of development and not manageability at all.

The ascendancy of web applications does not make the desktop unimportant any more than GUIs made filesystems unimportant. Another layer has been added to the stack, but I am still very happy to see any move of lower layers in the direction of freedom.

My ideal application would be available locally and over the network (usually that means on the web), but I’ll prefer the latter if I have to choose, and I can’t think of many applications that don’t require this choice (fortunately is one of them, or close enough).

So what can be done to make the web-application-dominated future open source in spirit, for lack of a better term?

First, web applications should be super easy to manage (install, upgrade, customize, secure, backup) so that running your own is a real option. Applications like and have made large strides, especially in the installation department, but still require a lot of work and knowledge to run effectively.

There are some applications that centralization makes tractable or at least easier and better, e.g., web scale search and social aggregation — which basically come down to high bandwidth, low latency data transfer. Various P2P technologies (much to learn from, field wide open) can help somewhat, but the pull of centralization is very strong.

In cases where one accepts a centralized web application, should one demand that the application be somehow constitutionally open? Some possible criteria:

  • All source code for the running service should be published under an open source license and developer source control available for public viewing.
  • All private data available for on-demand export in standard formats.
  • All collaboratively created data available under an open license (e.g., one from Creative Commons), again in standard formats.
  • In some cases, I am not sure how rare, the final mission of the organization running the service should be to provide the service rather than to make a financial profit, i.e., beholden to users and volunteers, not investors and employees. Maybe. Would I be less sanguine about the long term prospects of Wikipedia if it were for-profit? I don’t know of evidence for or against this feeling.

Consider all of this ignorant speculation. Yes, I’m just angling for more freedom lunches.

Apple for dummies

Thursday, June 15th, 2006

Apple’s penetration of the geek market over the last five years or so has bugged me … for that long. It has been far longer than that since I’ve read a comp.*.advocacy thread/flamewar, so stumbling upon Mark Pilgrim’s post on dumping Apple and its heated responses made me feel good and nostalgic.

Tim Bray (who does not b.s.) answers Time to Switch? affirmatively.

I hope this is the visible beginning of a trend and that in a few years most people who ought to know better will have replaced laptops sporting an annoying glowing corporate logo with ones sporting Ubuntu stickers.

May S-events

Tuesday, May 9th, 2006

This month’s Creative Commons Salon San Francisco is tomorrow and a short walk from my new abode.

Saturday is the Singularity Summit at Stanford. I’ve seen 12 of the 14 speakers previously but it could still be a fun event. Probably not as fun as the similar Hofstadter symposium six years ago.

Sunday I’m on a panel at the “Sustainable World Symposium & Festival” on “Leveraging the Internet–Maximizing Our Collective Power.” I’ll seek to entertain and educate, given the probable granola audience.

May 25 I hope to attend the Future Salon on The Sustainability of Material Progress with , who has a rather different (and correct) take on “sustainability” than I suspect the “Sustainable World” people above have. I haven’t attended a Future Salon in a year, maybe two. I hear they’re large events now.

Update 20060517: May 30 I’ll be speaking at a Netsquared Conference session on Turning Communications Technologies Into Tools For Free Speech And Free Culture.

Post May 10 CC Salon SF followup.

Google Brin Creator

Thursday, February 23rd, 2006

Now that Google has a product (Google Page Creator) named* for cofounder and current President of Products Larry Page, it clearly needs a technology named for cofounder and current President of Technology Sergey Brin.

“Brin” doesn’t have an obvious meaning so perhaps the technology could be something more compelling than . How about a Basic Reality Interface Neuroplant?

I’ll take two Google Brins for starters — one to replace each eye — better portals to see the portal, including its (soon to be) millions of crappy Google Pages.

* Not really.

Search 2006

Saturday, January 14th, 2006

I’m not going to make new predictions for search this year — it’s already underway, and my predictions for 2005 mostly did not come true. I predict that most of them will, in the fullness of time:

Metadata-enhanced search. Yahoo! and Google opened Creative Commons windows on their web indices. Interest in semantic markup (e.g., microformats) increased greatly, but search that really takes advantage of this is a future item. (NB I consider the services enabled by tagging more akin to browse than search, and as far as I know they don’t allow combining tag and keyword queries.)

Proliferation of niche web scale search engines. Other than a few blog search services, which are very important, I don’t know of anything that could be called “web scale” — and I don’t know if blog search could really be called niche. One place to watch is public search engines using Nutch. Mozdex is attempting to scale up, but I don’t know that they really have a niche, unless “using open source software” is one. Another place is Wikipedia’s list of internet search engines.

On the other hand, weblications (aka Web 2.0) did take off.

I said lots of desktop search innovation was a near certainty, but if so, it wasn’t very visible. I predicted slow progress on making multimedia work with the web, and I guess there was very slow progress. If there was forward progress on usable security it was slow indeed. Open source did slog toward world domination (e.g., Firefox is the exciting platform for web development, but barely made a dent in Internet Explorer’s market share) with Apple’s success perhaps being a speed bump. Most things did get cheaper and more efficient, with the visible focus of the semiconductor industry swinging strongly in that direction (they knew about it before 2005).

Last year I riffed on John Battelle’s predictions. He has a new round for 2006, one of which was worth noting at Creative Commons.

Speaking of predictions, of course Google began using prediction markets internally. Yahoo!’s Tech Buzz Game has some markets relevant to search, but I don’t know how to interpret the game’s prices.

CodeCon 2006 Program

Thursday, January 12th, 2006

The CodeCon 2006 program has been announced and it looks fantastic. I highly recommend attending if you’re near San Francisco Feb 10-12 and are any sort of computer geek. There’s an unofficial CodeCon wiki.

My impressions of last year’s CodeCon: Friday, Saturday, and Sunday.

Via Wes Felter

Machine learning patterns

Sunday, November 27th, 2005

I first heard of the Silicon Valley Patterns meetings from Alex Chafee a few years ago while participating in his “bootstrap” practice group. SVP sounded like fun, but I only got around to attending a meeting this spring, a one-off on digital identity led by Johannes Ernst (notes). I was going to write something about that meeting, but just can’t get worked up about digital identity.

SVP’s next extended track was on machine learning, a topic I have some interest in and very cursory knowledge of from reading popular books on AI. The track lasted from May through October. Mostly our study was guided by Andrew Moore’s statistical data mining tutorials, with occasional reference to Russell & Norvig.

I don’t think any of the regular attendees were machine learning experts, but with occasional contributions from everyone, I think everyone was able to increase their knowledge of the material. Overall a gratifying method of learning, though not a perfect substitute for lecture.

My secondary takeaway from the track was that I need a serious brush-up on calculus and statistics, neither of which I’ve studied, and have barely used, in fifteen years. I’m working on that.

The current SVP track should be very different: hands-on Ruby on Rails practice. I’m attempting to justify putting in the time…

WUXGA LCD stretch

Monday, November 21st, 2005

I’ve been needing a notebook refresh for a while and was planning to get an HP dv1000 (1280×768 display, ~5.2 pounds, under $1000, good Linux compatibility, and Nathan seemed to like his similar model).

Then I realized that I could get a laptop with a 1920×1200 (WUXGA) display. I had to have one. I missed the 1600×1200 21″ CRT I used for years, and there’s reasonable-sounding research that more screen is an easy productivity boost.

I bought a Dell Inspiron 6000 (my first choice was a Dell Latitude D810, for its , but I couldn’t justify a several hundred dollar premium for an otherwise similarly equipped machine).

A number of people told me that 1920×1200 on a 15-inch widescreen would be impossible to read. Not true at all. Some people also told me that a nearly 7-pound laptop would be a major drag. So far it hasn’t been. Apart from a tiny Inspiron 2100 I used temporarily for several months, this one is about the weight I’m accustomed to (and I walk or bicycle 5 to 15 miles on days I don’t telecommute–I vastly prefer this to “working out”).

I think the large monitor productivity study is right. I feel more productive than I have since giving up my desktop and 21″ CRT. If you spend most of the day doing “knowledge work” in front of a computer, especially programming, get yourself a super high resolution display pronto.

I encountered a couple of oddities regarding the WUXGA display after installing Ubuntu Linux on the new machine.

First, Ubuntu’s installer correctly detected the 1920×1200 display and Intel 915 (GMA900) graphics. The generated /etc/X11/xorg.conf only had modelines for 1920×1200. However, the driver was unaware of the 915’s support for 1920×1200, so it ran at 1600×1200. I’m surprised it ran at all, given that xorg.conf contained no configuration for that resolution.

The other odd thing is that the entire screen was used to display 1600×1200 pixels–everything was stretched horizontally by 20 percent. I would’ve strongly expected 1600×1200 running on a 1920×1200 LCD screen to not use the screen’s full width–320 horizontal pixels should’ve been unused. Every description of LCD screens that I’ve (very casually) read says something about each (discrete) pixel being controlled by an individual transistor. There’s no tweaking display size or orienting the display with an LCD like there is with a CRT. My uneducated guess is that X was using or some similar method to stretch 1600 virtual pixels onto 1920 real pixels. [Update 20051122: As Brian suggests in a comment below, the stretching is done by hardware and controlled by BIOS settings–“LCD Panel Expansion” on the Inspiron 6000, enabled by default.]

The problem was fixed by running 915resolution at boot, following this example (see the consolidated sketch after the list):

  • Download 915resolution
  • make install (or just copy the binary provided)
  • Create /etc/init.d/rc.local with a single line:

    /usr/sbin/915resolution 49 1920 1200

  • sudo chmod +x /etc/init.d/rc.local
  • sudo update-rc.d rc.local start 80 S .
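
For reference, here is the whole sequence consolidated into one sketch. Treat it as illustrative rather than canonical: the #!/bin/sh line and the use of sudo are my additions (the post describes rc.local as a single-line file), while the 915resolution invocation and the update-rc.d arguments are exactly those listed above.

    # Build and install 915resolution from its source directory
    # (or just copy the provided binary to /usr/sbin)
    make install

    # Create the boot script; the shell shebang is an addition here, the
    # original rc.local contained only the single 915resolution line
    sudo sh -c 'printf "#!/bin/sh\n/usr/sbin/915resolution 49 1920 1200\n" > /etc/init.d/rc.local'

    # Make it executable and register it to run during early boot
    # (runlevel S, priority 80); the trailing "." ends the runlevel list
    sudo chmod +x /etc/init.d/rc.local
    sudo update-rc.d rc.local start 80 S .

The reason this has to happen at every boot, before X starts, is that 915resolution patches the video BIOS mode table in memory; once X is running it has already queried the available modes.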

After rebooting X ran beautifully at 1920×1200.

Ubuntu Linux

Saturday, September 10th, 2005

5.10 is due shortly, so some very uncareful observations on 5.04 (version numbers are date-based, releases come every six months) before they become super stale:

Network installation from Windows was almost trivial, though InstallUbuntu.exe would be welcome. The only non-trivial part was partition resizing. I’m completely comfortable setting up partitions (e.g., with fdisk), but based only on installer feedback, I was not certain it would attempt to resize a Windows partition, so I backed out and resized before installing.
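
For what it’s worth, the kind of pre-install resize I mean looks roughly like the sketch below. It is only a sketch: the device name /dev/hda1 and the 20G target are illustrative assumptions, it presumes an NTFS Windows partition and the ntfsresize tool, and you would want a verified backup and a cleanly shut down, defragmented Windows first.

    # Illustrative only: device names and sizes are invented for the example.
    sudo fdisk -l /dev/hda                              # inspect the current layout
    sudo ntfsresize --info /dev/hda1                    # how small can the filesystem go?
    sudo ntfsresize --no-action --size 20G /dev/hda1    # dry run
    sudo ntfsresize --size 20G /dev/hda1                # actually shrink the filesystem
    # Then, in fdisk, delete and recreate /dev/hda1 with the same starting
    # sector and the new smaller size so the partition matches the shrunken
    # filesystem; this is the step where a mistake destroys data.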

I was very happy to find that the display, sound, ethernet, wifi, and hibernate (suspend-to-disk) all worked with no manual configuration, a minor miracle based on past experience. However, this is on a three year old computer (Dell Inspiron 2100). (Sleep/suspend-to-memory didn’t work under Windows 2000 and I haven’t tried fixing it under either OS.)

The most annoying thing about Ubuntu Linux is having to semi-manually install proprietary code for Flash and various media codecs. However, ubuntuguide.org provides exact steps (usually only a few) for installing any of these. Overall I consider this an improvement over the multimedia situation on Windows, where Windows Media Player gives uninformative messages about missing codecs and one is often reduced to downloading codec installers from completely untrusted websites. (The most annoying thing about installing an OS, including Windows, is usually getting all of the hardware recognized and working, so I’m happy that proprietary codecs were the biggest annoyance, but here’s to open formats anyway.)
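
To give a sense of how few steps are involved, here is a sketch of the sort of commands ubuntuguide.org walks you through. The package names are from memory of that era and should be treated as assumptions (check the guide for the exact ones), and the universe and multiverse repositories have to be enabled in /etc/apt/sources.list first.

    # Illustrative package names for Ubuntu 5.04; see ubuntuguide.org for
    # the exact current steps. Requires universe/multiverse to be enabled.
    sudo apt-get update
    sudo apt-get install flashplugin-nonfree                       # proprietary Flash plugin
    sudo apt-get install gstreamer0.8-plugins gstreamer0.8-ffmpeg  # additional media codecs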

The only other real annoyance is that I don’t like the Evolution 2.2.1.1 mail/groupware client as much as I hoped (I used it as my primary mail client around 2001-3 and missed it), perhaps because I didn’t use it with IMAP previously. Evolution has no mechanism for switching to offline mode immediately, and occasionally can take many minutes to go offline. Furthermore, Evolution often gets confused when going back online, (perhaps) particularly after awakening from hibernation or switching networks, requiring closing the program, which can take several minutes in its confused state, and relaunching it. Thunderbird allows one to go offline without syncing folders and never gets confused when going back online. I may switch back to Thunderbird, though I’d miss Evolution’s vFolders and calendar support.

I’m really looking forward to Ubuntu Linux 5.10, though the real test will be installation on a newer laptop.