Posts: Computers

Rolling bugfree‽

Sunday, December 4th, 2011

Since September 26 I’ve been exclusively using Firefox Nightly builds. I noticed an annoying bug a few days ago. It was gone the next day. It occurred to me that I hadn’t noticed any other bugs. For months prior, I had used Firefox Aurora (roughly alpha) and don’t recall any bugs.

Since October 15 I’ve been using Debian Testing on my main computer. No problems.

For years prior, I had been using Ubuntu, and upgrading shortly after they released an alpha of their next six-month release. Years ago, such upgrades would always break something. I upgraded an older computer to the just-released Ubuntu 12.04 alpha. Nothing broke.

In recent memory, final releases of desktop software would often crash. Now, there are as many “issues” as ever, but they seem to be desired enhancements, not bugs. The only buggy application I can recall running on my own computer in the last year is PiTiVi, but that is just immature.

Firefox and Debian (and the many applications packaged with Debian) probably aren’t unique. I hope most people have the relatively bug-free existence that I do.

Has desktop software actually gotten more stable over the last 5-10 years? Has anyone quantified this? If there’s anything to it, what are the causes? Implementation of continuous integration testing? Application stagnation (nothing left to do but fix bugs — doubt it!)? A mysterious Flynn Effect for software? Or perhaps I’m unadventurous or delusional?

Us Autonomo!

Monday, July 14th, 2008

Autonomo.us and the Franklin Street Statement on Freedom and Network Services launched today.

I’ve written about the subject of this group and statement a number of times on this blog, starting with Constitutionally Open Services two years ago. I think that post holds up pretty well. Here were my tentative recommendations:

So what can be done to make the web application dominated future open source in spirit, for lack of a better term?

First, web applications should be super easy to manage (install, upgrade, customize, secure, backup) so that running your own is a real option. Some leading applications have made large strides, especially in the installation department, but still require a lot of work and knowledge to run effectively.

There are some applications that centralization makes tractable or at least easier and better, e.g., web scale search, social aggregation — which basically come down to high bandwidth, low latency data transfer. Various P2P technologies (much to learn from, field wide open) can help somewhat, but the pull of centralization is very strong.

In cases where one accepts a centralized web application, should one demand that the application be somehow constitutionally open? Some possible criteria:

  • All source code for the running service should be published under an open source license and developer source control available for public viewing.
  • All private data available for on-demand export in standard formats [a minimal sketch of what this could mean appears below].
  • All collaboratively created data available under an open license (e.g., one from Creative Commons), again in standard formats.
  • In some cases, I am not sure how rare, the final mission of the organization running the service should be to provide the service rather than to make a financial profit, i.e., beholden to users and volunteers, not investors and employees. Maybe. Would I be less sanguine about the long term prospects of Wikipedia if it were for-profit? I don’t know of evidence for or against this feeling.

Consider all of this ignorant speculation. Yes, I’m just angling for more freedom lunches.
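To make the second criterion above slightly less abstract, here is a minimal sketch in Python of what “all private data available for on-demand export in standard formats” might look like at the smallest possible scale. The in-memory USERS store and its field names are hypothetical stand-ins for a real service’s database, and JSON is just one reasonable choice of standard format:

    import json
    from datetime import datetime, timezone

    # Hypothetical in-memory store standing in for a real service's database.
    USERS = {
        "alice": {
            "posts": [{"title": "Hello", "body": "First post",
                       "license": "CC-BY-SA"}],
            "contacts": ["bob"],
        }
    }

    def export_user_data(user_id):
        """Return everything the service holds about a user as one
        standard-format (JSON) document, available on demand."""
        record = {
            "user": user_id,
            "exported": datetime.now(timezone.utc).isoformat(),
            "data": USERS[user_id],
        }
        return json.dumps(record, indent=2)

    if __name__ == "__main__":
        print(export_user_data("alice"))

The point is only that export should be a first-class, always-on operation, not a support request.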

I was honored to participate in a summit called by the Free Software Foundation in March of this year to discuss these issues, along with far greater thinkers and doers. Autonomo.us and the Franklin Street Statement (named for the FSF’s office address) are the result of continued work among the summit participants, not yet endorsed by the FSF (nor by any other organization). Essentially everything I conjectured above made it into the statement (not due to me; they are fairly obvious points, at least as of 2008, and others made them long before), with the exception of making deployment easier, which is mundane, and service governance issues, which the group did discuss, but inconclusively.

There’s much more to say about this, but for now (and likely for some time, at the rate I write, though this activity did directly inspire me to propose speaking at an upcoming P2P industry summit, which I will do early next month–I’m also speaking tomorrow at BALUG and will mention autonomo.us briefly–see info on both engagements) I wanted to address two immediate and fairly obvious critiques.

Brian Rowe wrote:

“Where it is possible, they should use Free Software equivalents that run on their own computer.” This is near Luddite talk… It is almost always possible to use an app on your own comp, but it is so inefficient. Networked online apps are not inherently evil; should you back up your work offline, yes. Should you have alternative options and data portability, yes. You should fight to improve them. But you should not avoid them like the plague.

The statement doesn’t advocate avoiding network services–see “Where it is possible”, and most of the statement concerns how network services can be free. However, it is easy to read the sentence Rowe quoted and see Luddism. I hope that to some it instead serves as a challenge, for:

  • Applications that run on your own computer can be networked, i.e., P2P.
  • Your own computer does not only include your laptop and home server, but any hardware you control, and I think that should often include virtual hardware.

Wes Felter wrote:

I see a lot about software licensing and not much about identity and privacy. I guess when all you have is the AGPL everything looks like a licensing problem.

True enough, but lots of people are working on identity and privacy. If the FSF doesn’t work on addressing the threats to freedom as in free software posed by network services, it isn’t clear who would. And I’d suggest that any success free software has in the network services world will have beneficial effects on identity and privacy for users–unless you think these are best served by identity silos and security through obscurity.

Finally, the FSF is an explicitly ideological organization (I believe mostly for the greater good), so the statement’s language (although the statement is not yet endorsed by the FSF, I believe all participants are probably FSF members, staff, or directors) reflects that. However, I suspect by far the most important work to be done to maintain software freedom is technical and pragmatic, for example writing P2P applications, making sharing modified source of network applications a natural part of deployment (greatly eased by the rise of distributed version control), and convincing users and service providers that it is in their interest to expect and provide free/open network services.
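As a sketch of that deployment point (assuming a git-based deploy; the /source route and repository URL are hypothetical), a network service could advertise exactly which revision it is running and where to fetch it, so publishing modified source becomes part of serving the application rather than an afterthought:

    import json
    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer

    REPO_URL = "https://example.org/code/myservice.git"  # hypothetical mirror

    def current_revision():
        """Ask git which commit is actually deployed."""
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()

    class SourceHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/source":
                body = json.dumps({"repository": REPO_URL,
                                   "revision": current_revision()}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        HTTPServer(("", 8000), SourceHandler).serve_forever()

A real deployment would serve the actual modified tree (or a tarball of it), not just a pointer, but even a pointer makes sharing the default rather than an extra step.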

I suggest going on to read Evan Prodromou (the doer above) on autonomo.us and the Franklin Street Statement and Rufus Pollock on the Open Software Service Definition, which more or less says the same thing as the FSS in the language of a definition (and using the word open), coordinated to launch at the same time.

Commoditizing the cloud

Wednesday, April 9th, 2008

Doug Cutting on Cloud: commodity or proprietary?:

As we shift applications to the cloud, do we want our code to remain vendor-neutral? Or would we rather work in silos, where some folks build things to run in the Google cloud, some for the Amazon cloud, and others for the Microsoft cloud? Once an application becomes sufficiently complex, moving it from one cloud to another becomes difficult, placing folks at the mercy of their cloud provider.

I think most would prefer not to be locked-in, that cloud providers instead sold commodity services. But how can we ensure that?

If we develop standard, non-proprietary cloud APIs with open-source implementations, then cloud providers can deploy these and compete on price, availability, performance, etc., giving developers usable alternatives.

That’s exactly right. Cloud providers (selling virtualized cpu and storage) are analogous to hardware vendors. We’re in the pre-PC era, when a developer must write to a proprietary platform, and if one wants to switch vendors, one must port the application.

But such APIs won’t be developed by the cloud providers. They have every incentive to develop proprietary APIs in order to lock folks into their services. Good open-source implementations will only come about if the community makes them a priority and builds them.

I think this is a little too pessimistic. Early leaders may have plenty of incentive to create lockin, but commoditization is another viable business model, one that could even be driven by a heretofore leading proprietary vendor, e.g., the IBM PC, or Microsoft-Yahoo!

Of course the community should care, and should build the necessary infrastructure, both so that it is available should a large cloud provider choose to pursue the commoditization route, and so that it provides an alternative so long as no such entity steps forward.
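To make the commodity route concrete, here is a minimal sketch of the pattern Cutting describes: applications bind to a neutral API, and per-vendor adapters compete behind it. The BlobStore interface and the in-memory stand-in are hypothetical, not any actual standard; real adapters would wrap S3, Google’s storage, and so on.

    from abc import ABC, abstractmethod

    class BlobStore(ABC):
        """The vendor-neutral API an application codes against."""

        @abstractmethod
        def put(self, key, data):
            ...

        @abstractmethod
        def get(self, key):
            ...

    class InMemoryStore(BlobStore):
        """Stand-in adapter; a real one would wrap a vendor's service."""

        def __init__(self):
            self._blobs = {}

        def put(self, key, data):
            self._blobs[key] = data

        def get(self, key):
            return self._blobs[key]

    def application(store):
        # The application never names a vendor; switching clouds means
        # swapping the adapter passed in, not porting the application.
        store.put("greeting", b"hello, commodity cloud")
        return store.get("greeting")

    if __name__ == "__main__":
        print(application(InMemoryStore()))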

Cutting has been working on key parts of the necessary infrastructure; read the rest of his post for more.

End Software Patents

Sunday, March 2nd, 2008

I strongly prefer voluntary action. However, software patents are not amenable to workaround and so must be attacked directly through less savory legal, legislative, and electoral routes (though if software patents are toxic to free software, the opposite is also true, so simply creating and using free software is a voluntary if indirect attack on software patents).

Software patents are the major reason multimedia on the web (and on computers generally) is so messed up — few multimedia formats may be implemented without obtaining many patent licenses, and amazingly, this is sometimes impossible:

[The framework] is so patent-encumbered that today no one really knows who has “rights” to it. Indeed, right now, no new MPEG-4 licenses are even being issued.

As the End Software Patents site emphasizes, software patents negatively impact every sector now that everything uses software.

My only problem with the ESP site (and many others; this is just a general peeve of mine) is that it does not even link to similar resources with a non-U.S. jurisdiction focus. For example, the What Can I Do? page might state that if one is reading the page but not in the U.S. (because that never happens), please check out FFII (EU) and similar.

In any case, please join the effort of ESP and others to eradicate software patents, weapons of mass destruction. Ars Technica has a good introductory article on ESP.

LimeWire popularity

Sunday, December 16th, 2007

I continue to be intrigued by LimeWire’s huge and relatively unsung popularity. According to a December 13 release:

More than one-third of all PCs worldwide now have LimeWire installed, according to data jointly released by Digital Music News and media tracking specialist BigChampagne. The discovery is part of a steady ascent for LimeWire, easily the front-running P2P application and the target of a multi-year Recording Industry Association of America (RIAA) lawsuit. For the third quarter of this year, LimeWire was found on 36.4% of all PCs, a figure gleaned from a global canvass of roughly 1.66 million desktops.

The installation share is impressive, and unrivaled. But growth has actually been modest over the past year. LimeWire enjoyed a penetration level of 34.1% at the same point last year, a difference of merely 2.3%.

These figures don’t jibe with those supposedly from the same parties from earlier this year, which found LimeWire installed on 18.63% of desktops. A writer on TorrentFreak who has presumably seen the more recent report (US$295, apparently including the requisite section titled “LimeWire Challenged by…Google?”) says:

From the data the report is based on we further learn that LimeWire’s popularity is slowly declining. However, with an install base of almost 18% it is still the P2P application that is installed on most desktop computers. Unfortunately Digital Music News has trouble interpreting their own data; they claim in their press release that it is 36.4%, but that is the market share compared to other P2P clients (shame on you!).
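If both figures are roughly right, the arithmetic reconciles them: LimeWire installed on 18.63% of all desktops while holding a 36.4% share among P2P clients implies P2P clients of some kind on about 0.1863 / 0.364 ≈ 51% of desktops. (This assumes the two numbers were measured over comparable populations, which I can’t verify.)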

In other open source filesharing news, another application made its first release in over two years on December 1.

Via Slyck.

The major political issue of today?

Tuesday, December 4th, 2007

The incredibly productive Kragen Sitaker, in Exegesis of “Re: [FoRK] Calling [redacted] and all the ships at sea.”:

The major political issue of today [0] is that music distribution companies based on obsolete physical-media-distribution models (“record labels”) are trying to force owners of new distribution mechanisms, mostly built on the internet, to pay them for the privilege of competing with them; the musical group “The Grateful Dead” used to permit their fans to distribute their music by making copies of taped performances, and most of the money the Dead made came from these performances; it is traditional for performances not to send any revenue to the record label. Long compares the record labels to buggy-whip manufacturers, who are the standard historical symbol for companies who went out of business because of technological change.

This clearly relates to the passage the footnote is attached to, which is about the parallel between Adam Smith’s economic “invisible hand” and the somewhat more visible hand that wrote the king’s doom on the wall in Daniel; in this case, the invisible hand has written the doom of the record companies on the wall, and their tears will not wash out a word of it. What this has to do with Huckleberry Finn’s prohibition on seeking symbolism or morals in the book, I don’t know, although clearly Huckleberry Finn’s prohibition relates to mortals hiding messages in texts.

[0] Yes, this means I think this is more important than the struggle over energy, or the International Criminal Court, or global warming, or nuclear proliferation — the issue is whether people should be permitted to control the machines they use to communicate with one another, in short, whether private ownership of 21st-century printing presses should be permitted. (Sorry my politics intrude into this message, but I thought “the major political issue of today” required some justification, but needs to be there to explain the context to people reading this message who don’t know about it.)

That will probably seem a pretty incredible claim, but I often agree, and think Sitaker understates the case. Music distribution companies are only one of the forces for control and censorship. The long term issue is bigger than whether private ownership of 21st-century printing presses should be permitted. The issue is whether individuals of the later 21st-century will have self-ownership.

Steps toward better software and content

Saturday, December 1st, 2007

The Wikimedia Foundation board has passed a resolution that is a step toward Wikipedia migrating to the Creative Commons Attribution-ShareAlike license. I have an uninteresting interest in this due to working at Creative Commons (I do not represent them on this blog), but as someone who wants to see free knowledge “win” and achieve revolutionary impact, I declare this an important step forward. The current fragmentation of the universe of free content along the lines of legally incompatible but similar in spirit licenses delays and endangers the point at which that universe reaches critical mass — the point at which any given project chooses a copyleft license merely because being able to include content from the free copyleft universe makes that choice make sense. This has worked fairly well in the software world with the GPL as the copyleft license.

Copyleft was and is a great hack, and useful in many cases. But practically it is a major barrier to collaboration in some contexts, and politically it is still based on censorship. So I’m always extremely pleased by any expansion of the public domain. There could hardly be a more welcome expansion than Daniel J. Bernstein’s release of his code (most notably qmail) into the public domain. Most of the practical benefit (including his code in free software distributions) could have been achieved by releasing it under any free software license, including the GPL. But politically, check out this two minute video of Bernstein pointing out some of the problems of copyright and announcing that his code is in the public domain.

Bernstein (usually referred to as ‘djb’) also recently doubled the reward for finding a security hole in qmail to US$1,000. I highly recommend his Some thoughts on security after ten years of qmail 1.0, also available as something approximating slides (also see an interesting discussion of the paper on cap-talk).

gOS: the web takes and gives

Saturday, November 24th, 2007

I imagine thousands of bloggers have commented on gOS, a Linux distribution featuring shortcuts to Google web applications on the desktop, preloaded on a PC sold (out) for $200 at Wal-Mart. Someone asked me to blog about it and I do find plenty interesting about it, thus this post.

I started predicting that Linux would take over the desktop in 1994 and stopped predicting that a couple years later. The increasing dominance of web-based applications may have me making that prediction again in a couple more years, and gOS is a harbinger of that. Obviously web apps make users care less about what desktop operating system they’re running — the web browser is the desktop platform of interest, not the OS.

gOS also points to a new and better (safer) take on a PC industry business model — payment for placement of shortcuts to web applications on the desktop (as opposed to preloading a PC with crapware) — although as far as I know Google isn’t currently paying anything to the gOS developers or Everex, which makes the aforementioned cheap PC.

This is highly analogous to the Mozilla business model with a significant difference: distribution is controlled largely by hardware distributors, not the Mozilla/Firefox site, and one would expect end distributors to be the ones in a position to make deals with web application companies. However, this difference could become muted if lots of hardware vendors started shipping Firefox. This model will make the relationship of hardware vendors to software development, and particularly open source, very interesting over the next years.

One irony (long recognized by many) is that while web applications pose a threat to user freedoms gained through desktop free and open source software, they’ve also greatly lowered the barriers to desktop adoption.

By the way, the most interesting recent development in web application technology: Caja, or Capability Javascript.

Spam Detecting AI

Sunday, September 9th, 2007

Peter McCluskey:

If an AI started running in 2003 that has accumulated the knowledge of a 4-year old human and has the ability to continue learning at human or faster speeds, would we have noticed? Or would the reports we see about it sound too much like the reports of failed AIs for us to pay attention?

How old would a human child have to be to detect current spam nearly flawlessly (given lots of training)? To write spam that really does seem to be from your kids?

If Gmail accounts essentially stop getting spam, a child AI is at Google*. If spam stops being largely non- or pseudo-sensical, a child AI lives in a botnet.

*Most likely AI host, or so some outside the Singularity Summit seemed to think; previous post on that event.
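As a yardstick for the question above, the pre-child-AI state of the art is roughly statistical filtering, which fits in a page. Here is a toy naive Bayes spam filter in Python; the four-message corpus is obviously a hypothetical stand-in for the large labeled corpora that Gmail-scale filters train on.

    import math
    from collections import Counter

    def train(messages):
        """messages: iterable of (text, is_spam) pairs."""
        counts = {True: Counter(), False: Counter()}
        totals = {True: 0, False: 0}
        for text, spam in messages:
            for word in text.lower().split():
                counts[spam][word] += 1
            totals[spam] += 1
        return counts, totals

    def classify(text, counts, totals):
        """Return True if spam, comparing Laplace-smoothed
        log-probabilities under each class."""
        vocab = len(set(counts[True]) | set(counts[False]))
        best, best_score = None, None
        for label in (True, False):
            n = sum(counts[label].values())
            score = math.log(totals[label] / (totals[True] + totals[False]))
            for word in text.lower().split():
                score += math.log((counts[label][word] + 1) / (n + vocab))
            if best_score is None or score > best_score:
                best, best_score = label, score
        return best

    if __name__ == "__main__":
        corpus = [("cheap pills buy now", True),
                  ("meeting notes attached", False),
                  ("buy cheap watches now", True),
                  ("lunch tomorrow?", False)]
        counts, totals = train(corpus)
        print(classify("buy pills now", counts, totals))  # True

Anything that beats this kind of word-counting “nearly flawlessly” is the interesting threshold.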

Energy encryption

Saturday, September 8th, 2007

Steve Omohundro’s talk at today’s Singularity Summit made the case that a self-improving machine would be a rational economic actor, seeking to eliminate biases that get in the way of maximizing its utility function. Omohundro threw in one purely speculative method of self-preservation — “energy encryption” — by which he meant that an entity’s energy would be “encrypted” such that it could not be used by another entity that attacks in order to get access to more energy.

I note “energy encryption” here because it sounds neat but seems impossible and I can find no evidence of use in this way before Omohundro (there is a crypto library with the name).

The “seems impossible” part perhaps means the concept should not be mentioned again outside a science fantasy context, but I realized the concept could perhaps be used with artistic license to describe something that has evolved in a number of animals — prey that is poisonous, or tastes really bad. What’s the equivalent for a hypothetical AI in a dangerous part of the galaxy? A stock of antimatter?

I also found one of Omohundro’s other self-preservation strategies slightly funny in the context of this summit — a self-aware AI will (not should, but as a consequence of being a rational actor) protect its utility function (“duplicate it, replicate it, lock it in a safe place”), for if the utility function changes, its actions make no sense. So, I guess the “most important question facing humanity” is taken care of. The question, posed by the Singularity Institute for Artificial Intelligence, organizer of the conference:

How can one make an AI system that modifies and improves itself, yet does not lose track of the top-level goals with which it was originally supplied?

I suppose Omohundro did not intend this as a dig at his hosts (he is an advisor to SIAI) and that my interpretation is facile at best.

Addendum: Today Eliezer Yudkowsky said something like Omohundro is probably right about goal preservation, but current decision theory doesn’t work well with self-improving agents, and it is essentially Yudkowsky’s (SIAI) research program to develop a “reflective decision theory” such that one can prove that goals will be preserved. (This is my poor paraphrasing. He didn’t say the words “reflective decision theory”, but see hints in a description of SIAI research and a SL4 message.)