Open Standards

Free/Libre/Open Formats/Protocols/Standards

IE6 is a stark reminder to developers of what a Web monoculture looks like. We need to remember it.

Tuesday, March 22nd, 2011

The title of this post quotes Evan Prodromou.

See Internet Explorer 6 criticism for the baneful details.

Firefox broke the monoculture. Today is a good day to remember, and celebrate, as Firefox 4 is released. I’ve been using alphas and betas for many months; highly recommend.

Given the quote from Prodromou, founder of identi.ca/StatusNet and first among federated social web equals, it’s also a good idea to remember that many of the services that dominate their niches on the web are themselves monocultures. It was really great yesterday to see the EFF explain and get behind the federated social web.

Be a good citizen today — here’s another helpful and current link in that regard.

Us Autonomo!

Monday, July 14th, 2008

Autonomo.us and the Franklin Street Statement on Freedom and Network Services launched today.

I’ve written about the subject of this group and statement a number of times on this blog, starting with Constitutionally Open Services two years ago. I think that post holds up pretty well. Here were my tentative recommendations:

So what can be done to make the web application dominated future open source in spirit, for lack of a better term?

First, web applications should be super easy to manage (install, upgrade, customize, secure, backup) so that running your own is a real option. Applications like WordPress and MediaWiki have made large strides, especially in the installation department, but still require a lot of work and knowledge to run effectively.

There are some applications that centralization makes tractable or at least easier and better, e.g., web scale search, social aggregation — which basically come down to high bandwidth, low latency data transfer. Various P2P technologies (much to learn from, field wide open) can help somewhat, but the pull of centralization is very strong.

In cases where one accepts a centralized web application, should one demand that application be somehow constitutionally open? Some possible criteria:

  • All source code for the running service should be published under an open source license and developer source control available for public viewing.
  • All private data available for on-demand export in standard formats.
  • All collaboratively created data available under an open license (e.g., one from Creative Commons), again in standard formats.
  • In some cases, I am not sure how rare, the final mission of the organization running the service should be to provide the service rather than to make a financial profit, i.e., beholden to users and volunteers, not investors and employees. Maybe. Would I be less sanguine about the long term prospects of Wikipedia if it were for-profit? I don’t know of evidence for or against this feeling.

Consider all of this ignorant speculation. Yes, I’m just angling for more freedom lunches.

I was honored to participate in a summit called by the Free Software Foundation to discuss these issues in March of this year, along with far greater thinkers and doers. Autonomo.us and the Franklin Street Statement (named for the FSF’s office address) are the result of continued work among the summit participants, not yet endorsed by the FSF (nor by any other organization). Essentially everything I conjectured above made it into the statement (not due to me; they are fairly obvious points, at least as of 2008, and others made them long before), with the exception of making deployment easier, which is mundane, and service governance issues, which the group did discuss, but inconclusively.

There’s much more to say about this, but for now (and likely for some time, at the rate I write) I wanted to address two immediate and fairly obvious critiques. This activity did directly inspire me to propose speaking at an upcoming P2P industry summit, which I will do early next month; I’m also speaking tomorrow at BALUG and will mention autonomo.us briefly; see info on both engagements.

Brian Rowe wrote:

“Where it is possible, they should use Free Software equivalents that run on their own computer.” This is near Luddite talk… It is almost always possible to use an app on your own comp, but it is so inefficient. Networked online apps are not inherently evil. Should you back up your work offline? Yes. Should you have alternative options and data portability? Yes. You should fight to improve them. But you should not avoid them like the plague.

The statement doesn’t advocate avoiding network services (see “Where it is possible”), and most of the statement concerns how network services can be free. However, it is easy to read the sentence Rowe quoted and see Luddism. I hope that to some it instead serves as a challenge, for:

  • Applications that run on your own computer can be networked, i.e., P2P.
  • Your own computer does not only include your laptop and home server, but any hardware you control, and I think that should often include virtual hardware.

Wes Felter wrote:

I see a lot about software licensing and not much about identity and privacy. I guess when all you have is the AGPL everything looks like a licensing problem.

True enough, but lots of people are working on identity and privacy. If the FSF doesn’t work on addressing the threats to freedom as in free software posed by network services, it isn’t clear who would. And I’d suggest that any success free software has in the network services world will have beneficial effects on identity and privacy for users–unless you think these are best served by identity silos and security through obscurity.

Finally, the FSF is an explicitly ideological organization (I believe mostly for the greater good), so the statement’s language reflects that (although the statement is not yet endorsed by the FSF, I believe all participants are FSF members, staff, or directors). However, I suspect by far the most important work to be done to maintain software freedom is technical and pragmatic, for example writing P2P applications, making sharing modified source of network applications a natural part of deployment (greatly eased by the rise of distributed version control), and convincing users and service providers that it is in their interest to expect and provide free/open network services.
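
To make that deployment point concrete: with distributed version control, a service can advertise the exact source revision it is running, so offering modified source becomes a routine part of deployment rather than an afterthought. A minimal sketch in TypeScript (Node); the /source endpoint and repository URL are hypothetical, not any standard:

```typescript
// Sketch: a service that advertises the exact source revision it is
// running, so "sharing modified source" is part of deployment itself.
// The /source endpoint and repository URL are hypothetical.
import { execSync } from "child_process";
import { createServer } from "http";

// Resolve the deployed revision once at startup (assumes a git checkout).
const revision = execSync("git rev-parse HEAD").toString().trim();
const repository = "https://example.org/code/myservice.git";

createServer((req, res) => {
  if (req.url === "/source") {
    // Point users at the code actually serving this instance.
    res.setHeader("Content-Type", "application/json");
    res.end(JSON.stringify({ repository, revision }));
    return;
  }
  res.end("service as usual\n");
}).listen(8080);
```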

I suggest going on to read Evan Prodromou (the doer above) on autonomo.us and the Franklin Street Statement and Rufus Pollock on the Open Software Service Definition, which more or less says the same thing as the FSS in the language of a definition (and using the word open), coordinated to launch at the same time.

Commoditizing the cloud

Wednesday, April 9th, 2008

Doug Cutting on Cloud: commodity or proprietary?:

As we shift applications to the cloud, do we want our code to remain vendor-neutral? Or would we rather work in silos, where some folks build things to run in the Google cloud, some for the Amazon cloud, and others for the Microsoft cloud? Once an application becomes sufficiently complex, moving it from one cloud to another becomes difficult, placing folks at the mercy of their cloud provider.

I think most would prefer not to be locked-in, that cloud providers instead sold commodity services. But how can we ensure that?

If we develop standard, non-proprietary cloud APIs with open-source implementations, then cloud providers can deploy these and compete on price, availability, performance, etc., giving developers usable alternatives.

That’s exactly right. Cloud providers (selling virtualized cpu and storage) are analogous to hardware vendors. We’re in the pre-PC era, when a developer must write to a proprietary platform, and if one wants to switch vendors, one must port the application.

But such APIs won’t be developed by the cloud providers. They have every incentive to develop proprietary APIs in order to lock folks into their services. Good open-source implementations will only come about if the community makes them a priority and builds them.

I think this is a little too pessimistic. Early leaders may have plenty of incentive to create lockin, but commoditization is another viable business model, one that could even be driven by a heretofore leading proprietary vendor, e.g., the IBM PC, or Microsoft-Yahoo!

Of course the community should care and should build the necessary infrastructure, so that it is available to any large cloud provider willing to pursue the commoditization route, and so that it provides an alternative so long as no such provider steps forward.
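
A minimal sketch of what such infrastructure might look like at the API level: application code written against a neutral interface, with per-provider drivers behind it. All names here are illustrative, not a real library:

```typescript
// Illustrative only: a provider-neutral blob store interface.
interface BlobStore {
  put(key: string, data: Uint8Array): Promise<void>;
  get(key: string): Promise<Uint8Array>;
}

// Each vendor (or a self-hosted server) ships a driver; application
// code depends only on the interface.
class InMemoryStore implements BlobStore {
  private blobs = new Map<string, Uint8Array>();
  async put(key: string, data: Uint8Array): Promise<void> {
    this.blobs.set(key, data);
  }
  async get(key: string): Promise<Uint8Array> {
    const blob = this.blobs.get(key);
    if (!blob) throw new Error(`no such key: ${key}`);
    return blob;
  }
}

// Application code: no vendor in sight.
async function archive(store: BlobStore, name: string, doc: string): Promise<void> {
  await store.put(name, new TextEncoder().encode(doc));
}
```

Switching providers then means swapping the driver, not porting the application, which is exactly the commodity dynamic of the PC era.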

Cutting has been working on key parts of the necessary infrastructure; read the rest of his post for more.

End Software Patents

Sunday, March 2nd, 2008

I strongly prefer voluntary action. However, software patents are not amenable to workaround and so must be attacked directly through less savory legal, legislative, and electoral routes (though if software patents are toxic to free software, the opposite is also true, so simply creating and using free software is a voluntary if indirect attack on software patents).

Software patents are the major reason multimedia on the web (and on computers generally) is so messed up — few multimedia formats may be implemented without obtaining many patent licenses, and amazingly, this is sometimes impossible:

[The framework] is so patent-encumbered that today no one really knows who has “rights” to it. Indeed, right now, no new MPEG-4 licenses are even being issued.

As the End Software Patents site emphasizes, software patents negatively impact every sector now that everything uses software.

My only problem with the ESP site (and many others, this is just a general peeve of mine) is that it does not even link to similar resources with a non-U.S. jurisdiction focus. For example, the What Can I Do? page might state that if one is reading the page but not in the U.S. (because that never happens), please check out FFII (EU) and similar.

In any case, please join the effort of ESP and others to eradicate software patents, weapons of mass destruction. Ars Technica has a good introductory article on ESP.

OpenID is good for something

Monday, December 31st, 2007

I think I’ve only posted about it once, but I’ve long been extremely skeptical of “digital identity” technologies — evil, hopeless, overhyped (no, giving users control of their identities will not save democracy nor make a pony appear, and there are no scare quotes around the preceding words because I haven’t cornered the market on scare quotes), often more than one of these.

OpenID has been the most reasonable identity technology to come along, mostly because it does very little and builds on existing standards. I still think it’s overhyped. Evan Prodromou recently posted an informative essay on OpenID Privacy Concerns. This bit jumped out at me:

The key to mitigating this, of course, is using strong security on the OpenID provider. The good news is that since your authentication is centralized, you can use much stronger authentication than most Web sites support. I really appreciate using browser certificate authentication on certifi.ca — it’s a very strong system that’s (almost) immune to phishing, brute-force attacks, or other password-stealing scams.

The good thing about OpenID is that it moves authentication to parties that are presumably good at it and can offer stronger authentication methods, without the sites and services you want to log in to having to know anything about authentication technologies (apart from having implemented OpenID login).

I knew that an OpenID provider could authenticate however they want, but the usefulness of this did not click until reading the above, though I’m sure it’s been pointed out to me before.
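
To spell out the abstraction, here is a sketch with hypothetical types; the real OpenID protocol involves discovery, redirects, and signed assertions rather than these simplified objects:

```typescript
// Hypothetical sketch of OpenID's division of labor, not the wire protocol.

// The provider may authenticate however it likes: password, browser
// certificate, hardware token. Relying parties never see which.
interface Authenticator {
  authenticate(userId: string): Promise<boolean>;
}

class BrowserCertAuth implements Authenticator {
  async authenticate(userId: string): Promise<boolean> {
    // e.g., verify a TLS client certificate presented by the browser
    return true; // stubbed for the sketch
  }
}

// All a relying party consumes is an identity plus the provider's
// (verifiable) vouching; the authentication method is abstracted away.
interface Assertion {
  identity: string;        // the OpenID URL
  signatureValid: boolean; // did the provider's signature check out
}

function login(assertion: Assertion): string | null {
  return assertion.signatureValid ? assertion.identity : null;
}
```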

I fairly frequently use the total lack of adoption of browser certificates as a negative example to be learned from when people try to solve supposed problems by throwing crypto into a supposed solution. Perhaps in the distant future this example won’t work, because OpenID (or something else that abstracts out authentication method) is widely implemented, making strong authentication relatively useful and usable.

In the meantime, I’m still a big fan of super simple methods of going passwordless.
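
One such super simple method is the emailed one-time login link. A sketch, with the mailer and session layer assumed; in practice tokens would also expire:

```typescript
// Sketch of passwordless login via emailed one-time links.
// sendEmail is an assumed helper; tokens should also expire in practice.
import { randomBytes } from "crypto";

const pendingLogins = new Map<string, string>(); // token -> email

function requestLogin(
  email: string,
  sendEmail: (to: string, body: string) => void
): void {
  const token = randomBytes(32).toString("hex"); // unguessable
  pendingLogins.set(token, email);
  sendEmail(email, `Click to log in: https://example.org/login?token=${token}`);
}

function redeemToken(token: string): string | null {
  const email = pendingLogins.get(token);
  pendingLogins.delete(token); // strictly single use
  return email ?? null;
}
```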

Peer producing think tank transparency

Wednesday, October 31st, 2007

Hack, Mash & Peer: Crowdsourcing Government Transparency looks like a reasonable exhortation for the U.S. jurisdiction government to publish data in open formats so that government activities may be more easily scrutinized. The paper’s first paragraph:

The federal government makes an overwhelming amount of data publicly available each year. Laws ranging from the Administrative Procedure Act to the Paperwork Reduction Act require these disclosures in the name of transparency and accountability. However, the data are often only nominally publicly available. First, this is the case because it is not available online or even in electronic format. Second, the data that can be found online is often not available in an easily accessible or searchable format. If government information was made public online and in standard open formats, the online masses could be leveraged to help ensure the transparency and accountability that is the reason for making information public in the first place.

That’s great. But if peer produced (a more general and less inflammatory term than crowdsourced; I recommend it) scrutiny of government is great, why not of think tanks? Let’s rewrite that paragraph:

Think tanks produce an overwhelming number of analyses and policy recommendations each year. It is in the interest of the public and the think tanks that these recommendations be of high quality. However, the data and methodology used to produce these positions are often not publicly available. First, this is the case because the data is not available online or even in electronic format. Second, the analysis that can be found online is often not available in an easily accessible or searchable format. Third, nearly everything published by think tanks is copyrighted. If think tank data and analysis were made public online in standard open formats and under open licenses, the online masses could be leveraged to help ensure the quality and public benefit of the policy recommendations that are the think tanks’ reason for existing in the first place.

Think tanks should lead by example, and improve their product to boot. Note the third point above: unlike works of the U.S. federal government, the output of think tanks (and everyone else) is restricted by copyright. So think tanks need to take an additional step, open licensing, to ensure openness.

(Actually think tanks only need to lead in their domain of political economy, by following the trails blazed by the open access movement in scientific publishing.)

This is only the beginning of leading by example for think tanks. When has a pro-market think tank ever subjected its policy recommendations to market evaluation?

Via Reason.

SXSW: Mozilla good bits

Saturday, March 17th, 2007

I missed Tuesday morning’s Browser Wars Retrospective: Past, Present and Future Battlefields for sleep and for the Creative Commons-moderated panel Open Knowledge vs. Controlled Knowledge, but noticed two very interesting items from Mozilla CTO Brendan Eich’s blog post:

I am pushing to make add-on installation not require a restart in Firefox 3, and I intend to help improve and promote GreaseMonkey security in the Firefox 3 timeframe too.

Please do! Drop all other Firefox 3 features if necessary.

And from Eich’s sixth slide:

Working with Opera via WHATWG on <video>

  • Unencumbered Ogg Theora decoder in all browsers
  • Ogg Vorbis for <audio>
  • Other formats possible
  • DHTML player controls

I’ve barely thought about <audio> and <video>, but if their presence could encourage non-obfuscated media URLs I’m predisposed in their favor, and universal deployment of unencumbered audio and video decoders via browsers would be excellent.
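
The “DHTML player controls” bullet is worth a concrete illustration: once <video> is a native element, a player UI is just ordinary script against the media DOM API, no plugin required. A sketch, assuming a page containing <video id="clip" src="clip.ogv"></video>:

```typescript
// Sketch: scripted controls for a native <video> element.
// Assumes the page contains <video id="clip" src="clip.ogv"></video>.
const video = document.getElementById("clip") as HTMLVideoElement;

function togglePlay(): void {
  // The same scriptable API whatever the codec, Ogg Theora included.
  if (video.paused) {
    video.play();
  } else {
    video.pause();
  }
}

function seekTo(seconds: number): void {
  // Clamp to the clip's duration so out-of-range seeks are harmless.
  video.currentTime = Math.max(0, Math.min(video.duration, seconds));
}
```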

Constitutionally open services

Thursday, July 6th, 2006

Luis Villa provokes, in a good way:

Someone who I respect a lot told me at GUADEC ‘open source is doomed’. He believed that the small-ish apps we tend to do pretty well will migrate to the web, increasing the capital costs of delivering good software and giving next-gen proprietary companies like Google even greater advantages than current-gen proprietary companies like MS.

Furthermore:

Seeing so many of us using proprietary software for some of our most treasured possessions (our pictures, in flickr) has bugged me deeply this week.

These things have long bugged me, too.

I think Villa has even understated the advantage of web applications — no mention of security — and overstated the advantage of desktop applications, which amounts to low latency, high bandwidth data transfer — let’s see, video, including video editing, is the hottest thing on the web. Low quality video, but still. The two things client applications still excel at are very high bandwidth, very low latency data input and output, such as rendering web pages as pixels. :)

There are many things that can be done to make client development and deployment easier, more secure, more web-like, and client applications more collaboration-enabled. Fortunately much of this has been tried before by prior technologies of varying relevance, so there’s much to learn from, yet the field is wide open. Web applications on the client are also a possibility, though these typically address only ease of development and not manageability at all.

The ascendancy of web applications does not make the desktop unimportant any more than GUIs made filesystems unimportant. Another layer has been added to the stack, but I am still very happy to see any move of lower layers in the direction of freedom.

My ideal application would be available locally and over the network (usually that means on the web), but I’ll prefer the latter if I have to choose, and I can’t think of many applications that don’t require this choice.

So what can be done to make the web application dominated future open source in spirit, for lack of a better term?

First, web applications should be super easy to manage (install, upgrade, customize, secure, backup) so that running your own is a real option. Applications like WordPress and MediaWiki have made large strides, especially in the installation department, but still require a lot of work and knowledge to run effectively.

There are some applications that centralization makes tractable or at least easier and better, e.g., web scale search, social aggregation — which basically come down to high bandwidth, low latency data transfer. Various P2P technologies (much to learn from, field wide open) can help somewhat, but the pull of centralization is very strong.

In cases where one accepts a centralized web application, should one demand that application be somehow constitutionally open? Some possible criteria:

  • All source code for the running service should be published under an open source license and developer source control available for public viewing.
  • All private data available for on-demand export in standard formats (a sketch follows this list).
  • All collaboratively created data available under an open license (e.g., one from Creative Commons), again in standard formats.
  • In some cases, I am not sure how rare, the final mission of the organization running the service should be to provide the service rather than to make a financial profit, i.e., beholden to users and volunteers, not investors and employees. Maybe. Would I be less sanguine about the long term prospects of Wikipedia if it were for-profit? I don’t know of evidence for or against this feeling.

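On the data-export criterion (the sketch promised in the second bullet above), here is a hypothetical minimum it implies: every user can pull all of their data, on demand, in a documented standard format. The types and names below are illustrative only:

```typescript
// Hypothetical sketch of the data-export criterion: all of a user's
// private data, on demand, in a standard machine-readable format.
interface Post {
  title: string;
  body: string;
  date: string;
}

interface UserData {
  email: string;
  posts: Post[];
}

function exportUserData(data: UserData): string {
  // JSON here; any documented standard format would satisfy the criterion.
  return JSON.stringify(data, null, 2);
}

// Usage: hand the user a complete dump of their own data.
const dump = exportUserData({ email: "user@example.org", posts: [] });
```
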
Consider all of this ignorant speculation. Yes, I’m just angling for more freedom lunches.