Free software needs P2P

Luis Villa on my constitutionally open services post:

It needs a catchier name, but his thinking is dead on- we almost definitely need a server/service-oriented list of freedoms which complement and extend the traditional FSF Four Freedoms and help us think more clearly about what services are and aren’t good to use.

I wasn’t attempting to invent a name, but Villa is right about my aim — I decided not to mention the four freedoms because I felt my thinking was too muddled to dignify with such a mention.

Kragen Sitaker doesn’t bother with catchy names in his just-posted draft essay The equivalent of free software for online services. I highly recommend reading the entire essay, which is as incisive as it is historically informed, but I’ve pulled out his statement of the problem:

So far, all this echoes the “open standards” and “open formats” discussion from the days when we had to take proprietary software for granted. In those days, we spent enormous amounts of effort trying to make sure our software kept our data in well-documented formats that were supported by other programs, and choosing proprietary software that conformed to well-documented interfaces (POSIX, SQL, SMTP, whatever) rather than the proprietary software that worked best for our purposes.

Ultimately, it was a losing game, because of the inherent conflict of interest between software author and software user.

And the solution:

I think there is only one solution: build these services as decentralized free-software peer-to-peer applications, pieces of which run on the computers of each user. As long as there’s a single point of failure in the system somewhere outside your control, its owner is in a position to deny service to you; such systems are not trustworthy in the way that free software is.

This is what has excited me about decentralized systems since long before P2P filesharing.

Luis Villa also briefly mentioned P2P in relation to the services platforms of Amazon, eBay, Google, Microsoft and Yahoo!:

What is free software’s answer to that? Obviously the ‘spend billions on centralized servers’ approach won’t work for us; we likely need something P2P and/or semantic-web based.

Wes Felter commented on the control of pointers to data:

I care not just about my data, but the names (URLs) by which my data is known. The only URLs that I control are those that live under a domain name that I control (for some loose value of control as defined by ICANN).

I hesitated to include this point because I hesitate to recommend that most people host services under a domain name they control. What is the half-life of vs. or vs. Wouldn’t it suck to be John Smith if everything in his life pointed at and the domain was hijacked? I think Wes and I discussed exactly this outside CodeCon earlier this year. Certainly it is preferable for a service to allow hosting under one’s own domain (as Blogger and several others do), but I wish I felt a little more certain of the long-term survivability of my own [domain] names.

This post could be titled “freedom needs P2P” but for the heck of it I wanted to mirror “free culture needs free software.”

9 Responses

  1. […] Mike Linksvayer My opinions only. I do not represent any organization in this publication. « Free software needs P2P […]

  2. […] Mike Linksvayer is da bomb. Everyone interested in the future of free software should go read. More later. […]

  3. The European Union is sponsoring one free software P2P project: The Digital Business Ecosystem.

    DBE’s idea is to enable small businesses to work with each other by linking up their business apps in a P2P network.

    Some info from my DBE speech in Brazil last spring:

  4. […] Mike Linksvayer is probably right: what free software needs is a developer-friendly, user-friendly p2p platform, so that we can do all the things flickr and others do, but do it with shared bandwidth instead of centralized bandwidth. Hard, I know, but quite possibly necessary. Maybe we need to beg the Coral CDN guys for help :) […]

  5. Jean Jordaan says:

    Persistent URLs:

    A PURL is a Persistent Uniform Resource Locator. Functionally, a PURL is a URL. However, instead of pointing directly to the location of an Internet resource, a PURL points to an intermediate resolution service. The PURL resolution service associates the PURL with the actual URL and returns that URL to the client. The client can then complete the URL transaction in the normal fashion. In Web parlance, this is a standard HTTP redirect.

    It’s by OCLC, “a nonprofit computer library service and research organization whose computer network and services link more than 21,000 libraries in 63 countries and territories.”
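The redirect step the comment describes is simple enough to sketch. Below is a minimal, hypothetical model of a PURL-style resolver: a lookup table maps persistent identifiers to current locations, and resolution returns a standard HTTP redirect (302 with a Location header) rather than the resource itself. The table contents and function names here are illustrative, not OCLC’s actual implementation.

```python
# Hypothetical PURL-style resolver: the persistent name is decoupled
# from the resource's current location, so the location can change
# without breaking the name.
PURL_TABLE = {
    "/net/example/essay": "https://example.org/2006/essay.html",
}

def resolve_purl(path):
    """Resolve a PURL path to an HTTP redirect.

    Returns (status, location): a 302 redirect to the current URL
    if the PURL is registered, or (404, None) if it is not.
    """
    target = PURL_TABLE.get(path)
    if target is None:
        return 404, None
    # A standard HTTP redirect: the client retries at `location`.
    return 302, target
```

The point of the indirection is that only the resolver’s table needs updating when a resource moves; every published link to the PURL keeps working.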

  6. […] $100 million could fund a huge amount of new free content, free software, free infrastructure and supporting institutions, begetting more of the same. […]

  7. […] An advertising-fueled Mediawiki Foundation could fund dozens of much needed Mozilla Firefox sized projects. And many Creative Commons (which just successfully completed its much more modest annual funding campaign) initiatives. :) […]

  8. […] work to be done to maintain software freedom is technical and pragmatic, for example writing P2P applications, making sharing modified source of network applications a natural part of deployment (greatly eased […]

  9. […] but a different take on not entirely the same problem), and a test of hosting (which includes identifiers) permanence, I uploaded their campaign video various places. I’ve ordered below by my guess […]
