Post Open Source

Open Data nuance

Sunday, October 7th, 2012

I’m very mildly annoyed with some discussion of “open data”, in part where it is an amorphous thing for which expectations must be managed, value found, and sustainable business models, perhaps marketplaces, invented, all with an abstract and tangential relationship to software, or “IT”.

All of this was evident at a recent Open Knowledge Foundation meetup at the Wikimedia Foundation offices — but perhaps only evident to me, and I do not really intend to criticize anyone there. Their projects are all great. Nonetheless, I think very general discussion about open data tends to be very suboptimal, even among experts. Perhaps this just means general discussion is suboptimal, except as an excuse for socializing. But I am more comfortable enumerating peeves than I am socializing:

  • “Open” and “data” should sometimes be considered separately. “Open” (as in anyone can use for any purpose, as opposed to facing possible legal threat from copyright, database, patent and other “owners”, even their own governments, and their enforcement apparatuses) is only an expensive policy choice if pursued at too low a level, where rational ignorance and a desire to maintain every form of control and conceivable revenue stream rule. Regardless of “open” policy, or lack thereof, any particular dataset might be worthwhile, or not. But this is the most minor of my annoyances. It is even counterproductive to consider, most of the time — due to the expense of overcoming rational ignorance about “open” policy, and of evaluating any particular dataset, it probably makes a lot of sense to bundle “open data” and agitate for as much data to be made available under as good of practices as possible, and manage expectations when necessary.
  • To minimize the need to make expensive evaluations and compromises, open data needs to be cheap, preferably a side-effect of business as usual. Cheapness requires automation requires software requires open source software, otherwise “open data” institutions are themselves not transparent, are hostage to “enterprise software” companies, and are severely constrained in their ability to help each other, and to be helped by their publics. I don’t think an agitation approach is optimal (I recently attended an OpenOakland meeting, and one of the leaders said something like “we don’t hate proprietary software, but we do love open source”, which seems reasonable) but I am annoyed nevertheless by the lack of priority and acknowledgement given to software by “open data” (and even moreso, open access/content/education/etc) folk in general, strategic discussions (but, in action the Open Knowledge Foundation is better, having hatched valuable open source projects needed for open data). Computation rules all!
  • A “data marketplace” should not be the first suggestion, or even metaphor, for how to realize value from open data — especially not in the offices of the Wikimedia Foundation. Instead, mass collaboration.
  • Open data is neither necessary nor sufficient for better governance. Human institutions (inclusive of “public”, “private”, or any other categorization you like) have been well governed and atrociously governed throughout recorded history. Open data is just another mechanism that in some cases might make a bit of a difference. Another tool. But speaking of managing expectations, one should expect and demand good governance, or at least less atrocity, from our institutions, completely independent of open data!

“Nuance” is a vague joke in lieu of a useful title.

Exit tweet loyalty

Friday, September 21st, 2012

Someday I will read Exit, Voice, and Loyalty (1970) and comment on pertinence to things I write about here (cf my almost due for 8 year refutation notes on The Logic of Collective Action (1965)), but I have long found the concept intuitive.

The Declaration of Twitter Independence has been quickly ridiculed. In addition to its over the top language, one way to think about why is that it seems an almost certainly futile and maybe inappropriate (Twitter won’t listen, and perhaps shouldn’t; Twitter can do whatever they want with their services) attempt at voice, accompanied with a halfhearted at best exit plan (“explore alternate platforms, giving precedence to those who do support such [muddled] principles [until Twitter adopts a more developer friendly policy]”).

“Doing it right” per the crowd I’m most familiar with (including me) is almost all exit: start developing your apps for StatusNet/OStatus and other federated and open source social web software/protocols; any voice should demand support for federation, ie facilitate exit. Twitter apologists would say Twitter is doing the right thing for the Twitter ecosystem, the complainers should deal. Twitter loyal oppositionists would say Twitter is doing its greatness a disservice with its policies and should change. I’m not sure what people who care but are in neither the federated nor Twitter apologist/loyalist camps might think, but I’d like to know.

The Declaration doesn’t lend itself to a charitable reading, but I think it is worth giving it one. Regarding its futile and perhaps inappropriate attempt at voice: it is OK for customers to complain; smart companies often even listen and adjust; Twitter is now a large organization, parts of it very smart; worth a try. Regarding exit, they don’t want to, and there isn’t anyplace completely obvious for them to go, much as I’d like that to be StatusNet/OStatus; “explore alternate platforms” and wanting no limits on how data can be used and shared, and data available in standard formats, all support exit, with the right amount of tentativeness. Although that charitable reading is possible, the Declaration could’ve been written much more strongly regarding all of the points discussed above. Low probability that I’ll fork it to do so.

Collaborative Futures mentions exit, voice, and loyalty in the context of free collaboration projects. It appears from the history that I didn’t write that bit, though it covers a pet concept and uses a pet phrase (configurations). That chapter is way too short, but I’m pleased in retrospect with its nuance, or rather, with the charitable readings I’m able to give it.

When I eventually return to this topic, I will probably complain that software freedom and nearby advocates are overly focused on exit, with lots of untapped potential for the movements in voice and loyalty, possibly the same for political libertarians, and that it is difficult to keep in mind more than two of exit, voice, and loyalty, hence the frequency of their pairings.

In the meantime a post last year by Xavier Marquez on Exit, Voice, and Legitimacy: Responses to Domination in Political Thought seems pretty reasonable to me.

Question Software Freedom Day‽

Saturday, September 15th, 2012

If software freedom is important, it must be attacked, lest it die from the unremitting bludgeoning of obscurity and triviality. While necessary, I don’t particularly mean trivial attacks on overblown cleverness, offensive advocates, terminological nitpicking, obscurantism, fragmentation, poor marketing, lack of success, lack of diversity, and more. Those are all welcome, but mostly (excepting the first, my own gratuitously obscure, nitpicking and probably offensive partial rant against subversive heroic one-wayism) need corrective action such as Software Freedom Day and particularly regarding the last, OpenHatch.

I mostly mean attacking the broad ethical, moral, political, and utilitarian assumptions, claims, and predictions of software freedom. This may mean starting with delineating such claims, which are very closely coupled, righteous expressions notwithstanding. So far, software freedom has been wholly ignored by ethicists, moral philosophers, political theorists and activists, economists and other social scientists. Software freedom people who happen to also be one of the aforementioned constitute a rounding error.

But you don’t have to be an academic, activist, software developer, or even a computer user to have some understanding of and begin to critique software freedom, any more than one needs to be an academic, activist, businessperson, or voter to have some understanding of and begin to critique the theory and practice of business, democracy, and other such institutional and other social arrangements.

Computation does and will ever moreso underlay and sometimes dominate our arrangements. Should freedom be a part of such arrangements? Does “software freedom” as roughly promoted by the rounding error above bear any relation to the freedom (and other desirables; perhaps start with equality and security) you want, or wish to express alignment with?

If you want to read, a place to start is the seminal Philosophy of the GNU Project essays, many ripe for beginning criticism (as are many classic texts; consider the handful of well known works of the handful of philosophers of popular repute; the failure of humanity to move on is deeply troubling).

If you want to listen and maybe watch, presentations this year from Cory Doctorow (about, mp3) and Karen Sandler (short, long).

Law of headlines ending in a question mark is self-refuting in multiple ways. The interrobang ending signifies an excited fallibility, if the headline can possibly be interpreted charitably given the insufferable preaching that follows, this sentence included.

Try some free software that is new to you today. You ought to have LibreOffice installed, even if you rarely use it, in order to import and export formats that whatever else you may be using probably can’t. I finally got around to starting a MediaGoblin instance (not much to see yet).

If you’re into software freedom insiderism, listen to MediaGoblin lead developer Chris Webber on the most recent Free as in Freedom podcast. I did not roll my eyes, except at the tangential mention of my ranting on topics like the above in a previous episode.

Opus!

Tuesday, September 11th, 2012

Opus is now an open source, royalty-free IETF standard. See the Mozilla and Xiph announcements, and congratulations to all involved.

This is a pretty big deal. It seems that Opus is superior to all existing audio codecs in quality and latency for any given bitrate. I will guess that for some large number of years it will be the no-brainer audio codec to use in any embedded application.

Will it replace the ancient (almost ancient enough for relevant patents to expire) but ubiquitous MP3 for non-embedded uses (i.e., where users can interact with files via multiple applications, such as on-disk music libraries)? If I were betting I’d have to bet no, but surely long-term it has a better chance than any free audio codec since Vorbis in the late 1990s. Vorbis never gained wide use outside some classes of embedded applications and free software advocates, but it surely played a big role in suppressing licensing demands from MP3 patent holders. Opus puts a stake through the heart of future audio codec licensing demands, unless some other monopoly can be leveraged (by Apple) to make another codec competitive.

Also, Opus is a great brand. Which doesn’t include an exclamation point. The title of this post merely expresses excitement.

I published an Opus-encoded file July 30. Firefox ≥15 supports Opus, which meant beta at the time, and now means general release.

To publish your own Opus encoded audio files, use opus-tools for encoding, and add a line like the below to your web server’s .htaccess file (or equivalent configuration):

AddType audio/ogg .opus
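Once the MIME type is configured, the file can be embedded with the standard HTML5 audio element. A minimal sketch (the filename is hypothetical; Firefox 15 and later play Opus-in-Ogg natively):

```html
<!-- Fallback text is shown by browsers without Opus support -->
<audio controls>
  <source src="example.opus" type="audio/ogg; codecs=opus">
  Your browser does not support Opus audio.
</audio>
```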

Hopefully the obvious large community sites (Wikimedia Commons and Internet Archive) will accept and support Opus uploads as soon as possible. Unlike their slow action on WebM. Speaking of which, the Mozilla announcement mentions “working on the same thing for video”. I can’t tell whether this means submitting WebM (probably more specifically the VP8 codec) to the IETF or something else, but good luck and thank you in all cases. [Update: The proposed video codec charter starts from some requirements not mentioning any particular codec; my wholly uninformed wild guess is that it will be another venue for VP8 and H.264 camps to argue.] [Update 20120913: Or maybe “same thing for video” means Daala.] [Update 20120914: Greg Maxwell comments with a precise answer below.]

Ride- and car-sharing and computers

Thursday, August 9th, 2012


Underemployed vehicles and land at Fruitvale BART parking lot, the 5th of 11 stations between me and Fremont.

Tuesday I attended Silicon Valley Automotive Open Source presentations on Car- and Ride-sharing. I heard of the group via its organizer, Alison Chaiken, who I noted in February gave the most important talk at LibrePlanet: Why Cars need Free Software.

The talks were non-technical, unlike I gather most previous SVAOS talks (this was the first event in Fremont, which is much more convenient for me than Santa Clara, where most previous talks have been held), but very interesting.

I did not realize how many car- and ride-sharing startups and other initiatives exist. Dozens (in Germany alone?) or hundreds of startups, and all manufacturers, rental companies, and other entities with fleets are at least thinking about planning something. That seems good on its own, and will provide good experience to draw on for the further, more intensive and efficient use of vehicles that robocars will enable.

Carpooling and other forms of ride-sharing have gone up and down with fuel rationing and prices. Carsharing seems to go back to 1948 at least, but with slow growth, only recently becoming a somewhat mainstream product and practice. Ride- and car-sharing ought be complements. Sharing a taxi, shared vans, and even mass transit could in some ways be seen as primitive examples of this complementarity.

Rationing is not in effect now, and real prices aren’t that high, so I imagine current activity must mostly be a result of computers and communications making coordination more efficient. This is highlighted by the reliance and hope of startups and other initiatives on the web and mobile applications and in-car computers and communications for access, control, coordination, reputation, and tracking.

But none of this seems to be open source at the end-user service/product level. Certainly much or even most of it is built on open source components (web as usual, auto internals moving that way). These seem like important arenas to argue against security-through-obscurity in vehicles and their communications systems, and to demand auditability and public benefit for public systems in various senses (one of the startups suggested marketing their platform to municipal governments; if reputation systems are to eventually mediate day-to-day activities, they need scrutiny).

Free as in Software Freedom Law Shows

Wednesday, July 18th, 2012

In the latest Free as in Freedom podcast Karen Sandler and Bradley Kuhn play a recording of and discuss my FOSDEM law&policy presentation from back in February. The podcast covered all but one FOSDEM law&policy talk; see the archives.

I’m very happy with how this episode turned out. I managed to at least briefly include more points in a half hour than I recall having done, and Sandler and Kuhn manage to discuss far more of them than I would’ve hoped. Listen (ogg, mp3) and refer to slides (pdf, odp).

Further notes on two issues mentioned in the discussion follow.

Equality and Freedom

I’m glad that Sandler mentioned free software’s great equality story. But, I should say what I mean by that. I don’t primarily mean equal access, though that’s important. I mean contributing to reducing inequality of income, wealth, power. I’ve done precious little to articulate this, and I don’t know anyone else who has either, but there’s a reason it is the very first of my suggested considerations for future policy. Similarly, I think free software’s grand freedom story is not the proximate freedoms to run, study, modify, share software, but their role in protecting and promoting a free society. Again, much more needs to be said, provocatively (and that, critiqued, etc). Software freedom and nearby ought be claiming space in the commanding heights of political dialogue.

Hardware design licensing

I’m glad that Kuhn stated that he sees no reason for not using GPLv3 for hardware designs, and scoffs (privately, I suppose) at people making up new licenses for the same. As far as I know there are two papers that try to make the case for new hardware design licenses, and as far as I can tell they both fail. But, as far as I know no FLOSS establishment institution has proclaimed the correctness of using GPLv3 or a compatible license for hardware designs, nor explained why, nor reached out to open hardware folk when discussing new such licenses. How can this change? Perhaps such people should be alerted to copyleft-next. Perhaps I should be happy that hardware has been long ignored; one can imagine a universe with an equally twisted late 1990s vintage GNU FHL to accompany the GNU FDL.

Joke background

CC0, passports, and (a related one from Asheesh Laroia is told on the show) credit cards.

In 2009 Sandler and Kuhn interviewed me for the previous podcast, the Software Freedom Law Show. I did not blog about it then, but much of the discussion is probably still pertinent, if you wish to listen.

Copyleft.spin

Monday, July 9th, 2012

My post on 5 years of GPLv3 got picked up by LWN.net, see a medium-sized comment thread. There and elsewhere some people thought I was spinning or apologizing for GNU GPLv3. I can see why; I should’ve included more disclaimers. I certainly was spinning — some of my hobby horses, not for GPLv3. I shouldn’t expect anyone except maybe a few faithful readers and friends to accurately detect my hobby horses. Here are the relevant ones, sans GPLv3 anniversary dressing:

  • Copyleft is pro-sharing regulation.
  • But job 0 of all public licenses is negation of bad default regulation (copyright and patent).
  • Incompatibility, including across magisteria, hampers both the pull to accept pro-sharing regulation and the negation of bad default regulation (I don’t recall addressing the latter explicitly prior to my 5 years of GPLv3 post).

Thus (one of) my stated “metrics” for judging GPLv3, “number and preponderance of works under GPLv3-compatible terms.” I should have generalized: a marker for a license’s success is the extent to which it contributes to increasing the pool of works which may be used together without copyright as a barrier (but possibly as a pro-sharing regulatory enforcement mechanism). It makes sense to talk about this generalization specifically in terms of the pool of GPLv3-compatible works because it is the closest thing we have to a universal recipient (technically AGPLv3 is that, but it is less well known, I don’t know of any license that is AGPLv3 compatible but not GPLv3 compatible, and I’ll cover it further in a future post).
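A toy model may make the “universal recipient” point concrete. Below is an illustrative sketch (not legal advice; the edge set is my deliberate simplification of the relationships discussed in this post) that treats one-way license compatibility as a directed graph and computes the pool of licenses whose works may flow into a work under a given target license:

```python
# Illustrative only: an edge A -> B means a work under license A may be
# incorporated into a work distributed under license B. "GPLv2" here
# means GPLv2-only, hence its incompatibility with GPLv3.
compatible = {
    "MIT": {"BSD3", "Apache2", "GPLv2", "GPLv3", "AGPLv3"},
    "BSD3": {"Apache2", "GPLv2", "GPLv3", "AGPLv3"},
    "Apache2": {"GPLv3", "AGPLv3"},  # GPLv3- but not GPLv2-compatible
    "GPLv2": set(),                  # GPLv2-only is a dead end
    "GPLv3": {"AGPLv3"},
    "AGPLv3": set(),                 # the closest thing to a universal recipient
}

def pool(target):
    """Licenses whose works can be combined into a work under `target`."""
    return {lic for lic, outs in compatible.items()
            if target in outs or lic == target}

print(sorted(pool("GPLv3")))   # note GPLv2-only is excluded
print(sorted(pool("AGPLv3")))  # the largest pool in this toy model
```

The point of the sketch: each incompatible license (here, GPLv2-only) shrinks the pool of works usable together, which is the failure mode the metric above tries to capture.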

I said by this metric GPLv3 is doing pretty well. But that depends on where one wants to start from. GPLv3 allowed for the preexisting Apache2 to be compatible with it, and for MPL2 to be developed such that it could be. But maybe it was ridiculous for Apache2 to have been released without being GPLv2 compatible. And I consider GPLv2’s option for incompatibility with future versions a gross error (which GPLv3 repeated; it just isn’t felt right now as there probably will be no GPLv4 for a very long time). One could reasonably argue that the only useful development in the history of free and open source licensing is BSD3 dropping BSD4’s advertising clause (and to a lesser extent BSD2 also dropping a no endorsement clause — note that BSD# refers to the number of clauses present, not versions, and note that one could argue that people should’ve used similar licenses like MIT and ISC, without the problematic clauses, all along). But given that previous history could not be rewritten in 2006 and 2007, I think GPLv3 has done pretty well, and at the very least could have done a whole lot worse.

Could GPLv3 have done a whole lot better? I appreciate that some people would’ve preferred no versioning, or perhaps a conservative versioning which could have achieved Apache2 and eventually MPL2 compatibility and have been uncontroversial for all GPLv2 licensors to move to. That would have to be weighed against the additional (and controversial, e.g., anti-Tivoization) measures GPLv3 actually has, and as I said in my previous post, I think it will take a long time to be able to judge how big of an instrumental effect those have toward software freedom. But again, in 2012 previous history can’t be rewritten, though consideration of such hypotheticals could possibly help us make improved choices in the future, which is being written now: version 4.0 of various Creative Commons licenses is in development (in addition to the somewhat abstract hobby horses listed above, my previous post can be read as an abstract case for specific CC BY-SA 4.0->GPLv3 compatibility) and Richard Fontana has just started drafting Copyleft.next.

There’s much interesting about Copyleft.next, but regarding compatibility, I like that it is explicitly compatible with any version of the GNU GPL. Copyleft license innovation (which admittedly one ought be extremely skeptical of; see above) and critique-in-a-license are possible while avoiding the obvious harm of incompatibility — declare explicit compatibility. Such can even be a sort of pro-compatibility agitation, of which I consider the Italian Open Data License 1.0 (explicitly compatible with both CC BY-SA and ODbL, which are mostly incompatible with each other) an example.

In case anyone thinks I’m still spinning or apologizing for GPLv3, let me state that over the past 5 years I’ve grown increasingly skeptical about copyleft-via-copyright-licenses (but I still think it has a useful role, especially given inability to rewrite history, and cheer its enforcement; if you don’t like enforcement, use a permissive instrument). I’ll write a post about why in the fullness of time, but for now, the executive summary: institutions, obscurity, politics, stupidity.

To close on an appreciative, celebratory note: forget analysis of GPLv3’s net impact — it is a fine, elegant, and surprisingly readable document. Especially in comparison to extant Creative Commons licenses, which lack both clarity of purpose and readability (according to various readability metrics, CC 4.0 licenses will probably catch up to and maybe surpass GPLv3 on readability). Compatibility aside, one of my major tendencies in advising on CC 4.0 is to advocate for copying GPLv3 in a number of places. I probably won’t ever get to it, but I could do a post or series of posts on the sentences and strategies in GPLv3 that I think are great. Also, check out the FSF’s GPLv3 turns 5 post featuring a cake.

5 years of GPLv3

Friday, June 29th, 2012


Version 3 of the GNU GPL was released 5 years ago today. How successful the license is and will be may become more clear over the next 5 years. Use relative to other free software licenses? Good data and analysis are difficult. The importance of v3’s innovations in protecting and promoting users’ freedoms in practice? Will play out over many years. More software freedom and indeed, general welfare, than in a hypothetical world without GPLv3? Academic questions, and well worth considering.

I suggest that number (add qualifiers of and scaling by importance, quality, etc, as you wish) of works under GPLv3 or use of GPLv3 relative to other licenses are less important markers of GPLv3’s success, and that of the broader FLOSS community, than the number and preponderance of works under GPLv3-compatible terms. Although it is a relatively highly regulatory license, its first and most important job is the same as that of permissive and public domain instruments — grant all permissions possible around default restrictions imposed by current and future bad public policy.

Incompatibility among free licenses means that the licenses have failed at their most important jobs for any case in which one wishes to use works under incompatible terms together in a way that default bad policy restricts. That such cases may currently be edge cases, or even unknown, is a poor excuse for incompatibility. Remember that critique of current bad policy includes the restrictions it places on serendipitous uses and uses in the distant future!

On this number-and-preponderance-of-GPLv3-compatible-works metric, the license and free software community look pretty good (note that permissive licenses such as MIT and BSD, visibly popular among web developers, are GPL-compatible). Probably the most important incompatible terms are GPLv2-only and EPL. But software is suffusing everything, including hardware design, cultural/scientific/documentation works, and data. I hope to see major progress toward eliminating barriers across these overlapping domains in the next years.

Open Source Semiconductor Core Licensing → GPL hardware?

Saturday, May 12th, 2012

In Open Source Semiconductor Core Licensing (pdf; summary) Eli Greenbaum considers when use of semiconductor core designs under the GPL would make the designs of chips and devices, and possibly physical objects based on those designs, trigger GPL requirements to distribute the design of a derived work under the GPL.

It depends of course, but overall Greenbaum’s message for proprietary hardware is exactly the same as innumerable commentators’ messages for proprietary software:

  • If you use any GPL work, be extremely careful to isolate that use in ways that minimize the chances one could successfully claim your larger work triggers GPL requirements;
  • Excluding GPL work would be easier; if you want to incorporate open source works, consider only LGPL (I don’t understand why Greenbaum didn’t mention permissive licenses, but typically they’d be encouraged here).

Greenbaum concludes:

The semiconductor industry has been moving further toward the use of independently developed cores to speed the creation of new devices and products. However, the need for robustly maintained and supported cores and the absence of clear rules and licenses appropriate for the industry’s structure and practice have stymied the development of an open source ecosystem, which might otherwise have been a natural outgrowth of the use of independently developed cores. The development of a context-specific open source license may be the simplest way to clarify the applicable legal rules and encourage the commercial use of open source cores.

That’s something like what John Ackermann wanted to show more generally for hardware designs in a paper I’ve written about before. Each leaves me unconvinced:

  • If one wants copyleft terms, whether to protect a community or proprietary licensing revenue, use the GPL, which gives you plenty of room to aggressively enforce as and if you wish;
  • If you don’t want copyleft terms, use a permissive license such as the Apache License 2.0 (some people understand this but still think a version tweaked for hardware is necessary; I’m skeptical of that too).

Greenbaum does mention, in a footnote, Ackermann’s paper and the TAPR license and other “open hardware” licenses I previously discussed:

While “open hardware” licenses do exist, they do not take account of many of the complexities of the semiconductor device manufacturing process. For example, the TAPR Open Hardware License does not address the use of technology libraries, the incorporation of soft cores in a device design, or the use of independent contractors for parts of the design process.

I think this highlights a difference of perspective. “Open hardware” people inclined toward copyleft want licenses which even more clearly than the GPL impose copyleft obligations on entities that build on copylefted designs. Greenbaum doesn’t even sketch what a license he’d consider appropriate for the industry would look like, but I’m doubtful that a license tailored to enabling some open collaboration but protecting revenues in industry-specific ways would be considered free or open by many people, or be used much.

I suspect the reason open hardware has only begun taking off recently (and will be huge soon) while open semiconductor design has not yet (though for both broad and narrow categories people have been working on it for well over a decade) has almost nothing to do with the applicability of widely used licenses (which are far from ideal even for software, but network effects rule) and everything to do with design and production technologies that make peer production a useful addition.

Although I think the conclusion is weak (or perhaps merely begs for a follow-up explaining the case), Greenbaum’s paper is well worth reading, in particular section VI, Distribution of Physical Devices, which makes the case that the GPL applies to such devices based on copyright, contract and copyright-like restrictions, and patent. These are all really important issues for info/innovation/commons governance to grapple with going forward. My hope is that existing license stewards take this to heart (e.g., do serious investigations of how GPLv3+ and Apache 2.0 can best be used for designs, and take what is learned and what the relevant communities say when in the fullness of time the next versions of those licenses are developed; the best contribution Creative Commons can probably make is to increase compatibility with software licenses and disrecommend direct use of CC licenses for designs, as it has done for software) and that newer communities not operate in an isolated manner when it comes to commons governance.

[e]Book escrow

Thursday, May 10th, 2012

I had no intention of writing yet another post about DRM today. But a new post on Boing Boing, Libraries set out to own their ebooks, has some of the same flavor as some of the posts I quoted yesterday and is a good departure (for making a few more points, and not writing any more about the topic for some time).

Today’s Boing Boing post (note their Day Against DRM post from last week) says a library in Colorado is:

buying eBooks directly from publishers and hosting them on its own platform. That platform is based on the purchase of content at discount; owning—not leasing—a copy of the file; the application of industry-standard DRM on the library’s files; multiple purchases based on demand; and a “click to buy” feature.

I think that’s exactly what Open Library is doing (maybe excepting “click to buy”; not sure what happened to “vending” mentioned when BookServer was announced). A letter to publishers from the library is fairly similar to the Internet Archive’s plea of a few days ago. Excerpt:

  • We will attach DRM when you want it. Again, the Adobe Content Server requires us to receive the file in the ePub format. If the file is “Creative Commons” and you do not require DRM, then we can offer it as a free download to as many people as want it. DRM is the default.
  • We will promote the title. Over 80% of our adult checkouts (and we checked out over 8.2 million items last year) are driven by displays. We will present e-content data (covers and descriptions) on large touch screens, computer catalogs, and a mobile application. These displays may be “built” by staff for special promotions (Westerns, Romances, Travel, etc.), automatically on the basis of use (highlighting popular titles), and automatically through a recommendation engine based on customer use and community reviews.
  • We will promote your company. See a sample press release, attached.

I did not realize libraries were so much like retail (see “driven by displays”). Disturbing, but mostly off-topic.

The letter lists two concerns, both financial. Now: give libraries discounts. Future: allow them to sell used copies. DRM is not a concern now, nor for the future. As I said a couple days ago, I appreciate the rationale for making such a deal. Librarian (and Wikimedian, etc) Phoebe Ayers explained it well almost exactly two years ago: benefit patrons (now). Ok. But this seems to me to fit what ought to be a canonical definition of non-visionary action: choosing to climb a local maximum which will be hard to climb down from, with higher peaks in full view. Sure, the trails are not known, but must exist. This “vision” aspect is one reason Internet Archive’s use of DRM is more puzzling than local libraries’ use.

Regarding “owning—not leasing—a copy of the file”, I now appreciate more a small part of the Internet Archive’s recent plea:

re-format for enduring access, and long term preservation

Are libraries actually getting books from publishers in formats ideal for these tasks? I doubt it, but if they are, that’s a very significant plus.

I dimly recall source code escrow being a hot topic in software around 25 years ago. (At which time I was reading industry rags…at my local library.) I don’t think it has been a hot topic for a long time, and I’d guess because the ability to run the software without a license manager, and to inspect, fix, and share the software right now, on demand, rather than as a failsafe mechanism, is a much, much better solution. Good thing lots of people and institutions over the last decades demanded the better solution.