LegacyOffice?

Wednesday, July 30th, 2014

LibreOffice 4.3 announcement, release notes with new feature screenshots, developer perspective. Perhaps most useful, a feature comparison of LibreOffice 4.3 and Microsoft Office 2013.

Overall a great six month release. Coming early next year: 4.4.

Steady progress is also being made on policy. “The default format for saving [UK] government documents must be Open Document Format (ODF)” — the genuinely open standard used by LibreOffice; Glyn Moody has a good writeup. I occasionally see news of large organizations migrating to LibreOffice, most recently the city of Toulouse. Hopefully many more will manage to escape “effective captivity” to a single vendor (Glyn Moody for that story too).

(My take on the broad importance of open policy and software adoption.)

Also, recent news of work on a version of LibreOffice for Android. But nothing on LibreOffice Online (in a web browser), which as far as I can tell remains a prototype. WebODF is an independent implementation of ODF viewing and editing in the browser. Any of these probably require lots of work to be as effective a collaboration solution as Google Docs — much of the work outside the editing software, e.g., integration with “sharing” mechanisms (e.g., WebODF and ownCloud) and ready availability of deployments of those mechanisms (Sandstorm is promising recent work on deployment, especially security).

From what I observe, Google Docs has largely displaced Microsoft Office (except for large documents or ones heavily formatted for print or facsimile), though I guess that’s not the case in large organizations with internal sharing mechanisms. I suspect Google Docs (especially spreadsheets) has also expanded the use of “office” software, in part replacing wiki use cases. Is there any reason to think that free/open source software isn’t as far behind now as it was in 2000, before the open source release of OpenOffice.org, LibreOffice’s predecessor?

Still, I consider LibreOffice one of the most important free software projects. I expect it will continue to be developed and used on millions of “legacy” desktops for decades after captivity to Microsoft is long past, indeed long after desktop versions of Microsoft Office have been EoL’d. Hopefully LibreOffice’s strong community, development, governance, and momentum (all vastly improved over OpenOffice.org), in combination with open policy work (almost non-existent in 2000) and other projects, will obtain a result much better than even this valuable one.

Open policy for a secure Internet-N-Life

Saturday, June 28th, 2014

(In)Security in Home Embedded Devices: Jim Gettys says software needs to be maintained for decades, considering where it is being deployed (e.g., embedded in products with multi-decade lifetimes, such as buildings) and the criticality of some of that software, an unpredictable attribute — a product might become unplanned “infrastructure”, for example, if it is widely deployed and other things come to depend on it. Without maintenance, including deployment of updates in the field, software (and thus the systems it is embedded in) becomes increasingly insecure as vulnerabilities are discovered (he cites a honeymoon period enjoyed by new systems).

This need for long-term maintenance and field deployment implies open source software and devices that users can upgrade — maintenance needs to continue beyond the expected life of any product or organization. “Upgrade” can also mean “replace” — perhaps some kinds of products should be more modular and have open designs, so that parts that are themselves embedded systems can be swapped out. (Gettys didn’t mention it, but replacement can be total. Perhaps “planned obsolescence” and “throwaway culture” have some security benefits. I suspect the response would be that many things continue to be used for a long time after they were planned to be obsolete and most of their production run siblings are discarded.)

But these practices are currently rare. Product developers do not demand source from chip and other hardware vendors, and thus ship products with “binary blob” hardware drivers for the Linux kernel which cannot be maintained, often based on a kernel years out of date when the product ships. The Linux kernel near-monoculture in embedded systems increases the security threat. Many problems do not depend on hardware vendor cooperation, ranging from unintentionally or lazily not providing source needed for the rest of the system, to intentionally shipping proprietary software, to intentionally locking down devices to prevent user updates. Product customers do not demand long-term secure devices from product developers. And there is little effort to fund commons-oriented embedded development (in contrast with Linux kernel and other systems development for servers, which many big companies fund).

Gettys is focused on embedded software in network devices (e.g., routers), as network access is critical infrastructure much else depends on, including the problem at hand: without network access, many other systems cannot be feasibly updated. He’s working on CeroWrt, a cutting-edge version of OpenWrt firmware, either of which is several years ahead of what typically ships on routers. A meme Gettys wishes to spread, the earliest instance of which I could find is on cerowrt-devel, with a harsh example coming the next week:

Friends don’t let friends run factory firmware.

Cute. This reminds me of something a friend said in a group discussion that touched on security and embedded-in-body (or perhaps it was mind-embedded-in) systems, along the lines of “I wouldn’t run (on) an insecure system.” Or malware would give you a bad trip.

But I’m ambivalent. Most people, thus most friends, don’t know what factory firmware is. Systems need to be much more secure (for the long term, including all that implies) as shipped. Elite friend advice could help drive demand for better systems, but I doubt “just say no” will help much — its track record for altering mass outcomes, e.g., with respect to proprietary software or formats, seems very poor.

In Q&A someone asked about centralized cloud silos. Gettys doesn’t like them, but said without long-term secure alternatives that can be deployed and maintained by everyone there isn’t much hope. I agree.

You may recognize “open source software and devices that users can upgrade” above as roughly the conditions of GPL-3.0. Gettys mentioned this and noted:

  • It isn’t clear that copyright-based conditions are an effective mechanism for enforcing these requirements. (One reason I say copyleft is a prototype for more appropriate regulation.)
  • Of “life, liberty, and pursuit of happiness”, free software has emphasized the latter two, but nobody realized how important free software would be for living one’s life given the extent to which one interacts with and depends on (often embedded) software. In my experience people have realized this for many years, but it should indeed move to the fore.

Near the end Gettys asked what role industry and government should have in moving toward safer systems (and skip the “home” qualifier in the talk title; these considerations are at least as important for institutions and large-scale infrastructure). One answer might be in open policy. Public, publicly-interested, and otherwise coordinated funders and purchasers need to be convinced there is a problem and that it makes sense for them to demand their resources help shift the market. The Free Software Foundation’s Respects Your Freedom criteria (ignoring the “public relations” item) are a good start on what should be demanded for embedded systems.

Obviously there’s a role for developers too. Gettys asked how to get beyond the near Linux kernel monoculture, mentioning BSD. My ignorant wish is that developers wanting to break the monoculture instead try to build systems using better tools, at least better languages (not that any system will reduce the need for security in depth).

Here’s to a universal, secure, and resilient web and technium. Yes, these features cost. But I’m increasingly convinced that humans underinvest in security (not only computer, and at every level), especially in making sure investments aren’t theater or worse.

{ "title" : "API commons II" }

Tuesday, June 24th, 2014

API Voice:

Those two posts by API Evangelist (another of his sites) Kin Lane extract bits of my long post on these and related matters, as discussed at API Con. I’m happy that even one person obtained such clear takeaways from reading my post or attending the panel.

Quick followups on Lane’s posts:

  • I failed to mention that never requiring permission to implement an API must include not needing permission to reverse engineer or discover an undocumented API. I do not know whether what this implies in the context of web service APIs has been thoroughly explored.
  • Lane mentions a layer that I missed: the data model or schema. Or models, including for inputs and outputs of the API, and of whatever underlying data it is providing access to. These may fall out of other layers, or may be specified independently.
  • I reiterate my recommendation of the Apache License 2.0 as currently the best license for API specifications. But I really don’t want to argue with pushing CC0, which has great expressive value even if it isn’t absolutely perfect for the purpose (it explicitly does not license patents).

API commons

Thursday, May 29th, 2014

Notes for panel The API Copyright Emergency: What’s Next? today at API Con SF. The “emergency” is the recent decision in Oracle v. Google, which I don’t discuss directly below, though I did riff on the ongoing case last year.

I begin with, and come back to a few times, Creative Commons licenses, as I was on the panel as a “senior fellow” for that organization; but apart from such emphasis and framing, this is more or less what I think. I got about 80% of the below in on the panel, but hopefully it is still worth reading even for attendees.

A few follow-up thoughts after the notes.

Creative Commons licenses, like other public licenses, grant permissions around copyright, but as CC’s statement on copyright reform concludes, licenses “are not a substitute for users’ rights, and CC supports ongoing efforts to reform copyright law to strengthen users’ rights and expand the public domain.” In the context of APIs, default policy should be that independent implementation of an API never require permission from the API’s designer, previous implementer, or other rightsholder.

Without such a default policy of permission-free innovation, interoperability and competition will suffer, and the API community invites late and messy regulation at other levels intending to protect consumers from resulting lock-in.

Practically, there are things API developers, service providers, and API consumers can do and demand of each other, both to protect the community from a bad turn in default policy, and to go further in creating a commons. But using tools such as those CC provides, and choosing the right tools, requires looking at what an API consists of, including:

  1. API specification
  2. API documentation
  3. API implementations, server
  4. API implementations, client
  5. Material (often “data”) made available via API
  6. API metadata (e.g., as part of an API directory)

(depending on construction, these could all be generated from an annotated implementation, or could each be separate works)

and what restrictions can be pertinent:

  1. Copyright
  2. Patent

(many other issues can arise from providing an API as a service, e.g., privacy, though those are usually not in the range of public licenses and are orthogonal to API “IP”, so I’ll ignore them here)

1-4 are clearly works subject to copyright, while 5 and 6 may or may not be (e.g., hopefully not if purely factual data). Typically only 3 and 4 might be restricted by patents.
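
For those who prefer code to prose, here is my reading of that analysis as a lookup table (illustrative only, and certainly not legal advice):

    # Which restrictions typically attach to each API element listed above.
    # Numbering matches the list of API elements; my reading, not legal advice.
    API_COMPONENTS = {
        "1. specification":           ("copyright: yes",   "patent: rarely"),
        "2. documentation":           ("copyright: yes",   "patent: rarely"),
        "3. server implementation":   ("copyright: yes",   "patent: possibly"),
        "4. client implementation":   ("copyright: yes",   "patent: possibly"),
        "5. material made available": ("copyright: maybe", "patent: rarely"),
        "6. metadata":                ("copyright: maybe", "patent: rarely"),
    }

    for component, restrictions in API_COMPONENTS.items():
        print(component, "->", ", ".join(restrictions))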

Standards bodies typically do their work primarily around 1. Relatively open ones, like the W3C, obtain agreement from all contributors to the standard to permit royalty-free implementation of the standard by anyone, typically including a patent license and permission to prepare and perform derivative works (i.e., copyright, to the extent such permission is necessary). One option you have is to put your API through an existing standards organization. This may be too heavyweight, or may yet be appropriate if your API is really a multi-stakeholder thing with multiple peer implementations; the W3C now has a lightweight community group venue which might be appropriate. The Open Web Foundation’s agreements allow you to take this approach for your API without involvement of an existing standards body. Lawrence Rosen has/will talk about this.

Another approach is to release your API specification (and necessarily 2-4 to the extent they comprise one work, ideally even if they are separate) under a public copyright license, such as one of the CC licenses, the CC0 public domain dedication, or an open source software license. Currently the most obvious choice is the Apache License 2.0, which grants copyright permission as well as including a patent peace clause. One or more of the CC licenses are sometimes suggested, perhaps because specification and documentation are often one work, and the latter seems like a “creative” work. But keep in mind that CC does not recommend using its licenses for software, and instead recommends using an open source software license (such as Apache): no CC license includes explicit patent permission, and depending on the specific CC license chosen, it may not be compatible with software licenses, contrary to the goal of granting clear permission for independent API implementation, even in the face of a bad policy turn.

One way to go beyond mitigating “API copyrightability” is to publish open source implementations, preferably production ones, though reference implementations are better than nothing. These implementations would be covered by whatever copyright and patent permissions are granted by the license they are released under — again, Apache 2.0 is a good choice, and for software implementations CC licenses should not be used; other software licenses such as [A]GPL might be pertinent depending on business and social goals.

Another way to create a “thick” API commons is to address material made available via APIs, and metadata about APIs. There, CC tools are likely pertinent, e.g., use CC0 for data and metadata to ensure that “facts are free”, as they ought be in spite of other bad policy turns.

To get even thicker, consider the architecture, for lack of a better term, around API development, services, and material accessed and updated via APIs. Just some keywords: Linked Open Data, P2P, federation, Lots of Copies Keep Stuff Safe, collaborative curation.

The other panelists were Pamela Samuelson, Lawrence Rosen, and Annette Hurst, moderated by David Berlind.

I’m fairly familiar with Samuelson’s and Rosen’s work and don’t have comments on what they said on the panel. If you want to read more, I recommend among Samuelson’s papers The Strange Odyssey of Software Interfaces and Intellectual Property Law which shows that the “API copyright emergency” of the panel title is recurrent and intertwined with patent, providing several decades of the pertinent history up to 2008. Contrary to my expectation in the notes above, Rosen didn’t get a chance to talk about the Open Web Foundation agreements, but you can read his 2010 article Implementing Open Standards in Open Source which covers OWF.

Hurst is a lawyer for Orrick representing Oracle in the Oracle v. Google case, so understandably advocated for API copyright, but in the process made several deeply flawed assertions that could have consumed the entire duration of the panel, but Berlind did a good job of keeping the conversation moving forward. Still, I want to mention two high level ones here, my paraphrases and responses:

Without software copyright the software economy would go away. This is refuted by software development not for the purposes of selling licenses (which is the vast majority of it), especially free/open source software development, and services (e.g., API provision, the source of which is often never published, though it ought be, see “going beyond” recommendations above). Yes the software economy would change, with less winner-take-all monopoly and less employment for Intellectual Parasite lawyers. But the software economy would be huge and very competitive. Software is eating the world, remember? One way to make it help rather than pejoratively eat the world is to eject the parasites along for the ride.

Open source can’t work without software copyright. This is refuted by 1) software source sharing before software copyright; 2) preponderance of permissively licensed open source software, in which the terms do not allow suing downstream developers who do not share back; 3) the difficulty of enforcing copyleft licenses which do allow for suing downstream developers who do not share back; 4) the possibility of non-copyright regulation to force sharing of source (indeed I see the charitable understanding of copyleft as prototyping such regulation; for perspective on the Oracle v. Google case from someone with a more purely charitable understanding of copyleft, see Bradley Kuhn); and 5) demand and supply mechanisms for mandating sharing of source (e.g., procurement policies, distribution policies such as Debian’s).

These came up because Hurst seemed to really want the audience to conflate software copyright in general (not at issue in the case, settled in a bad place since the early 1980s) and API copyright specifically. Regarding the latter, another point which could have been made is the extent to which free/open source software has been built around providing alternatives to proprietary software, often API-compatible. If API copyright could prevent compatible implementation without permission of any sort, open source, competition, and innovation would all be severely hampered.

There is a recent site called API Commons, which seems to be an API directory (Programmable Web, which ran the conference, also has one). My general suggestion to both would be to implement and facilitate putting all elements of APIs listed above in my notes in the commons. For example, they could clarify that API metadata they collect is in the public domain, publish it as Linked Open Data, and encourage API developers and providers they catalog to freely license specifications, documentation, implementations, and data, and note such in the directories.

In order to get a flavor for the conference, I listened to yesterday morning’s keynotes, both of which made valiant attempts to connect big picture themes to day to day API development and provision. Allow me to try to make connections back to “API commons”.

Sarah Austin, representing the San Francisco YMCA, pointed out that the conference is near the Tenderloin neighborhood, the poorest in central San Francisco. Austin asked whether kids from the Tenderloin would be able to find jobs in the “API economy” or would be priced out of the area (many tech companies have moved nearby in the last years, Twitter perhaps the best known).

Keith Axline claimed The Universe Is Programmable. We Need an API for Everything, or to some extent, that learning about the universe and how to manipulate it is like programming. Axline’s talk seemed fairly philosophical, but could be made concrete with reference to the Internet of Things, programmable matter, robots, nanobots, software eating the world … much about the world will indeed soon be software (programmable) or obsolete.

Axline’s conclusion was in effect largely about knowledge policy, including mourning energy wasted on IP, and observing that we should figure out public support for science or risk a programmable world dominated by IP. That might be part of it, but it keeps the focus on funding, which is just where IP advocates want it — IP is an off-the-balance-sheets, “free” taking. A more direct approach is needed — get the rules of knowledge policy right, put freedom and equality as its top goals, reject freedom infringing regimes, promote commons (but mandating all these as a condition of public and publicly interested funding is a reasonable starting place) — given these objectives and constraints, then argue about market, government, or other failure and funding.

Knowledge policy can’t directly address Austin’s concerns in the Tenderloin, but it does indirectly affect them, and over the long term will tremendously affect them, in the Tenderloin and many other places. As the world accelerates its transition from an industrial to a knowledge dominated economy, will that economy be dominated by monopoly and inequality or freedom and equality? Will the former concentrations continue to abet instances of what Jane Jacobs called “catastrophic money” rushing into ill-prepared neighborhoods, or will the latter tendencies spread knowledge, wealth, and opportunity?

How different would the net be without Firefox?

Sunday, April 6th, 2014

David Flanagan, the latest to make a claim I’ve read many times:

Without Mozilla, there would have been no Firefox, and the internet would be very different today.

Mitchell Baker in only a few more words included a combined mechanism and outcome:

We moved the desktop and browsing environments to a much more open place, with far more options and control available to individuals.

Baker further explained Mozilla aims to make an analogous difference in the computing environment of today and the future:

Today we live in a different online era. This era combines desktop, mobile devices, cloud services, big data and a social layer. It is feature-rich, highly centralized, and focused on a few giant organizations that exert control over almost all aspects of the experience. Today’s computing environment is deeply in need of an open, exciting alternative that shows what the Open Web brings to this setting — something built on parts including Firefox OS, WebGL, asm.js, and the many other innovations being developed at Mozilla. It is comparable to the desktop computing environment we set out to revolutionize when we started Mozilla.

Mozilla needs to bring a similar scope of change to the new computing era. Once again, Mozilla needs to break down the walled gardens of online life and bring openness and opportunity to all. Once again, we have the chance to build products and communities in a way that no one else will.

(Baker’s post announced Brendan Eich as CEO, Flanagan lays out some information following Eich’s resignation. That crisis presumably changed nothing about evaluations of Mozilla’s previous impact, nor its plans for analogous future impact. The crisis just provided an opportunity for many to repeat such evaluations and plans. This post is my idiosyncratic exploitation of the opportunity.)

Those are important claims and plans, and I tend to strongly agree with them. My logic, in brief:

  • there’s a lot of scope for the net (and society at large) to be substantially more or less “open” than it is or might be due to relatively small knowledge policy and knowledge economy changes;
  • there’s a lot of scope for commons-based projects to push the knowledge economy (and largely as an effect, knowledge policy) in the direction of “open”;
  • due to network effects and economies of scale, huge commons-based projects are needed to realize this potential for pushing society in an “open” direction;
  • Mozilla is one of a small number of such huge commons-based projects, and its main products have been and will be in positions with lots of leverage.

Independent of my logic (which of course I doubt and welcome criticism of) for agreeing with them, I think claims about Mozilla’s past and potential future impact are important enough to be criticized and refined rather than suffering the unremitting bludgeoning of obscurity or triviality.

How could one begin to evaluate how much and what sort of difference Mozilla, primarily through Firefox, has made? Some things to look at:

  • other free/open source software browser projects;
  • competition among proprietary browsers;
  • differences between Firefox and proprietary browsers in developing and implementing web standards;
  • all aspects of Mozilla performance vs. comparable (Mozilla is different in many respects, but surely amenable to many tools of organizational evaluation and comparison) organizations;
  • 2nd order effects of a superior (for a period, and competitive otherwise) free/open source browser, e.g., viability of the free desktop (though never achieving significant market share, it must be responsible for huge increases in consumer surplus through constraint on proprietary pricing and behavior), inspiration for other open source projects, and demonstration of the feasibility of commons-based competition in a mass market.

It’s possible that such questions are inadequate for characterizing the impact of Mozilla, but surely they would help inform such characterization. If those are the wrong questions, or the wrong sort of questions, what are the right ones? Has anyone, in any field, taken evaluation of Mozilla’s differential impact beyond the Baker quote above? I’d love to read about how the net would have been different without Firefox, and how we might expect the success or failure of new Mozilla initiatives to produce different worlds.

These kinds of questions are also important (or at a minimum, interesting to me) for other commons-based initiatives, e.g., Wikimedia and Creative Commons.

Counter-donate in support of marriage equality and other Mozilla-related notes

Saturday, March 29th, 2014

I’m a huge fan of Mozilla and think their work translates directly into more human rights and equality. So like many other people, I find it pretty disturbing that their new CEO, Brendan Eich, donated US$1000 in support of banning same sex marriage. True, this is scrutiny beyond what most organizations’ leaders would receive, and Mozilla indeed seems to have excellent support for LGBT employees, endorsed by Eich, and works to make all welcome in the Mozilla community. But I think Evan Prodromou put it well:

If you lead an organization dedicated to human rights, you need to be a defender of human rights.

Maybe Eich will change his mind. Perhaps he believes an ancient text attributed to an ultra powerful being commands him to oppose same sex marriage. Believers have come around to support all kinds of liberal values and practices in spite of such texts. Perhaps he considers marriage an illegitimate institution and would prefer equality arrive through resetting marriage to civil unions for all, or something more radical. I can comprehend this position, but it isn’t happening this generation, and is no excuse for delaying what equality can be gained now.

In the meantime one thing that Mozilla supporters might do to counter Eich’s support for banning same sex marriage, short of demanding he step down (my suspicion is that apart from this he’s the best person for the job; given what the mobile industry is, someone from there would likely be a threat to the Mozilla mission) is to “match” it in kind, with counter-donations to organizations supporting equal rights for LGBT people.

Freedom to Marry seems to be the most directly counter to Eich’s donation, so that’s what I donated to. The Human Rights Campaign is probably the largest organization. There are many more in the U.S. and around the world. Perhaps Eich could counter his own donation with one to an organization working on more basic rights where homosexuality is criminalized (of course once that is taken care of, they’ll demand the right to marry too).

Other Mozilla-related notes that I may otherwise never get around to blogging:

  • Ads in new tabs (“directory tiles”) have the potential to be very good. More resources for Mozilla would be good, “diversification” or not. Mozilla’s pro-user stance ought make their design and sales push advertisers in the direction of signaling trustworthiness, and away from the premature optimization of door-to-door sales. They should hire Don Marti, or at least read his blog. But the announcement of ads in new tabs was needlessly unclear.
  • Persona/BrowserID is brilliant, and with wide adoption would make the web a better place and further the open web. I’m disappointed Mozilla never built it into Firefox, and has stopped paying for development, handing it over to the community. But I still hold out some hope. Mozilla will continue to provide infrastructure indefinitely. Thunderbird seems to have done OK as a community development/Mozilla infrastructure project. And the problem still needs to be solved!
  • Contrary to just about everyone’s opinions it seems, I don’t think Mozilla’s revenue being overwhelmingly from Google is a threat, a paradox, or ironic. The default search setting would be valuable without Google. Just not nearly as valuable, because Google is much better at search and search ads than its nearest competitors. Mozilla has demonstrated with FirefoxOS that they’re willing to compete directly with Google in a hugely valuable market (mobile operating systems, against Android). I have zero inside knowledge, but I’d bet that Mozilla would jump at the chance to compete with Google on search or ads, if they came upon an approach which could reasonably be expected to be superior to Google’s offerings in some significant ways (to repeat, unlike Google’s nearest search and ads competitors today). Of course Mozilla is working on an ads product (first item), leveraging Firefox real estate rather than starting two more enormous projects (search and search ads; FirefoxOS must be enough for now).
  • The world needs a safe systems programming language. There have been and are many efforts, but Mozilla-developed Rust seems to have by far the most promise. Go Rust!
  • Li Gong of Mozilla Taiwan and Mozilla China was announced as Mozilla’s new COO at the same time Eich was made CEO. I don’t think this has been widely noted. My friend Jon Phillips has been telling me for years that Li Gong is the up and coming power. I guess that’s right.

I’m going to continue to use Firefox as my main browser, I’ll probably get a FirefoxOS phone soon, and I hope Mozilla makes billions with ads in new tabs. As I wrote this post Mozilla announced it supports marriage equality as an organization (even if the CEO doesn’t). Still, make your counter-donation.

WWW next 25: Universal, Secure, Resilient?

Wednesday, March 12th, 2014

Today folks seem to be celebrating the 25th anniversary of a 1989 proposal for what is now the web — implementation released to the public in August, 1991.

Q&A with web inventor Timothy Berners-Lee: 25 years on, the Web still needs work.

The web is pretty great, much better than easily imagined alternatives. Three broad categories it could improve in:

  • Universality. All humans should be able to access the web, and this should be taken to include being able to publish, collaborate, do business, and run software on the web, in any manner, in any language or other interface. Presently, billions aren’t on the net at all, activity outside of a handful of large services is very expensive (in money, expertise, or marketing), and machine translation and accessibility are very limited.
  • Security. All of the above, securely, without having to understand anything technical about security, and with lots of technical and cultural guards against technical and non-technical attacks of all kinds.
  • Resilience. All of the above, with minimal interruption and maximal recovery from disaster, from individual to planetary scale.

Three pet outcomes I wish for:

  • Collective wisdom. The web helps make better decisions, at all scales.
  • Commons dominance. Most top sites are free-as-in-freedom. Presently, only Wikipedia (#5) is.
  • Freedom, equality, etc.

Two quotes from the Berners-Lee Q&A that are on the right track:

Getting a nice user interface to a secure system is the art of the century.

Copyright law is terrible.

Gov[ernance]Lab impressions

Friday, March 7th, 2014

First, two excerpts of my previous posts to explain my rationale for this one. 10 months ago:

I wonder the extent to which reform of any institution, dominant or otherwise, away from capture and enclosure, toward the benefit and participation of all its constituents, might be characterized as commoning?

Whatever the scope of commoning, we don’t know how to do it very well. How to provision and govern resources, even knowledge, without exclusivity and control, can boggle the mind. I suspect there is tremendous room to increase the freedom and equality of all humans through learning-by-doing (and researching) more activities in a commons-orientated way. One might say our lack of knowledge about the commons is a tragedy.

26 months ago:

Other than envious destruction of power (the relevant definition and causes of which being tenuous, making effective action much harder) and gradual construction of alternatives, how can one be a democrat? I suspect more accurate information and more randomness are important — I’ll sometimes express this very specifically as enthusiasm for futarchy and sortition — but I’m also interested in whatever small increases in accurate information and randomness might be feasible, at every scale and granularity — global governance to small organizations, event probabilities to empirically validated practices.

I read about the Governance Lab @ NYU (GovLab) in a forward of a press release:

Combining empirical research with real-world experiments, the Research Network will study what happens when governments and institutions open themselves to diverse participation, pursue collaborative problem-solving, and seek input and expertise from a range of people.

That sounded interesting, perhaps not deceivingly — as I browsed the site, open tabs accumulated. Notes on some of those follow.

GovLab’s hypothesis:

When institutions open themselves to diverse participation and collaborative problem solving, they become more effective and the decisions they make are more legitimate.

I like this coupling of effectiveness and legitimacy. Another way of saying politics isn’t about policy is that governance isn’t about effectiveness, but about legitimizing power. I used to scoff at the concept of legitimacy, and my mind still boggles at arrangements passing as “legitimate” that enable mass murder, torture, and incarceration. But our arrangements are incredibly path dependent and hard to improve; now I try to charitably consider legitimacy a very useful shorthand for arrangements that have some widely understood and accepted level of effectiveness. Somewhat less charitably: at least they’ve survived, and one can do a lot worse than copying survivors. Arrangements based on open and diverse participation and collaborative problem solving are hard to legitimate: not only do they undermine what legitimacy is often really about, but it is hard to see how they can work in theory or practice, relative to hierarchical command and control. Explicitly tackling effectiveness and legitimacy separately and together might be more useful than assuming one implies the other, or ignoring one of them. Refutation of the hypothesis would also be useful: many people could refocus on increasing the effectiveness and legitimacy of hierarchical, closed systems.

If We Only Knew:

What are the essential questions that if answered could help accelerate the transformation of how we solve public problems and provide for public goods?

The list of questions isn’t that impressive, but not bad either. The idea that such a list should be articulated is great. Every project ought maintain such a list of essential questions pertinent to the project’s ends!

Proposal 13 for ICANN: Provide an Adjudication Function by Establishing “Citizen” Juries (emphasis in original):

As one means to enhance accountability – through greater engagement with the global public during decision-making and through increased oversight of ICANN officials after the fact – ICANN could pilot the use of randomly assigned small public groups of individuals to whom staff and volunteer officials would be required to report over a given time period (i.e. “citizen” juries). The Panel proposes citizen juries rather than a court system, namely because these juries are lightweight, highly democratic and require limited bureaucracy. It is not to the exclusion of other proposals for adjudicatory mechanisms.

Anyone interested in random selection and juries has to be at least a little interesting, and on the right track. Or so I’ve thought since hearing about the idea of science courts and whatever my first encounter with sortition advocacy was (forgotten, but see most recent), both long ago.

Quote in a quote:

“The largest factor in predicting group intelligence was the equality of conversational turn-taking.”

What does that say about:

  • Mailing lists and similar fora used by projects and organizations, often dominated by loudmouths (to say nothing of meetings dominated by high-status talkers);
  • Mass media, including social media dominated by power law winners?

Surely it isn’t pretty for the intelligence of relevant groups. But perhaps it provides impetus to actually implement measures often discussed when a forum gets out of control (e.g., with volume or flamewars), such as automated throttling, among many other things. On the bright side, there could be lots of low hanging fruit. On the dim side, I’m surely making extrapolations (second bullet especially) unsupported by research I haven’t read!
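
To make “automated throttling” slightly more concrete, a minimal sketch (entirely hypothetical; the class name and thresholds are mine) of a throttle that nudges a forum toward more equal turn-taking by holding posts from anyone dominating recent traffic:

    from collections import Counter

    class TurnTakingThrottle:
        """Hold posts from senders dominating recent traffic (illustrative)."""

        def __init__(self, window=50, share=0.2):
            self.window = window  # how many recent posts to consider
            self.share = share    # max fraction of recent posts per sender
            self.recent = []      # senders of recently accepted posts

        def allow(self, sender):
            counts = Counter(self.recent)
            if (len(self.recent) >= self.window
                    and counts[sender] / len(self.recent) > self.share):
                return False  # over quota: hold for later or for moderation
            self.recent = (self.recent + [sender])[-self.window:]
            return True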

Coordinating the Commons: Diversity & Dynamics in Open Collaborations, excerpt from a dissertation:

Learning from Wikipedia’s successes and failures can help researchers and designers understand how to support open collaborations in other domains — such as Free/Libre Open Source Software, Citizen Science, and Citizen Journalism. [...] To inquire further, I have designed a new editor peer support space, the Wikipedia Teahouse, based on the findings from my empirical studies. The Teahouse is a volunteer-driven project that provides a welcoming and engaging environment in which new editors can learn how to be productive members of the Wikipedia community, with the goal of increasing the number and diversity of newcomers who go on to make substantial contributions to Wikipedia.

Interesting for a few reasons:

  • I like the title, cf. commons coordination (though I was primarily thinking of inter-project/movement coordination);
  • OpenHatchy;
  • I like the further inquiry’s usefulness for research and the researched community;
  • Improving the effectiveness of mass collaboration is important, including for its policy effects.

Back to the press release:

Support for the Network from Google.org will be used to build technology platforms to solve problems more openly and to run agile, real-world, empirical experiments with institutional partners such as governments and NGOs to discover what can enhance collaboration and decision-making in the public interest.

I hope those technology platforms will be open to audit and improvement by the public, i.e., free/open source software. GovLab’s site being under an open license (CC-BY-SA) could be a small positive indicator (perhaps not rising to the level of an essential question for anyone, but I do wonder how release and use of “content” or “data” under an open license correlates with release and use of open source software, if there’s causality in either direction, and if there could be interventions that would usefully reinforce any such).

I’m glad that NGOs are a target. Seems it ought be easier to adopt and spread governance innovation among NGOs (and businesses) than among governments, if only because there’s more turnover. But I’m not impressed. I imagine this could be due, among other things, to my ignorance: perhaps over a reasonable time period non-state governance has improved more rapidly than state governance, or to non-state governance being even less about effectiveness and more about power than is state governance, or to governance being really unimportant for survival, thus a random walk.

Something related I’ll never get around to blogging separately: the 2 year old New Ambiguity of ‘Open Government’ (summary), concerning the danger of allowing the term to denote a government that publishes data, even merely politically insensitive data around service provision, rather than politically sensitive transparency and the ability to demand accountability. I agree about the danger. The authors recommend maintaining distinctions between accountability, service provision, and adaptability of data. I find these distinctions aren’t often made explicit, and perhaps they shouldn’t be: it’d be a pain. But on the activist side, I think most really are pushing for politically sensitive transparency (and some focused on data about service provision might fairly argue such is often deeply political); certainly none want open data to be a means of openwashing. For one data point, I recommend the Oakland chapter of Beyond Transparency. Finally, Stop Secret Contracts seems like a new campaign entirely oriented toward politically sensitive transparency and accountability rather than data release. I hope they get beyond petitions, but I signed.

Unlock federated MediaGoblin hosting revolution game

Saturday, March 1st, 2014

About 16 months after raising $42k to feed the programmers (my post about that campaign), the MediaGoblin team is asking again, with promised features dependent on the total amount raised.

I’m pretty excited about three features. First, at $35k:

Federation: Connect and share with friends and family even if you’re on different MediaGoblin sites! We’ll be adding federation support via the Pump API.

Mostly because this would be a boost to the so far disappointing and fractured federated social web.

Second:

[UNLOCK] Premium hosting reward! If we hit 60k, we’ll add a new reward option: premium hosting!

Doesn’t federation make hosting superfluous? Everyone should run their own server, right? No, those are extremely delusional or elitist claims. I don’t want to run my own server, nor do 7 billion others. Federation (preferably in conjunction with free software, data and identifier portability) enables interoperation and competition among individual-, community-, and commercially-run services. At this stage there seem to be very significant economies of scale (inclusive of marketing!) in running servers. Hopefully someone (the developers would be natural) will realize the necessity of mass hosting of federated services for federation to win.

Third:

[statement] After watching the new MediaGoblin video, i want to play their video game.

[response] I’ve joked about putting a goblin video game as a 500k feature unlock

Here I just wanted to point out how much of MediaGoblin lead developer Christopher Webber’s personality and vision is in the campaign video, assets, and overall scheme. That vision goes pretty far beyond federated media hosting. Free games and art are part of it. But a MediaGoblin game would be a great marketing tie-in solely for the goal of promoting MediaGoblin. I hope this happens: reaching $500k this campaign would be great, but if not, under other circumstances.

Defensive Patent License 1.0 birthday

Saturday, November 16th, 2013

Defensive Patent License version 1.0 turned 0 yesterday. The Internet Archive held a small celebration. The FAQ says the license may be used now:

Sign up and start using the DPL by emailing defensivepatent@gmail.com.

There will be a launch conference 2014-11-07 (rescheduled from the originally announced 2014-02-28) in Berkeley: gratis registration. By that time I gather there should be a list of launch DPL users, a website for registering and tracking DPL users, and a non-profit organization to steward the license, for which the Internet Archive will serve as a 501(c)3 fiscal sponsor.

Loosely organized thoughts follow. But in short:

  • DPL users grant a royalty free license (except for the purpose of cloning products) for their entire patent portfolio, to all other DPL users. This grant is irrevocable, unless the licensee (another DPL user) withdraws from the DPL or initiates patent litigation against any DPL user — but note that the withdrawing or aggressing entity’s grant of patents to date to all other DPL users remains in force forever. (These mechanics are sketched in code just after this list.)
  • Participation is on an entity basis, i.e., a DPL user is an organization or individual. All patents held or gained while a DPL user are included. But the irrevocable license to other DPL users then travels with individual patents, even when transferred to a non-DPL user entity.
  • An entity doesn’t need any patents to become a DPL user.
  • DPL doesn’t replace or conflict with patent peace provisions in modern free/open source licenses (e.g., Apache2, GPLv3, MPL2); it’s a different, complementary approach.
  • It may take years for the pool of DPL users’ patents to be significant enough to gain strong network effects and become a no-brainer for businesses in some industries to join. It may never. But it seems possible, and well worth trying.
  • Immediately, the DPL seems like something that organizations wanting to make a strong but narrow commitment to patent non-aggression (narrow in that it runs only to others making the same commitment) ought to get on board with. Entities that want to make a broader commitment, including those that have already made complementary commitments through free/open source licenses or non-aggression pledges for certain uses (e.g., implementing a standard), should also get on board.
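
The toy sketch promised above: the membership and grant mechanics of the first two bullets restated in code (my reading of DPL 1.0; all names are mine, and this is not legal advice):

    class DPLPool:
        """Toy model of DPL 1.0 cross-licensing mechanics."""

        def __init__(self):
            self.users = set()   # current DPL users
            self.vested = set()  # (grantor, grantee) grants; survive forever

        def join(self, entity):
            # Joining vests portfolio-wide cross-grants between the newcomer
            # and every existing user, in both directions. (Vested grants also
            # travel with individual patents transferred to non-DPL entities,
            # which this entity-level model doesn't represent separately.)
            for other in self.users:
                self.vested.add((entity, other))
                self.vested.add((other, entity))
            self.users.add(entity)

        def leave_or_litigate(self, entity):
            # A withdrawing or aggressing entity loses licenses granted to it,
            # but its own grants to date to all other DPL users remain.
            self.users.discard(entity)
            self.vested = {(g, e) for (g, e) in self.vested if e != entity}

        def licensed(self, grantor, grantee, cloning=False):
            # Royalty free, except for the purpose of cloning products.
            return not cloning and (grantor, grantee) in self.vested

    pool = DPLPool()
    for entity in ("A", "B", "C"):
        pool.join(entity)
    assert pool.licensed("A", "B")
    pool.leave_or_litigate("B")         # B withdraws, or sues a DPL user
    assert not pool.licensed("A", "B")  # B loses licenses from others...
    assert pool.licensed("B", "A")      # ...but B's grants to date remain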

History

Last year I’d read Protecting Open Innovation: The Defensive Patent License as a New Approach to Patent Threats, Transaction Costs, and Tactical Disarmament (by Jennifer Urban and Jason Schultz, also main authors of the DPL 1.0) with interest and skepticism, and sent some small comments to the authors. The DPL 1.0, available for use now, incorporates some changes suggested in A Response to a Proposal for a Defensive Patent License (DPL) (and probably elsewhere; quite a few people worked on the license). Both papers are pretty good reads for understanding the idea and some of the choices made in DPL 1.0.

Two new things I learned yesterday are that the DPL was Internet Archive founder Brewster Kahle’s idea, and that work on the license started in 2009. Kahle had been disturbed that patents with his name on them, which he had been told were obtained for defensive purposes while he was an engineer at Thinking Machines, were later used offensively by an entity that had acquired the patents. This made him wonder if there could be a way for an entity to commit to using patents only defensively. Kahle acknowledged that others have had similar ideas, but the DPL is now born, and it just may be the idea that works.

(No specific previous ideas were mentioned, but a recent one that comes to mind is Paul Graham’s 2011 suggestion of a pledge to not initiate patent litigation against organizations with fewer than 25 employees. Intentionally imprecise, not legally binding, and offering no benefit other than appearing on a web page, it is probably not surprising it didn’t take off. Another is Twitter’s Innovator’s Patent Agreement (2012), in which a company promises an employee to seek their permission for any non-defensive uses of patents in the employee’s name; uptake is unclear. Additional concepts are covered at End Soft Patents.)

Kahle, Urban, and Schultz acknowledged inspiration from the private ordering/carving out of free spaces (for what Urban and Schultz call “open innovation communities” to practice) through public licenses such as the GPL and various Creative Commons licenses. But the DPL is rather different in a few big ways (and details which fall out of these):

  1. Subject of grant: patent vs. copyright
  2. Scope of grant: all subject rights controlled by an entity vs. individual things (patents or works subject to copyright)
  3. Offered to: club participants vs. general public

I guess there will be a tendency to assume the second and third follow strictly from the first. I’m not so sure — I can imagine free/open source software and/or free culture/open content/data worlds which took the entity and club paths (still occasionally suggested) — and I think the assumption would under-appreciate the creativity of the DPL.

DPL and free/open source software

The DPL is not a replacement for patent clauses in free/open source licenses, which are conditions of public copyright licenses with a different subject, scope, and audience (see previous). Additionally, the DPL’s non-grant for cloning products, the scope of which I do not understand, probably further reduces any overlap between modern FLOSS license patent provisions and the DPL that may exist. But I see no conflict, and some complementarity.

A curiosity would be DPL users releasing software under free software licenses without patent provisions, or even with explicit patent non-grants, like CC0. A complementary curiosity would be free/open source projects which only accept contributions from DPL users. Yet another would be a new software license only granting copyright permissions to DPL users (this would almost certainly not be considered free/open source), or releasing DPL users from some license conditions (this could be done as an exception to an existing license).

The DPL isn’t going to directly solve any patent problems faced by free/open source software (e.g., encumbered ‘standards’) any time soon. But, to the extent the DPL decreases the private value (expected rents) of patents and encourages more entities to not see patents as useful for collecting rents, it ought push the problems away, just a bit. Even if software patents were to evaporate tomorrow (as they should!), users of free/open source software would encounter patents impacting all sorts of devices running said software; patents would still be a problem for software freedom.

I hope that many free/open source software entities become DPL users, for the possible slowly accruing benefits above, but also to make common cause with others fighting for (or reforming slightly towards) intellectual freedom. Participation in broader discourse by free/open source software entities is a must, for the health of free software, and the health of free societies.

End Soft Patents’ entry on the DPL will probably be a good place to check years hence on how the DPL is viewed from the perspective of free/open source software.

DPL “enforcement”

In one sense, the DPL requires no enforcement — it is a grant of permission, which one either takes or not by also becoming a DPL user. But, although it contains provisions to limit obvious gaming, if it becomes significant, doubtless some entities will try to push its boundaries, perhaps by obfuscating patent ownership, or interpreting “cloning” expansively. Or, the ability to leave with 180 days notice could prove to be a gaping hole, with entities taking advantage of the pool until they are ready to file a bunch of patents. Or, the lack of immediate termination of licenses from all DPL users and the costliness of litigation may mean the DPL pool does little to restrain DPL users from leaving, or worse, initiating litigation (or threatening to do so, or some other extortion) against other DPL users.

Perhaps the DPL Foundation with a public database of DPL users will play a strong coordinating function, facilitating uncovering obfuscated ownership, disseminating notice of bad behavior, and revocation of licenses to litigators and leavers.

DPL copyleft?

In any discussion of X remotely similar to free/open source software, the question of “what is copyleft for X?” comes up — and one of the birthday presenters mentioned that the name DPL is a hat tip to the GPL — is the DPL “copyleft for patents”?

It does have reciprocality — only DPL users get DPL grants from other DPL users. I will be surprised if at some point someone doesn’t pejoratively say the DPL is “viral” — because the license to DPL users stays with patents even if they are transferred to a non-DPL user entity. A hereditary effect more directly analogous to the GPL might involve a grant conditioned on a licensee’s other patents which read on the licensed patent being similarly licensed, but this seems ineffective at first blush (and has been thought of and discarded innumerable times).

The DPL doesn’t have a regulatory side. Forced revelation, directly analogous to the GPL’s primary regulatory side, would be the obvious thing to investigate for a DPL flavor, but the most naive requirement (an entity must reveal all patentable inventions in order to remain a DPL user in good standing) would be nearly impossible to comply with, or enforce. It may be more feasible to require revelation of designs and documentation for products or services (presumably source code, for software) that read on any patents in the DPL pool. This would constitute a huge compliance and enforcement challenge, and would probably make it very difficult to bootstrap a significant pool, but it would be an extremely interesting regulatory experiment if it gained any traction.

DPL “Troll-proof”?

The slogan must be taken with a mountain of salt. Still, the DPL, if widely adopted, would mitigate the troll problem. Because grants to DPL users are irrevocable, and follow a patent upon changes of ownership, any patent with a grant to DPL users will be less valuable for a troll to acquire, because there are fewer entities for the troll to sue. To the extent DPL adoption reduces patenting in an industry, or overall, there will be less ammunition available for trolls to buy and use to hold anyone up. In the extreme of success, all practicing entities become DPL users. Over a couple decades, the swamp is drained.

Patents are still bad

The only worrisome thing I heard yesterday (and I may have missed some nuance) was the idea that it is unfortunate that many engineers, and participants in open innovation communities in particular, see patents as unethical, and that as free/open source software people learned to use public copyright licenses (software was not subject to copyright until 30-40 years ago), they and others should learn to use appropriate patent tools, i.e., the DPL.

First, the engagement of what has become free/open source software, open access, open data, etc., with copyright tools, has not gone swimmingly. Yes, much success is apparent, but compared to what? The costs beg to be analyzed: isolation, conservatism, internal fighting, gaming of tools used, disengagement from policy and boundary-pushing, reduction (and stunting) of ethics to license choice. My ideal, as hinted above, would be for engagement with the DPL to help open innovation communities escape this trap, rather than adding to its weight.

Second, in part because extreme “drain the swamp” level of success is almost certainly not going to be achieved, abolition (of software patents) is the only solution. And beyond software, the whole system should be axed. Of course this means not merely defending innovators, including open innovation communities, from some expense and litigation, but moving freedom and equality to the top of our innovation policy ordering.

DPL open infrastructure?

In part to make the DPL attractive to existing open innovation communities, I really hope the DPL Foundation will make everything it does free and open with traditional public copyright and publishing tools:

  • Open content: the website and all documentation ought be licensed under CC0 (though CC-BY or CC-BY-SA would be acceptable).
  • Open source/open service: source code of the eventual website, including applications for tracking DPL users, should be developed in a public repository, and licensed under either Apache2 or AGPLv3 (the latter if the Foundation wishes to force those using the software elsewhere to reveal their modifications).
  • Open data: all data concerning DPL users, licensed patents, etc., should be machine-readable, downloadable in bulk, and released under CC0.

DPL readability

I found the DPL surprisingly brief and readable. My naive guess, given a description of how it works, would have been something far longer and more inscrutable. But the DPL actually compares to public licenses very favorably on automated readability metrics. Table below shows these for DPL 1.0 and some well known public copyright licenses (lower numbers indicate better readability, except in the case of Flesch; Chars/(Flesch>=1) is my gross metric for how painful it is to read a document; see license automated readability metrics for an explanation):

SHA1 License Characters Kincaid ARI Coleman-Liau Fog Lix SMOG Flesch Chars/(Flesch>=1)
8ffe2c5c25b85e52f42fcde68c2cf6a88b7abd69 Apache-2.0 8310 16.8 19.8 15.1 20.7 64.6 16.6 33.6 247
20dc61b94cfe1f4ba5814b340095b4c3fa23e801 CC-BY-3.0 14956 16.1 19.4 14.1 20.4 66.1 16.2 40.0 373
bbf850220781d9423be9e478fbc07098bfd2b5ad DPL-1.0 8256 15.1 18.9 15.7 18.4 65.9 15.0 40.6 203
0473f7b5cf37740d7170f29232a0bd088d0b16f0 GPL-2.0 13664 13.3 16.2 12.5 16.2 57.0 12.7 52.9 258
d4ec7d0b46077b89870c66cb829457041cd03e8d GPL-3.0 27588 13.7 16.0 13.3 16.8 57.5 13.8 47.2 584
78fe0ed5d283fd1df26be9b4afe8a82124624180 MPL-2.0 11766 14.7 16.9 14.5 17.9 60.5 14.9 40.1 293
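
For reference, a rough sketch of how such numbers can be computed. The syllable counter is a crude heuristic, “DPL-1.0.txt” is a hypothetical local copy of the license text, and I compute Chars/(Flesch>=1) as characters divided by the Flesch score clamped to at least 1, which is my reading of that metric:

    import re

    def syllables(word):
        # Crude heuristic: count groups of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def metrics(text):
        n_sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z]+", text)
        n_words = max(1, len(words))
        n_syllables = sum(syllables(w) for w in words)
        chars = len(text)
        flesch = (206.835 - 1.015 * (n_words / n_sentences)
                  - 84.6 * (n_syllables / n_words))
        kincaid = (0.39 * (n_words / n_sentences)
                   + 11.8 * (n_syllables / n_words) - 15.59)
        return {
            "Characters": chars,
            "Kincaid": round(kincaid, 1),
            "Flesch": round(flesch, 1),
            "Chars/(Flesch>=1)": round(chars / max(flesch, 1)),
        }

    with open("DPL-1.0.txt") as f:
        print(metrics(f.read()))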

Automated readability metrics are probably at best an indicator for license drafters, but offer no guidance on actually improving readability. Last month Luis Villa (incidentally, on the DPL’s advisory board) reviewed a manual of style for contract drafting by editing Twitter’s Innovator’s Patent Agreement per the manual’s advice. I enjoyed Villa’s post, but have not attempted to discern (and discernment may be beyond my capability) how closely DPL 1.0 follows the manual’s advice. By the way, Villa’s edit of the IPA per the manual did improve its automated readability metrics:

SHA1 License Characters Kincaid ARI Coleman-Liau Fog Lix SMOG Flesch Chars/(Flesch>=1)
8774cfcefbc3b008188efc141256b0a8dbe89296 IPA 4778 19.6 24.0 15.5 22.7 75.8 17.0 27.1 176
b7a39883743c7b1738aca355c217d1d14c511de6 IPA-MSCD 4665 17.4 21.2 15.6 20.4 70.2 16.0 32.8 142

Net

Go back to the top, read the DPL, and get your entity (and other entities) in the queue to be DPL users at its launch! Or, explain to me why this is a bad idea.