Hello World Intellectual Freedom Organization

Saturday, April 25th, 2015

Today I’m soft launching an initiative that I’ve been thinking about for 20 years, obtained a domain name for in 1998, blogged about once in 2004, and have been exploring on this blog for the last few years without naming it. See the first items in my annual thematic doubt posts for 2013 and 2014: “protecting and promoting intellectual freedom, in particular through the mechanisms of free/open/knowledge commons movements, and in reframing information and innovation policy with freedom and equality outcomes as top.”

I call it the World Intellectual Freedom Organization (WIFO).

Read about its theory, why a new organization, proposed activities, and how you can help/get involved.

Why today? Because April 26 is World Intellectual Freedom Day, occupying and displacing World Intellectual Property Day, just as intellectual freedom must occupy and displace intellectual property for a good future. Consider this 0th World Intellectual Freedom Day another small step forward, following last year’s Without Intellectual Property Day.

Why a soft launch? Because I’m eager to be public about WIFO, but there’s tons of work to do before it can properly be considered launched. I’ve been getting feedback from a handful of people on a quasi-open fellowship proposal for WIFO (that’s where the activities link above points to) and apologize to the many other people I should’ve reached out to. Well, now I’m doing that. I want your help in this project of world liberation!

Video version of my proposal at the Internet Archive or YouTube. My eyes do not lie: I am reading, in an attempt to fit too much material into 5 minutes.

I’ll probably blog much less about “IP” and commons/free/libre/open issues here from now on, especially after opening a WIFO blog (for now there’s a Discourse forum; most of the links above point there). Not to worry, I am overflowing with idiosyncratic takes on everything else, and will continue to post accordingly here, as much as time permits. ☻

Be sure to celebrate the 0th World Intellectual Freedom Day, even if only momentarily and with your lizard brain.

2 great Document Freedom Day announcements

Thursday, March 26th, 2015

Yesterday (March 25) was again Document Freedom Day, a celebration of open standards. Rather than my usual critical cheering, this year I took to adding all of my pertinent posts to a free/libre/open formats/protocols/standards category and want to highlight two exciting announcements:

(1) IETF NetVC BoF notes, slides:

Goals for the Proposed WG

  • Development of a video codec that is:
    • Optimized for real-time communications over the public Internet
    • Competitive with or superior to existing modern codecs
    • Viewed as having IPR licensing terms that allow for wide implementation and deployment
    • Developed under the IPR rules in BCP 78 (RFC 5378) and BCP 79 (RFCs 3979 and 4879)
  • Replicate the success of the CODEC WG in producing the Opus audio codec.

For more on why this is exciting, see Opus! and “I would love it if all patents evaporated” (WebRTC). Appropriately, yesterday also brought another blog-like post (discussion) on the development of the Daala codec, which could form the basis of the hoped-for IETF standard.

(2) LibreOffice Online is coming. If successful it will fill a major gap in the free software product line. I worried about this gap the last time I congratulated LibreOffice on another release.

wikidata4research4all

Sunday, December 21st, 2014

Recently I’ve uncritically cheered for Wikidata as “rapidly fulfilling” hopes to “turn the universal encyclopedia into the universal database while simultaneously improving the quality of the encyclopedia.” In April I uncritically cheered for Daniel Mietchen’s open proposal for research on opening research proposals.

Let’s combine the two: an open proposal for work toward establishing Wikidata (including its community, data, ontologies, practices, software, and external tools) as a “collaborative hub around research data” responding to a European Commission call on e-infrastructures. That would be Wikidata for Research (WD4R), instigated by Mietchen, who has already assembled an impressive set of partner institutions and an outline of work packages. The proposal is being drafted in public (you can help) and will be submitted January 14.

4all

The proposal will be strong on its own merits, and very well aligned with the stated desired outcomes from the EC call, and the open proposal dogfood angle is also great. I added 4all to this post’s title because I suspect WD4R will be great for pushing Wikidata toward realizing the aforementioned “universal database” hopes (which again means not just the data, but community, tools, etc.; “virtual research environment” is one catch-all term) and will make Wikidata much more useful for “research” most broadly construed (e.g., by students, journalists, knowledge workers, anyone), potentially much faster than would happen otherwise.

My suspicion has two bases (please correct me if I’m wrong about either):

  1. A database or virtual environment “for research” might give the impression of someplace to dump data or perform experiments. Maybe that would be appropriate for Wikidata in some instances, but the overwhelming research-supporting use would seem to be mass collaboration in consolidating, annotating, and correcting data and ontologies which many researchers (and researchers-broadly-construed, everyone) can benefit from, either querying or referencing directly, or extracting and using elsewhere (see the sketch after this list). The pre-existing Gene Wiki project, which is beginning to use Wikidata, is an example of such useful-to-all collections (as referenced in the WD4R pages).
  2. One of the proposed work packages is to identify and work on features needed for research but not on, or not prioritized on, the Wikidata development plan. I suspect other Wikimedia projects can tremendously benefit from Wikidata integration without Wikidata itself or external tools supporting complex queries and reporting that would be called for by a virtual research environment — and also called for to realize “universal database” hopes. Wikidata’s existing plan looks good to me; here I’m just saying WD4R might help it be even better, faster.
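
To make the “querying or referencing directly, or extracting and using elsewhere” point in item 1 concrete, here is a minimal sketch of pulling a single fact out of Wikidata over its public wbgetentities API. The particular item and property (Q42, Douglas Adams; P569, date of birth) are only illustrative, and error handling is omitted.

```python
# Minimal sketch: fetch one Wikidata item and read one claim via the
# public MediaWiki/Wikibase API. Illustrative only; no error handling.
import requests

API = "https://www.wikidata.org/w/api.php"

def get_entity(qid):
    """Fetch a Wikidata item (labels and claims) as parsed JSON."""
    params = {
        "action": "wbgetentities",
        "ids": qid,
        "props": "labels|claims",
        "languages": "en",
        "format": "json",
    }
    return requests.get(API, params=params).json()["entities"][qid]

entity = get_entity("Q42")                      # Q42 = Douglas Adams
label = entity["labels"]["en"]["value"]
# Claims are lists of statements; take the first value of P569 (date of birth).
born = entity["claims"]["P569"][0]["mainsnak"]["datavalue"]["value"]["time"]
print(label, born)   # e.g. Douglas Adams +1952-03-11T00:00:00Z
```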

The previously linked Gene Wiki post includes:

For more than a decade many different groups have proposed and many have implemented solutions to this challenge using standards and techniques from the Semantic Web. Yet, today, the vast majority of biological data is still accessed from individual databases such as Entrez Gene that make no attempt to use any component of the Semantic Web or to otherwise participate in the Linked Open Data movement. With a few notable exceptions, the data silos have only gotten larger and problems of fragmentation worse.
[…]
Now, we are working to see if Wikidata can be the bridge between the open community-driven power of Wikipedia and the structured world of semantic data integration. Can the presence of that edit button on a centralized knowledge base associated with Wikipedia help the semantic web break through into everyday use within our community?

I agree that massive centralized commons-oriented resources are needed for decentralization to progress (the linked argument is analogous but not the same — linked open data : federation :: data silos : messaging silos).

Check out Mietchen’s latest WD4R blog post and the WD4R project page.

prioritize(projects, freedom_for_all_computer_users)

Monday, December 8th, 2014

Last week the Free Software Foundation published its annual appeal, which includes the following:

In another 30 years, we believe that we can achieve our goal. We believe that free software can be everywhere, and that proprietary software can go the way of the dinosaur. With the experience we’ve gained, and our community surrounding us, we can win this.

My immediate reaction: I’d love to see the last sentence expanded. How exactly?

Sadly I do not live in a world that laughs at any fundraising appeal lacking an explicit theory of change and only esteems those that one can bet on. At least the FSF has a goal. Perhaps its surrounding community can figure out what it will take to achieve that goal.

Helping “the FSF stay strong for 30 more years” is plainly insufficient, though of course I hope the FSF does stay strong for decades and encourage helping financially. The entire free software movement on its current trajectory is insufficient; some of its staunchest advocates predict a “dark ages” of software freedom (e.g., Bradley Kuhn, Stefano Zacchiroli).

Since 2005 the FSF has published a list of high priority free software projects in order “to foster work on projects that are important for increasing the adoption and use of free software and free software operating systems.”

Today the FSF announced a review of this list. Excerpt:

Undoubtedly there are thousands of free software projects that are high priority, each having potential to displace non-free programs for many users, substantially increasing the freedom of those users. But the potential value of a list of High Priority Free Software Projects maintained by the Free Software Foundation is its ability to bring attention to a relatively small number of projects of great strategic importance to the goal of freedom for all computer users.

[…]

Keep in mind that not every project of great strategic importance to the goal of freedom for all computer users will be a software development project. If you believe other forms of activism, internal or external (e.g., making free software communities safe for diverse participants, mandating use of free software in the public sector), are most crucial, please make the case and suggest such a project!

I hope the announcement text indicates the possibility of exploiting the review and list to encourage debate about how to achieve the FSF’s goal of software freedom for all over the next decades, and that the how might (must, in my view) go far beyond hacking of code (and secondarily, copyright). How can demand for software freedom be both increased and made more effective? Same for supply, inclusive of distribution and marketing?

Send your suggestions to hpp-feedback@gnu.org or better yet post publicly. (I’m on the review committee.)

Because it is undoubtedly out of scope for the above activity, I’ll note here a project I consider necessary for the FSF’s goal to become plausible: question software freedom.

The “dark ages” links above largely concern “the cloud”, the topic of the other FSF-related committee I’ve participated in, over 6 years ago (the timing correctly implies that effort was not very influential). I hope to post an assessment and summary of my current take on the topic in the near future.

Snowdrift

Sunday, November 30th, 2014

Co-founders David Thomas and Aaron Wolf (the Woz and Jobs of the project) have been working on Snowdrift.coop for at least 2 years (project announcement thread). I’ve been following their progress since then, and have occasionally offered advice (including on the linked thread).

Snowdrift is a crowdfunding platform for ongoing (as opposed to one-off) funding, with scaled (as opposed to thresholded or unqualified) contributions, exclusively for free/open/libre (as opposed to unconditioned, mostly non-open) outputs. These features raise my interest:

  • I’ve been eager to see more nuanced crowdfunding arrangements tried since before relatively simple one-off threshold systems became popular — probably in part due to their simplicity. Snowdrift’s mechanism is interesting, and has been criticized (see linked thread) for its complexity. It’ll be fun to see it tried out, and simplified, or even made more complex, as warranted.
  • If Snowdrift were to become a dominant platform for funding free/libre/open projects, scaling (contributors increase their contributions as more people contribute; see the toy sketch after this list) could help create clear winners among the proliferation of such projects.
  • Today’s crowdfunding platforms were influenced (by now, mostly indirectly) by Kelsey and Schneier’s “Street Performer Protocol” paper, which set out to devise an alternative funding system for public domain works. But most crowdfunded works are not in the commons, indicating a need for better coordination of street patrons.
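
As promised above, a toy illustration of the scaled idea. This is not Snowdrift’s actual formula (which is more involved and has evolved over time); the per-patron rate below is made up purely to show how individual pledges and project totals grow together with the crowd.

```python
# Toy model of "scaled" crowdfunding: each patron pledges a small amount per
# fellow patron each month, so pledges and totals grow with the crowd.
# The rate is invented for illustration; Snowdrift's real mechanism differs.

def monthly_pledge(patrons, rate_per_patron=0.001):
    """One patron's monthly payment: rate times the number of other patrons."""
    return rate_per_patron * (patrons - 1)

def project_total(patrons, rate_per_patron=0.001):
    """Total monthly funding across all patrons."""
    return patrons * monthly_pledge(patrons, rate_per_patron)

for n in (10, 100, 1000):
    print(n, round(monthly_pledge(n), 3), round(project_total(n), 2))
# 10 patrons:   ~$0.009 each,   ~$0.09/month total
# 100 patrons:  ~$0.099 each,   ~$9.90/month total
# 1000 patrons: ~$0.999 each, ~$999.00/month total
```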

Snowdrift has additional interesting features, including organization as a cooperative, an honor code that goes beyond free/libre/open requirements, and being developed in the programming language Haskell. I’ve barely mentioned these things in the past, but they’re all interesting — alternative institutional arrangements, post-software-freedom, safety. The Snowdrift wiki has pages covering many of these topics and more in depth. They’ve also generally chosen to develop an integrated platform rather than to use existing software (e.g., for wiki, discussion, issues, mailing list) except for revision control hosting. Clearly Snowdrift is not trying to innovate in only one dimension.

Now, Snowdrift is doing a “traditional” one-off crowdfunding drive in order to get itself to production, such that the project and other free/libre/open projects can be funded in an ongoing fashion using the Snowdrift platform and mechanism.

Donate, share, and critique if you’re a fan of interesting mechanisms and freedom.

Ubuntu Ten

Thursday, October 23rd, 2014

Retrospective on 10 years of Ubuntu (LWN discussion). I ran Ubuntu on my main computer from 2005 to 2011. I was happy to see Ubuntu become a “juggernaut” and, like many, I hoped for it to become mainstream, largely indicated by major vendor preinstallation. The high point for me, which I seem to have never blogged about, was purchasing a very well priced Dell 1420n with Ubuntu preinstalled in 2007.

But the juggernaut stalled at the top of the desktop GNU/Linux distribution heap, which isn’t very high. Although people have had various complaints about Ubuntu and Canonical Ltd., as I’ve written before my overriding disappointment is that they haven’t been much more successful. There are a couple tiny vendors that focus exclusively or primarily on shipping Ubuntu-preinstalled consumer hardware, and Dell or another major vendor occasionally offers something — Dell has had a developer edition Ubuntu preinstall for a couple years, usually substantially out of date, as the current offering is now.

Canonical seems to have followed Red Hat and others in largely becoming an enterprise/cloud servicing company, though apparently they’re still working on an Ubuntu flavor for mobile devices (and I haven’t followed, but I imagine that Red Hat still does some valuable engineering for the desktop). I wish both companies ever more success in these ventures — more huge companies doing only or almost only open source are badly needed, even imperfect ones.

For Ubuntu fans, this seems like a fine time to ask why it hasn’t been even more successful. Why hasn’t it achieved consistent and competitive mainstream vendor distribution? How much blame, if any, can be laid on Canonical’s stumbles with respect to free/open source software? It seems to me that a number of Canonical products would have been much more likely to become dominant had they been open source from the beginning (Launchpad, Ubuntu One) or not required a Contributor License Agreement (bzr, Upstart, Mir), that those choices would not then have alienated a portion of the free/open source software community, and that the world would overall be a better place had most of those products won — the categories of the first two remain dominated by proprietary services, and the latter three might have gained widespread adoption sooner than the things that eventually did or will probably win (git, systemd, Wayland).

But taking a step back, it’s really hard to see how these stumbles (that’s again from an outsider free/open source perspective; maybe they are still seen as having been the right moves at the time inside Canonical; I just don’t know) might have contributed in a major way to the lack of mainstream success. Had the stumbles been avoided, perhaps some engineering resources would have been better allocated or increased, but unless reallocated with perfect hindsight as to what the technical obstacles to mainstream adoption were — an impossibility — I doubt they made much of a difference. What about alienation of a portion of the free/open source community? Conceivably, had they (we) been more enthusiastic, more consumer lobbying/demand for Ubuntu preinstalls would have occurred, and tipped a balance — but that seems like wishful thinking, requiring a level of perfect organizing of GNU/Linux fan consumer demand that nobody has achieved. I’d love to believe that had Canonical stuck closer to a pure free/open source software path, it’d have achieved greater mainstream success, but I just don’t see much of a causal link. What are the more likely causes? I’d love to read an informed analysis.

For Ubuntu detractors, this seems like a fine time to ask why Ubuntu has been a juggernaut relative to your preferred GNU/Linux distribution. If you’re angry at Canonical, I suggest your anger is misdirected — you should be angry instead that your preferred distribution hasn’t managed to do marketing and distribution as well as it needed to, on its own terms — and figure out why that is. Better yet, form and execute on a plan to achieve the mainstream success that Ubuntu hasn’t. Otherwise in all likelihood it’s an Android and ChromeOS (and huge Windows legacy, with some Apple stuff in between) world for a long time to come. I’d love to read a feasible plan!

LegacyOffice?

Wednesday, July 30th, 2014

LibreOffice 4.3 announcement, release notes with new feature screenshots, developer perspective. Perhaps most useful, a feature comparison of LibreOffice 4.3 and Microsoft Office 2013.

Overall a great six month release. Coming early next year: 4.4.

Steady progress is also being made on policy. “The default format for saving [UK] government documents must be Open Document Format (ODF)” — the genuinely open standard used by LibreOffice; Glyn Moody has a good writeup. I occasionally see news of large organizations migrating to LibreOffice, most recently the city of Toulouse. Hopefully many more will manage to escape “effective captivity” to a single vendor (Glyn Moody for that story too).

(My take on the broad importance of open policy and software adoption.)

Also, recent news of work on a version of LibreOffice for Android. But nothing on LibreOffice Online (in a web browser), which as far as I can tell remains a prototype. WebODF is an independent implementation of ODF viewing and editing in the browser. Any of these probably require lots of work to be as effective a collaboration solution as Google Docs — much of the work outside the editing software, e.g., integration with “sharing” mechanisms (e.g., WebODF and ownCloud) and ready availability of deployments of those mechanisms (Sandstorm is promising recent work on deployment, especially security).

From what I observe, Google Docs has largely displaced Microsoft Office (except for large documents or those heavily formatted for print or facsimile), though I guess that’s not the case in large organizations with internal sharing mechanisms. I suspect Google Docs (especially spreadsheets) has also expanded the use of “office” software, in part replacing wiki use cases. Is there any reason to think that free/open source software isn’t as far behind now as it was in 2000, before the open source release of OpenOffice.org, LibreOffice’s predecessor?

Still, I consider LibreOffice one of the most important free software projects. I expect it will continue to be developed and used on millions of “legacy” desktops for decades after captivity to Microsoft is long past, indeed long after desktop versions of Microsoft Office are EoL’d. Hopefully LibreOffice’s strong community, development, governance, and momentum (all vastly improved over OpenOffice.org), in combination with open policy work (almost non-existent in 2000) and other projects, will obtain much better than even this valuable result.

Open policy for a secure Internet-N-Life

Saturday, June 28th, 2014

In (In)Security in Home Embedded Devices, Jim Gettys says software needs to be maintained for decades, considering where it is being deployed (e.g., embedded in products with multi-decade lifetimes, such as buildings) and the criticality of some of that software, an unpredictable attribute — a product might become unplanned “infrastructure”, for example, if it is widely deployed and other things come to depend on it. Without maintenance, including deployment of updates in the field, software (and thus the systems it is embedded in) becomes increasingly insecure as vulnerabilities are discovered (he cites a honeymoon period enjoyed by new systems).

This need for long-term maintenance and field deployment implies open source software and devices that users can upgrade — maintenance needs to continue beyond the expected life of any product or organization. “Upgrade” can also mean “replace” — perhaps some kinds of products should be more modular, with open designs, so that parts that are themselves embedded systems can be swapped out. (Gettys didn’t mention it, but replacement can be total. Perhaps “planned obsolescence” and “throwaway culture” have some security benefits. I suspect the response would be that many things continue to be used for a long time after they were planned to be obsolete, and after most of their production-run siblings are discarded.)

But these practices are currently rare. Product developers do not demand source from chip and other hardware vendors, and thus ship products with “binary blob” hardware drivers for the Linux kernel which cannot be maintained, often based on a kernel years out of date when the product ships. There is a Linux kernel near-monoculture for many embedded systems, increasing the security threat. Many problems do not depend on hardware vendor cooperation, ranging from unintentionally or lazily not providing source needed for the rest of the system, to intentionally shipping proprietary software, to intentionally locking down devices to prevent user updates. Product customers do not demand long-term secure devices from product developers. There is little effort to fund commons-oriented embedded development (in contrast with Linux kernel and other systems development for servers, which many big companies fund).

Gettys is focused on embedded software in network devices (e.g., routers), as network access is critical infrastructure much else depends on, including the problem at hand: without network access, many other systems cannot be feasibly updated. He’s working on CeroWrt, a cutting-edge version of the OpenWrt firmware, either of which is several years ahead of what typically ships on routers. A meme Gettys wishes to spread, the earliest instance of which I could find is on cerowrt-devel, with a harsh example coming the next week:

Friends don’t let friends run factory firmware.

Cute. This reminds me of something a friend said in a group discussion that touched on security and embedded-in-body (or perhaps it was mind-embedded-in) systems, along the lines of “I wouldn’t run (on) an insecure system.” Or malware would give you a bad trip.

But I’m ambivalent. Most people, thus most friends, don’t know what factory firmware is. Systems need to be much more secure (for the long term, including all that implies) as shipped. Elite friend advice could help drive demand for better systems, but I doubt “just say no” will help much — its track record for altering mass outcomes, e.g., with respect to proprietary software or formats, seems very poor.

In Q&A someone asked about centralized cloud silos. Gettys doesn’t like them, but said without long-term secure alternatives that can be deployed and maintained by everyone there isn’t much hope. I agree.

You may recognize open source software and devices that users can upgrade above as roughly the conditions of GPL-3.0. Gettys mentioned this and noted:

  • It isn’t clear that copyright-based conditions are an effective mechanism for enforcing these requirements. (One reason I say copyleft is a prototype for more appropriate regulation.)
  • Of “life, liberty, and pursuit of happiness”, free software has emphasized the latter two, but nobody realized how important free software would be for living one’s life given the extent to which one interacts with and depends on (often embedded) software. In my experience people have realized this for many years, but it should indeed move to the fore.

Near the end Gettys asked what role industry and government should have in moving toward safer systems (and skip the “home” qualifier in the talk title; these considerations are at least as important for institutions and large-scale infrastructure). One answer might be in open policy. Public, publicly-interested, and otherwise coordinated funders and purchasers need to be convinced there is a problem and that it makes sense for them to demand that their resources help shift the market. The Free Software Foundation’s Respects Your Freedom criteria (ignoring the “public relations” item) are a good start on what should be demanded for embedded systems.

Obviously there’s a role for developers too. Gettys asked how to get beyond the near Linux kernel monoculture, mentioning BSD. My ignorant wish is that developers wanting to break the monoculture instead try to build systems using better tools, at least better languages (not that any system will reduce the need for security in depth).

Here’s to a universal, secure, and resilient web and technium. Yes, these features cost. But I’m increasingly convinced that humans underinvest in security (not only computer, and at every level), especially in making sure investments aren’t theater or worse.

{ "title" : "API commons II" }

Tuesday, June 24th, 2014

API Voice:

Those two posts by Kin Lane (API Evangelist is another of his sites) extract bits of my long post on these and related matters, as discussed at API Con. I’m happy that even one person obtained such clear takeaways from reading my post or attending the panel.

Quick followups on Lane’s posts:

  • I failed to mention that never requiring permission to implement an API must include not needing permission to reverse engineer or discover an undocumented API. I do not know whether what this implies in the context of web service APIs has been thoroughly explored.
  • Lane mentions a layer that I missed: the data model or schema. Or models, including for inputs and outputs of the API, and of whatever underlying data it is providing access to. These may fall out of other layers, or may be specified independently.
  • I reiterate my recommendation of the Apache License 2.0 as currently the best license for API specifications. But I really don’t want to argue with pushing CC0, which has great expressive value even if it isn’t absolutely perfect for the purpose (explicit non-licensing of patents).

API commons

Thursday, May 29th, 2014

Notes for panel The API Copyright Emergency: What’s Next? today at API Con SF. The “emergency” is the recent decision in Oracle v. Google, which I don’t discuss directly below, though I did riff on the ongoing case last year.

I begin with, and come back to a few times, Creative Commons licenses, as I was on the panel as a “senior fellow” for that organization, but apart from such emphasis and framing, this is more or less what I think. I got about 80% of the below in on the panel, but hopefully it is still worth reading even for attendees.

A few follow-up thoughts after the notes.

Creative Commons licenses, like other public licenses, grant permissions around copyright, but as CC’s statement on copyright reform concludes, licenses “are not a substitute for users’ rights, and CC supports ongoing efforts to reform copyright law to strengthen users’ rights and expand the public domain.” In the context of APIs, default policy should be that independent implementation of an API never require permission from the API’s designer, previous implementer, or other rightsholder.

Without such a default policy of permission-free innovation, interoperability and competition will suffer, and the API community invites late and messy regulation at other levels intending to protect consumers from resulting lock-in.

Practically, there are things API developers, service providers, and API consumers can do and demand of each other, both to protect the community from a bad turn in default policy, and to go further in creating a commons. But using tools such as those CC provides, and choosing the right tools, requires looking at what an API consists of, including:

  1. API specification
  2. API documentation
  3. API implementations, server
  4. API implementations, client
  5. Material (often “data”) made available via API
  6. API metadata (e.g., as part of an API directory)

(depending on construction, these could all be generated from an annotated implementation, or could each be separate works)

and what restrictions can be pertinent:

  1. Copyright
  2. Patent

(many other issues can arise from providing an API as a service, e.g., privacy, though those are usually not in the range of public licenses and are orthogonal to API “IP”, so I’ll ignore them here)

1-4 are clearly works subject to copyright, while 5 and 6 may or may not be (e.g., hopefully not if purely factual data). Typically only 3 and 4 might be restricted by patents.

Standards bodies typically do their work primarily around 1. Relatively open ones, like the W3C, obtain agreement from all contributors to the standard to permit royalty-free implementation of the standard by anyone, typically including a patent license and permission to prepare and perform derivative works (i.e., copyright, to the extent such permission is necessary). One option you have is to put your API through an existing standards organization. This may be too heavyweight, or may be appropriate if your API is really a multi-stakeholder thing with multiple peer implementations; the W3C now has a lightweight community group venue which might be appropriate. The Open Web Foundation’s agreements allow you to take this approach for your API without involvement of an existing standards body. Lawrence Rosen has/will talk about this.

Another approach is to release your API specification (and necessarily 2-4 to the extent they comprise one work, ideally even if they are separate) under a public copyright license, such as one of the CC licenses, the CC0 public domain dedication, or an open source software license. Currently the most obvious choice is the Apache License 2.0, which grants copyright permission as well as including a patent peace clause. One or more of the CC licenses are sometimes suggested, perhaps because specification and documentation are often one work, and the latter seems like a “creative” work. But keep in mind that CC does not recommend using its licenses for software, and instead recommends using an open source software license (such as Apache): no CC license includes explicit patent permission, and depending on the specific CC license chosen, it may not be compatible with software licenses, contrary to the goal of granting clear permission for independent API implementation, even in the face of a bad policy turn.

One way to go beyond mitigating “API copyrightability” is to publish open source implementations, preferably production-quality, though reference implementations are better than nothing. These implementations would be covered by whatever copyright and patent permissions are granted by the license they are released under — again, Apache 2.0 is a good choice, and for software implementations CC licenses should not be used; other software licenses such as [A]GPL might be pertinent depending on business and social goals.

Another way to create a “thick” API commons is to address material made available via APIs, and metadata about APIs. There, CC tools are likely pertinent, e.g., use CC0 for data and metadata to ensure that “facts are free”, as they ought be in spite of other bad policy turns.
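
For illustration only, here is a hypothetical directory entry sketching how the per-layer recommendations above might be recorded machine-readably. The field names and API name are invented; only the license choices follow this post (Apache-2.0 for specification, documentation, and implementations; CC0 for data and metadata).

```python
# Hypothetical API directory entry; structure and names are invented,
# license choices follow the per-layer recommendations above.
import json

api_entry = {
    "name": "example-books-api",                  # made-up API
    "specification": {"license": "Apache-2.0"},
    "documentation": {"license": "Apache-2.0"},
    "implementations": {
        "server": {"license": "Apache-2.0"},
        "client": {"license": "Apache-2.0"},
    },
    "data": {"license": "CC0-1.0"},               # material made available via the API
    "metadata": {"license": "CC0-1.0"},           # this entry itself: "facts are free"
}

print(json.dumps(api_entry, indent=2))
```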

To get even thicker, consider the architecture, for lack of a better term, around API development, services, and material accessed and updated via APIs. Just some keywords: Linked Open Data, P2P, federation, Lots of Copies Keep Stuff Safe, collaborative curation.

The other panelists were Pamela Samuelson, Lawrence Rosen, and Annette Hurst, moderated by David Berlind.

I’m fairly familiar with Samuelson’s and Rosen’s work and don’t have comments on what they said on the panel. If you want to read more, I recommend among Samuelson’s papers The Strange Odyssey of Software Interfaces and Intellectual Property Law which shows that the “API copyright emergency” of the panel title is recurrent and intertwined with patent, providing several decades of the pertinent history up to 2008. Contrary to my expectation in the notes above, Rosen didn’t get a chance to talk about the Open Web Foundation agreements, but you can read his 2010 article Implementing Open Standards in Open Source which covers OWF.

Hurst is a lawyer for Orrick representing Oracle in the Oracle v. Google case, so understandably advocated for API copyright, but in the process made several deeply flawed assertions that could have consumed the entire duration of the panel, but Berlind did a good job of keeping the conversation moving forward. Still, I want to mention two high-level ones here, with my paraphrases and responses:

Without software copyright the software economy would go away. This is refuted by software development not for the purposes of selling licenses (which is the vast majority of it), especially free/open source software development, and services (e.g., API provision, the source of which is often never published, though it ought be, see “going beyond” recommendations above). Yes the software economy would change, with less winner-take-all monopoly and less employment for Intellectual Parasite lawyers. But the software economy would be huge and very competitive. Software is eating the world, remember? One way to make it help rather than pejoratively eat the world is to eject the parasites along for the ride.

Open source can’t work without software copyright. This is refuted by 1) software source sharing before software copyright; 2) preponderance of permissively licensed open source software, in which the terms do not allow suing downstream developers who do not share back; 3) the difficulty of enforcing copyleft licenses which do allow for suing downstream developers who do not share back; 4) the possibility of non-copyright regulation to force sharing of source (indeed I see the charitable understanding of copyleft as prototyping such regulation; for perspective on the Oracle v. Google case from someone with a more purely charitable understanding of copyleft, see Bradley Kuhn); and 5) demand and supply mechanisms for mandating sharing of source (e.g., procurement policies, distribution policies such as Debian’s).

These came up because Hurst seemed to really want the audience to conflate software copyright in general (not at issue in the case, settled in a bad place since the early 1980s) and API copyright specifically. Regarding the latter, another point which could have been made is the extent to which free/open source software has been built around providing alternatives to proprietary software, often API-compatible. If API copyright could prevent compatible implementation without permission of any sort, open source, competition, and innovation would all be severely hampered.

There is a recent site called API Commons, which seems to be an API directory (Programmable Web, which ran the conference, also has one). My general suggestion to both would be to implement and facilitate putting all elements of APIs listed above in my notes in the commons. For example, they could clarify that API metadata they collect is in the public domain, publish it as Linked Open Data, and encourage API developers and providers they catalog to freely license specifications, documentation, implementations, and data, and note such in the directories.

In order to get a flavor for the conference, I listened to yesterday morning’s keynotes, both of which made valiant attempts to connect big picture themes to day to day API development and provision. Allow me to try to make connections back to “API commons”.

Sarah Austin, representing the San Francisco YMCA, pointed out that the conference is near the Tenderloin neighborhood, the poorest in central San Francisco. Austin asked if kids from the Tenderloin would be able to find jobs in the “API economy” or would they be priced out of the area (many tech companies have moved nearby in the last years, Twitter perhaps the best known).

Keith Axline claimed The Universe Is Programmable. We Need an API for Everything, or to some extent, that learning about the universe and how to manipulate it is like programming. Axline’s talk seemed fairly philosophical, but could be made concrete with reference to the Internet of Things, programmable matter, robots, nanobots, software eating the world … much about the world will indeed soon be software (programmable) or obsolete.

Axline’s conclusion was in effect largely about knowledge policy, including mourning energy wasted on IP, and observing that we should figure out public support for science or risk a programmable world dominated by IP. That might be part of it, but keeps the focus on funding, which is just where IP advocates want it — IP is an off-the-balance-sheets, “free” taking. A more direct approach is needed — get the rules of knowledge policy right, put freedom and equality as its top goals, reject freedom infringing regimes, promote commons (but mandating all these as a condition of public and publicly interested funding is a reasonable starting place) — given these objectives and constraints, then argue about market, government, or other failure and funding.

Knowledge policy can’t directly address Austin’s concerns in the Tenderloin, but it does indirectly affect them, and over the long term will tremendously affect them, in the Tenderloin and many other places. As the world accelerates its transition from an industrial to a knowledge-dominated economy, will that economy be dominated by monopoly and inequality or by freedom and equality? Will the former concentrations continue to abet instances of what Jane Jacobs called “catastrophic money” rushing into ill-prepared neighborhoods, or will the latter tendencies spread knowledge, wealth, and opportunity?