Post Peeves

Keep Fighting Forward

Tuesday, February 11th, 2014

Today is the day to mass call for regulation of mass surveillance. I did, please do it too.

I’m still underwhelmed by the rearguard nature of such actions, wonder how long they continue to be effective (e.g., when co-opted, or when policymakers realize mass calls don’t translate into votes, or forever…since at least 1996), and am even enraged by their focus on symptoms. But my feelings are probably wrong. Part of me applauds those who enjoy fighting the shortest term and broadest appeal possible battles. Such probably helps prevent things from getting worse, at least for a time, and that’s really valuable. Anyone who believes things must get worse before they get better is dangerous, because that’s when real trolls take over, damn your revolution.

I enjoyed Don Marti’s imperfect but perfectly provocative analogy, which I guess implies (he doesn’t say) the correct response to mass surveillance is to spend on end-to-end crypto, rejection of private tracking, decentralization, and other countermeasures, sealing net communications from security state poison. I’m all for that, and wish advocacy for same were a big part of mass calls to action like today’s. But I see the two as mostly complementary, as much as I’d like to scream “you’re doing it entirely wrong!”

Also QuestionCopyright’s assertion that Copyright + Internet = Surveillance. Or another version: Internet, Privacy, Copyright; Choose Two. I could quibble that these are too weak (freedom was infringed by copyright before the net) and too strong (not binary), but helpfully provocative.

Addendum: Also, Renata Avila:

For me is . Otherwise, we will be in serious trouble. Donate to resistance tools like or

Sleepwalking past Freedom’s Commons, or how peer production could increase democracy, equality, freedom, and innovation, all of them!

Sunday, February 9th, 2014

2007:

The most interesting parts of Yochai Benkler’s The Wealth of Networks concern how peer production facilitates liberal values. I’ll blog a review in the fullness of time.

In lieu of that which may never come, some motivated notes on Coase’s Penguin, or Linux and the Nature of the Firm (2002, 78 pages) and Freedom in the Commons: Towards a Political Economy of Information (2003, 32 pages; based on a 2002 lecture). A friend wanted to trial a book group with the former. Re-reading that led me to the latter, which I hadn’t read before. Reading them together, or even just the latter, might be a good alternative to reading The Wealth of Networks: How Social Production Transforms Markets and Freedom (2006, 473 pages).

As might be expected from decade-plus-old internet research, some of the examples in the papers and book are a bit stale, but sadly their fundamental challenge remains largely unacknowledged, and is only taken up as a byproduct. I would love to be convinced otherwise. Is the challenge (or my extrapolation) wrong, unimportant, or being met satisfactorily?

Excerpts from Freedom in the Commons (emphasis added by me in all quotes that follow):

[Commons-based peer production] opens a range of new opportunities for pursuing core political values of liberal societies—democracy, individual freedom, and social justice. These values provide three vectors of political morality along which the shape and dimensions of any liberal society can be plotted. Because, however, they are often contradictory rather than complementary, the pursuit of each of these values places certain limits on how we conceive of and pursue the others, leading different liberal societies to respect them in different patterns.

An underlying efficient limit on how we can pursue any mix of arrangements to implement our commitments to democracy, autonomy, and equality, however, has been the pursuit of productivity and growth.

[Commons-based peer production] can move the boundaries of liberty along all three vectors of liberal political morality.

There is no benevolent historical force, however, that will inexorably lead the technological-economic moment to develop towards an open, diverse, liberal equilibrium. If the transformation occurs, it will lead to substantial redistribution of power and money from the twentieth-century, industrial producers of information, culture, and communications—like Hollywood, the recording industry, and the telecommunications giants—to a widely diffuse population around the globe. None of the industrial giants of yore are going to take this redistribution lying down. Technology will not overcome their resistance through some insurmountable progressive impulse. The reorganization of production, and the advances it can bring in democracy, autonomy, and social justice will emerge, if it emerges, only as a result of social and political action. To make it possible, it is crucial that we develop an understanding of what is at stake and what are the possible avenues for social and political action. But I have no illusions, and offer no reassurances, that any of this will in fact come to pass. I can only say that without an effort to focus our attention on what matters, the smoke and mirrors of flashy toys and more convenient shopping will be as enlightening as Aldous Huxley’s soma and feelies, and as socially constructive as his orgy porgy.

Let us think, then, of our being thrust into this moment as a challenge. We are in the midst of a technological, economic, and organizational transformation that allows us to renegotiate the terms of freedom, justice, and productivity in the information society. How we shall live in this new environment will largely depend on policy choices that we will make over the next decade or two. To be able to understand these choices, to be able to make them well, we must understand that they are part of a social and political choice—a choice about how to be free, equal, and productive human beings under a new set of technological and economic conditions. As economic policy, letting yesterday’s winners dictate the terms of economic competition tomorrow is disastrous. As social policy, missing an opportunity to enrich democracy, freedom, and equality in our society, while maintaining or even enhancing our productivity, is unforgivable.

Although the claim that the Internet leads to some form or another of “decentralization” is not new, the fundamental role played in this transformation by the emergence of non-market, nonproprietary production and distribution is often over-looked, if not willfully ignored.

First, if the networked information economy is permitted to emerge from the institutional battle, it will enable an outward shift of the limits that productivity places on the political imagination. Second, a society committed to any positive combination of the three values needs to adopt robust policies to facilitate these modes of production, because facilitating these modes of production does not represent a choice between productivity and liberal values, but rather an opportunity actually to relax the efficient limit on the plausible set of political arrangements available given the constraints of productivity.

We are at a moment in our history at which the terms of freedom and justice are up for grabs. We have an opportunity to improve the way we govern ourselves—both as members of communities and as autonomous individuals. We have an opportunity to be more just at the very core of our economic system. The practical steps we must take to reshape the boundaries of the possible in political morality and to improve the pattern of liberal society will likely improve productivity and growth through greater innovation and creativity. Instead of seizing these opportunities, however, we are sleepwalking.

What arrangements favor reorganization towards commons-based peer production? From Coase’s Penguin:

This suggests that peer production will thrive where projects have three characteristics. First, they must be modular. That is, they must be divisible into components, or modules, each of which can be produced independently of the production of the others. This enables production to be incremental and asynchronous, pooling the efforts of different people, with different capabilities, who are available at different times. Second, the granularity of the modules is important and refers to the sizes of the project’s modules. For a peer production process to pool successfully a relatively large number of contributors, the modules should be predominately fine-grained, or small in size. This allows the project to capture contributions from large numbers of contributors whose motivation levels will not sustain anything more than small efforts toward the project. Novels, for example, at least those that look like our current conception of a novel, are likely to prove resistant to peer production. In addition, a project will likely be more efficient if it can accommodate variously sized contributions. Heterogeneous granularity will allow people with different levels of motivation to collaborate by making smaller- or larger-grained contributions, consistent with their levels of motivation. Third, and finally, a successful peer production enterprise must have low-cost integration, which includes both quality control over the modules and a mechanism for integrating the contributions into the finished product.

Regulators concerned with fostering innovation may better direct their efforts toward providing the institutional tools that would help thousands of people to collaborate without appropriating their joint product, making the information they produce freely available rather than spending their efforts to increase the scope and sophistication of the mechanisms for private appropriation of this public good as they now do.

That we cannot fully understand a phenomenon does not mean that it does not exist. That a seemingly growing phenomenon refuses to fit our longstanding perceptions of how people behave and how economic growth occurs counsels closer attention, not studied indifference and ignorance.  Commons-based peer production presents a fascinating phenomenon that could allow us to tap substantially underutilized reserves of human creative effort. It is of central importance that we not squelch peer production, but that we create the institutional conditions needed for it to flourish.

There’s been some progress on institutional tools (i.e., policy arrangements writ large, the result of “political action” above) in the 11 or so years since (e.g., Open Access mandates), but not nearly enough to outweigh global ratcheting of intellectual freedom infringing regimes, despite the occasional success of rearguard actions against such ratcheting. Neither these rearguard actions, nor mainstream (nor reformist) discussion of “reform” put commons at the center of their concerns. The best we can expect from this sleepwalking is to muddle through, with policy protecting and promoting commons where such is coincidentally aligned with some industrial interest (often simplified to “Google” in the past several years, but that won’t last forever).

My extrapolation (again, tell me if facile or wrong): shifting production arrangements so as to favor commons-based peer production is as important as, complementary to, and almost necessary for positive policy change. Commons-based product competition simultaneously changes the facts on the ground, the range of policies imaginable, and potentially creates a commons “industrial” interest group which is recognizably important to regulators and makes commons-based peer production favoring policy central to its demands — the likely Wikimedia response to the European Commission copyright consultation is a hopeful example.

There has been lots of progress on improving commons-based peer production (e.g., some trends), but also not nearly enough to keep up with proprietary innovation, particularly lacking and missing huge opportunities where proprietary incumbents’ real advantages sit — not production per se, but funding and distribution/marketing/cultural relevance making. Improving commons-based peer production, shifting the commanding heights (i.e., Hollywood premium video and massively expensive and captured pharma regulatory apparatus) to forms more amenable to commons-based peer production, and expanding the scope of commons-based peer production to include funding and relevance making are among the most potent political projects of our time.

Wake up. ^_^

RDFa initial context & one dc:

Tuesday, February 4th, 2014

One of the nice things to come out of RDFa 1.1 is its initial context — a list of vocabularies with prefixes which may be used without having to define them locally. In other words, just write, e.g., property="dc:title" without having to first write prefix="dc: http://purl.org/dc/terms/".
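
For instance, here is a minimal sketch (the heading text is a placeholder): both snippets below yield the same dc:title statement, but the first works only because dc: is predefined in the RDFa 1.1 initial context.

<!-- dc: resolves to http://purl.org/dc/terms/ via the RDFa 1.1 initial context -->
<h1 property="dc:title">An Example Title</h1>

<!-- equivalent markup with the prefix declared locally -->
<div prefix="dc: http://purl.org/dc/terms/">
  <h1 property="dc:title">An Example Title</h1>
</div>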

In addition to making RDFa a lot less painful to use, the list is a good starting place for figuring out what vocabularies to use (if you must), perhaps even for non-RDFa applications — the list is machine-readable of course; I was reminded to write this post when giving feedback on a friend’s proposal to use prefix:property headers in a CSV file for a custom application, and by a recent announcement of the addition of three new predefined prefixes.

Survey data such as Linked Open Vocabularies can also help figure out what to use. Unfortunately LOV and the RDFa 1.1 initial context don’t agree 100% on prefix naming, and neither provides much in the way of guidance. I think there’s room for a highly opinionated and regularly updated guide to what vocabularies to use. I’m no expert, it probably already exists — please inform me!

dc:

The first thing I’d put in such an opinionated guide is to start one’s vocabulary search with Dublin Core. Trivial, right? But there is an under-documented subtlety which I find myself pointing out when a friend runs something like the aforementioned by me — DC means DC Terms. While it’s obvious that DC Terms is a superset of DC Elements, it’s harder to find evidence that using the former is best practice for new applications, and that the latter is not still the canonical vocabulary to start with. What I’ve gathered on this follows. I realize that the URIs for individual properties and classes, the prefixes used to abbreviate those URIs, and the documents which define (in English and RDF) properties and classes are distinct but interdependent. Prefixes are surely the most trivial and uninteresting, but for most people I imagine they’re important signals and documentation, thus I go on about them…

Namespace Policy for the Dublin Core Metadata Initiative (DCMI) (emphasis added):

The DCMI namespace URI for the collection of legacy properties that make up the Dublin Core Metadata Element Set, Version 1.1 [DCMES] is: http://purl.org/dc/elements/1.1/

Dublin Core Metadata Element Set, Version 1.1 (emphasis added):

Since 1998, when these fifteen elements entered into a standardization track, notions of best practice in the Semantic Web have evolved to include the assignment of formal domains and ranges in addition to definitions in natural language. Domains and ranges specify what kind of described resources and value resources are associated with a given property. Domains and ranges express the meanings implicit in natural-language definitions in an explicit form that is usable for the automatic processing of logical inferences. When a given property is encountered, an inferencing application may use information about the domains and ranges assigned to a property in order to make inferences about the resources described thereby.

Since January 2008, therefore, DCMI includes formal domains and ranges in the definitions of its properties. So as not to affect the conformance of existing implementations of “simple Dublin Core” in RDF, domains and ranges have not been specified for the fifteen properties of the dc: namespace (http://purl.org/dc/elements/1.1/). Rather, fifteen new properties with “names” identical to those of the Dublin Core Metadata Element Set Version 1.1 have been created in the dcterms: namespace (http://purl.org/dc/terms/). These fifteen new properties have been defined as subproperties of the corresponding properties of DCMES Version 1.1 and assigned domains and ranges as specified in the more comprehensive document “DCMI Metadata Terms” [DCTERMS].

Implementers may freely choose to use these fifteen properties either in their legacy dc: variant (e.g., http://purl.org/dc/elements/1.1/creator) or in the dcterms: variant (e.g., http://purl.org/dc/terms/creator) depending on application requirements. The RDF schemas of the DCMI namespaces describe the subproperty relation of dcterms:creator to dc:creator for use by Semantic Web-aware applications. Over time, however, implementers are encouraged to use the semantically more precise dcterms: properties, as they more fully follow emerging notions of best practice for machine-processable metadata.

The first two paragraphs explain why a new vocabulary was minted (so that the more precise definitions of properties already in DC Elements do not change the behavior of existing implementations; had only new terms and classes been added, maybe they could have been added to the DC Elements vocabulary, but maybe this is ahistoric, as many of the additional “qualified” DC Terms existed since 2000). The third paragraph explains that DC Terms should be used for new applications. Unfortunately the text informally (the prefixes aren’t used anywhere) notes the prefixes dc: and dcterms:, which I’ve found is not helpful in getting people to focus only on DC Terms.

Expressing Dublin Core metadata using the Resource Description Framework also notes the dc: and dcterms: prefixes for use in the document’s examples (which don’t ever actually use dc:).

Some of these documents have been updated slightly, but I believe their current versions are little changed from about 2008, a year after the proposal of the DC Terms refinements.

How to use DCMI Metadata as linked data uses the dc: and dcterms: prefixes and is clear about the ranges of the properties of each: there is no incorrect usage of, e.g., purl.org/dc/elements/1.1/creator because it has no defined range nor domain, while the value of purl.org/dc/terms/creator must be a non-literal, a purl.org/dc/terms/Agent. Perhaps this makes DC Terms seem scarier and partially explains the persistence of DC Elements. More likely I’d guess few know about the difference, and lots of DC Terms properties with non-literal ranges are used with literals in the wild (I might be guilty on occasion).
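
To make the range distinction concrete, a small RDFa sketch of my own (prefixes declared inline for clarity; the example.org URIs are placeholders): the DC Terms property points at a resource, satisfying its Agent range, while the legacy DC Elements property takes a plain literal, which is permitted because it has no declared range.

<div prefix="dcterms: http://purl.org/dc/terms/ dc11: http://purl.org/dc/elements/1.1/">
  <!-- dcterms:creator with an IRI object, consistent with its dcterms:Agent range -->
  <p about="http://example.org/report">
    Written by <a property="dcterms:creator" href="http://example.org/people/alice">Alice</a>.
  </p>
  <!-- dc11:creator with a literal object; no range is defined, so this is not incorrect -->
  <p about="http://example.org/report">
    Written by <span property="dc11:creator">Alice</span>.
  </p>
</div>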

FAQ/DC and DCTERMS Namespaces:

It is not incorrect to continue using dc:subject and dc:title — a lot of Semantic Web data still does — and since the range of those properties is unspecified, it is not actually incorrect to use (for example) dc:subject with a literal value or dc:title with a non-literal value. However, good Semantic Web practice is to use properties consistently in accordance with formal ranges, so implementers are encouraged to use the more precisely defined dcterms: properties.
Update, December 2011: It is worth noting that the Schema.org initiative is taking a pragmatic approach towards the formal ranges of their properties:

We also expect that often, where we expect a property value of type Person, Place, Organization or some other subClassOf Thing, we will get a text string. In the spirit of “some data is better than none”, we will accept this markup and do the best we can.

What constitutes “best practice” in this area is bound to evolve with implementation experience over time.

There you have people supplying literals for properties expecting non-literals. Schema.org RDF mappings do not formally condone this pragmatic approach; otherwise you’d see the likes of the following (the hypothetical addition being xsd:string in the range):

schema:creator a rdf:Property;
    rdfs:label "Creator"@en;
    rdfs:comment "The creator/author of this CreativeWork or UserComments. This is the same as the Author property for CreativeWork."@en;
    rdfs:domain [ a owl:Class; owl:unionOf (schema:UserComments schema:CreativeWork) ];
    rdfs:range [ a owl:Class; owl:unionOf (schema:Organization schema:Person xsd:string) ];
    rdfs:isDefinedBy ;
    rdfs:isDefinedBy ;

Also from 2011, a discussion of what prefixes to use in the RDFa initial context. Decision (Ivan Herman):

For the records: after having also discussed on yesterday’s telecom, I have made the changes on the profile files yesterday evening. The prefix set in the profile for http://purl.org/dc/terms/ is set to ‘dc’.

Read the expert input of Dan Brickley, Mikael Nilsson, and Thomas Baker. The initial context defines both dc: and dcterms: as prefixes for DC Terms, relegating DC Elements to the dc11: prefix:

dc        http://purl.org/dc/terms/           Dublin Core Metadata Terms (DCMI Metadata Terms)
dcterms   http://purl.org/dc/terms/           Dublin Core Metadata Terms (DCMI Metadata Terms)
dc11      http://purl.org/dc/elements/1.1/    Dublin Core Metadata Element Set, Version 1.1

I found the above discussion on LOV’s entries for DC Terms and DC Elements, which use dcterms: and dce: prefixes respectively:

(2013-03-07) Bernard Vatant: Prefix restored to dcterms

(2013-06-17) Bernard Vatant: Although “dc” is often used as the prefix for this vocabulary, it’s also sometimes used for DC terms, so we preferred to use the less ambiguous “dce” and “dcterms” in LOV. See usage at http://prefix.cc/dc, http://prefix.cc/dce, http://prefix.cc/dcterms, and more discussion at http://bit.ly/uPuUTT.

I think the discussion instead supports using dc: and dc11: (because that’s what the RDFa initial context uses). LOV doesn’t have a public source repository or issue tracker currently, but I understand it eventually will.

Now I have this grab-bag blog post to send to friends who propose using DC Elements. Please correct me if I’m wrong, and especially if a more concise (on this topic) and credible document exists, so I can send that instead; perhaps something like an opinionated guide to metadata mentioned way above.

Another topic such a guide might cover, perhaps as a coda, would be what to do if you really need to develop a new vocabulary. One thing is that you really need to ask for help. The W3C now provides some infrastructure for doing this. Or, see some qualified dissent from a hugely entertaining blogger called Brinxmat.

Some readers of my blog who have bizarrely read through this post, or skipped to the end, might enjoy Brinxmat’s Attribution licences for data and why you shouldn’t use them (another future issue report for LOV, which uses CC-BY?); I wrote a couple posts in the same blogversation; also a relevant upgrade exhortation.

Technology and wealth Inequality Promotion

Thursday, January 30th, 2014

Sam Altman, Technology and wealth inequality:

Without intervention, technology will probably lead to an untenable disparity—so we probably need some amount of intervention. Technology also increases the total wealth in a way that mostly benefits everyone, but at some point the disparity just feels so unfair it doesn’t matter.

This widening wealth divide is happening at all levels—people, companies, and countries. And either it will keep going, or innovation will stop.

The very first intervention ought be in our innovation policy, which presently is tuned to maximize concentration of wealth and minimize the access of everyone to the benefits of innovation — because our innovation policy is a property/rent seeking regime. A few data points.

Such an intervention won’t stop innovation, but might change it, and we should want that. Beautiful progress is that which is produced by a freedom and equality respecting regime. We ought be suspicious and ashamed of progress which depends on infringing freedom and promoting inequality. If mass spectacle ends when the regime falls, all the better. We’ll love whatever culture we have and create, will be amazed by its innovation, in part encouraged through non-enclosing innovation policy.

If innovation-driven inequality is a big problem, we ought be more highly valuing (including figuring out how to characterize that value) and promoting existing systems which depend on and promote freedom and equality, i.e., commons-based ones such as free/open source software and the Wikimedia movement (and recursively working on equality and diversity within those systems).

Innovation could tend to increase inequality independent of wealth concentrating, property/rent-seeking based innovation policies and other political factors. If this is the case (or honestly even if it is not), I’m always disappointed that progressivity of tax systems isn’t central to the debate — and I don’t mean marginal income tax rates. Basically property > income > sales. Further, real property can’t be moved and taxing it doesn’t require extensive privacy invasions. In theory I’d expect the strongest states and most free and equal societies of the future to strongly prefer real property taxation over other systems. But perhaps path dependencies and other factors will swamp this (and innovation policy as well).

Annual thematic doubt

Friday, January 10th, 2014

As promised, my first annual thematic doubt post, expressing doubts I have about themes I blogged about during 2013.

Intellectual Freedom

If this blog were to have a main purpose other than serving as a depository for my tangents, it’d be protecting and promoting intellectual freedom, in particular through the mechanisms of free/open/knowledge commons movements, and in reframing information and innovation policy with freedom and equality outcomes as top priorities. Some representative posts: Economics and the Commons Conference [knowledge stream] report, Flow ∨ incentive 2013 anthology winner, z3R01P. I’m also fond of pointing out where these issues surface in unusual places and surfacing them where they are latent.

I’m fairly convinced on this theme: regimes infringing on intellectual freedom are individual and collective mind-rot, and “merely” accentuate the tendencies toward inequality and control of whatever systems they are embedded in. Mitigating, militating against, outcompeting, and abolishing such regimes are trivially for the good, low risk, and non-revolutionary. But sure, I have doubts:

  • Though I see their accentuation of inequality and control as increasingly important, and high leverage for determining future outcomes, copyright and patent could instead be froth. The cause of intellectual freedom might be better helped by fighting for traditional free speech issues, for tolerance, against mass incarceration, against the drug war, against war, against corruption, for whatever one’s favored economic system is…
  • The voluntarily constructed commons that I emphasize (e.g., free software, open access) could be a trap: everything seems to grow fast as population (and faster, internet population) grows, but this could cloud the fact that these commons are being systematically outcompeted. Rather than being undersold, product competition from the commons will never outgrow its dwarfish forms, will never shift nor take the commanding heights (e.g., premium video, pharma), and hence the commons are a burden to both policy and beating-of-the-bounds competition. Plus, copyright and the like are mind-rot: generations of commons activists’ minds have been rotted and co-opted by learning to work within protectionist regimes rather than fighting and ignoring them.
  • An intellectual freedom infringing regime which produced faster technical innovation than an intellectual freedom respecting regime could render the latter irrelevant, like industrial societies rendered agricultural societies irrelevant, and agricultural societies rendered hunter-gatherer societies irrelevant, whatever the effects of those transitions on freedom and other values were. I don’t believe the current regime is anywhere close to being such a thing, nor are the usual “IP maximalism” reforms taking it in that direction. But it is possible that innovation policy is all that matters. Neither freedom and equality nor the rents of incumbents matter, except as obstacles and diversions from discovering and implementing innovation policy optimized to produce the most technical innovation.

I’m not, but can easily imagine being won over by these doubts. Each merits engagement, which could result in much stronger arguments for intellectual freedom, especially knowledge commons.

Critical Cheering

Unplanned, unnoticed by me until late in the year, my most pervasive subtheme was criticism-embedded-in-praise of free/open/commons entities and actions. Representative posts, title replaced with main target: Creative Commons, crowdfunding, Defensive Patent License, Document Freedom Day, DRM-in-HTML5 outrage, EFF, federated social web, Internet Archive, Open Knowledge Foundation, SOPA/ACTA online protests, surveillance outrage, and the Wikimedia movement.

This is an old theme: examples from 2004, 2005, 2006, 2007, 2008, 2011, and 2012. 2009 and 2010 are absent, but the reason for my light blogging here bears some relation to the theme: those are the years I was, in theory, most intensely trying to “walk my talk” at Creative Commons (and mostly failed, side-tracked by trying to get the organization to follow much more basic best practices, and by vast amounts of silliness).

Doubts about the cheering part are implied in the previous section. I’ll focus on the criticism here, but cheering is the larger component, and real: of entities criticized in the above links, in 2013 I donated money to at least EFF, FSF, and Internet Archive, and uncritically promoted all of them at various points. The criticism part amounts to:

  • Gains could be had from better coordination among entities and across domains, ranging from collaboration toward a short term goal (e.g., free format adoption) to diffuse mutual reinforcement that comes from shared knowledge, appreciation, and adoption of free/open/commons tools and materials across domains (e.g., open education people use open source software as an inherent part of their practice of openness, and vice versa).
  • The commons are politically potent, in at least two ways: minimally, as existence proof for creativity and innovation in an intellectual freedom respecting regime (carved out); and vastly underappreciated, as destroyer of rents dependent on the intellectual freedom infringing regime, and of resources available for defending those rents and the regime. Commons are not merely to be protected from further bad policy, but are actors in creating a good policy environment, and should be promoted at every turn.

To be clear, my criticism is not usually a call for more “radical” or “extreme” steps or messages, rather more fulsome and coordinated ones. Admittedly, sometimes it may be hard to tell the difference — and this leads to my doubts:

  • Given that coordination is hard, gaining knowledge is expensive, and optimization path dependent, the entities and movements I criticize may not have room to improve, at least not in the direction I want them to improve in. The cost of making “more fulsome and coordinated” true might be greater than mutual reinforcement and other gains.
  • See the second doubt in the previous section — competition from the commons might be futile. Rather than promoting them at every turn, they should sometimes be held up as victims of bad policy, to be protected, and sometimes hidden from policy discourse.

The first doubt is surely merited, at least for many entities on many issues. For any criticism I have in this space, it makes sense to give the criticized the benefit of the doubt; they know their constraints pretty well, while I’m just making abstract speculations. Still, I think it’s worthwhile to call for more fulsome and coordinated strategy in the interstices of these movements, e.g., conversation and even this blog, in the hope of long-term learning, played out over years in existing entities and movements, and new ones. I will try henceforth to do so more often in a “big picture” way, or through example, and less often through criticism of specific choices made by specific entities — in retrospect the stream of the latter on this blog over the last year has been tedious.

International Apartheid

For example: Abolish Foreignness, Do we have any scrap of evidence that [the Chinese Exclusion Act] made us better off?, and Opposing “illegal” immigration is xenophobic, or more bluntly, advocating for apartheid “because it’s the law”. I hinted at a subtheme about the role of cities, to be filled out later.

The system is grossly unjust and ought be abolished, about that I have no doubt. Existing institutions and arrangements must adapt. But, two doubts about my approach:

  • Too little expression of empathy with those who assume the goodness of current policy. Fear of change, competition, “other” are all deep. Too little about how the current unjust system can be unwound in a way that mitigates any reality behind these fears. Too little about how benefits attributed to the current unjust system can be maintained under a freedom respecting regime. (This doubt also applies to the intellectual freedom theme.)
  • Figuring out development might be more feasible, and certainly would have more impact on human welfare and individual autonomy, than smashing the international apartheid system. Local improvements to education, business, and governance are what all ought focus on — though development economics has a dismal record, it at least has the right target. Migration is a sideshow.

As with the intellectual freedom theme, these doubts merit engagement, and such will strengthen the case for freedom. But even moreso than in the case of intellectual freedom infringing regimes, the unconscionable and murderous injustice of the international apartheid regime must be condemned from the rooftops. It is sickening and unproductive to allow discourse on this topic to proceed as if the regime is anything but an abomination, however unfeasible its destruction may seem in the short term.

Politics

Although much of what I write here can be deemed political, one political theme not subsumed by others is inadequate self-regulation of the government “market”, e.g., What to do about democratically elected terrorist regimes, Suppose they gave a war on terror and a few exposed it as terror, and Why does the U.S. federal government permit negative sum competition among U.S. states and localities?

The main problem with this theme is omission rather than doubt — no solutions proposed. Had I done so, I’d have plenty to doubt.

Refutation

I fell behind, refuting only posts from the first and second quarters of 2005. My doubt about this enjoyable exercise is that it is too contrived. Many of the refutations are flippant and don’t reflect any real doubts or knowledge gained in the last 8 years. That doubt is what led me to the exercise of this post. How did I do?

Blog indie radio static

Monday, December 23rd, 2013

If you still blog on your own site, read Jeffrey Zeldman’s encomium and leave a comment.

As mentioned previously, the IndieWeb movement is bringing blog culture and technology forward. Watch Kevin Marks’ talk (slides).

Tantek Çelik is the person to follow, e.g., a recent post with essential history.

The IndieWeb movement is tiny. I’m merely a fan. WordPress (I use the software for this blog, but the wordpress.com service is at least as important) has done far more than any other entity/project to keep recognizable blogging relatively popular. For better or worse though, WordPress-based innovation seems to largely be in the direction of tackling various Content Management System problems, and following various trends in blog-like/competitor software/services, e.g., media sharing. Viewed in a really uncharitable light, wordpress.com is competing largely by bringing the features of a silo to blogging, rather than improving the technology and culture for independent website publishing/blogging. On the flipside, the ubiquity of WordPress probably makes it the most important software for further development of the IndieWeb.

On net the dominance of WordPress is probably good, but I also want to see more crazy blog/IndieWeb software, crazy meaning taking a very different approach rather than copying WordPress without its ecosystem. For example, remember the “bliki” concept? (Of course many implementations exist, ikiwiki being fairly popular, at least viewed through the lens of Planet Debian.) A few months ago there was a thread that touched on blogging within MediaWiki. Some of the posts (which I haven’t bothered to look up) said that MediaWiki makes commenting difficult. My reaction is that needs to be fixed anyway!

Another blog technology (and a bit of culture) development of note is use of a revision control system (usually git, usually public; wikis provide a facsimile of this; WordPress stores revisions, but those are never public and I find them hard to use for anything other than a first tier backup/recovery) to write/manage/publish posts, usually associated with publishing a static site/blog. I find this compelling, but as far as I know IndieWeb/blog technology beyond feeds is underdeveloped for any static site generator (e.g., a popular one).

Jason Kottke recently wrote The blog is dead, long live the blog, which includes some bits I wasn’t fully aware of…about “social media”:

Twitter is coming to resemble radio news as media outlets repost the same stories throughout the day, ICYMI (in case you missed it).

The only mega-tweeter I follow (actually on pump.io) is Glyn Moody. I noticed Moody recently started reposting the same stories multiple times. I find this pretty annoying. Note I highly recommend following Moody; one of the few people I know of who follows closely and comments intelligently on all varieties of knowledge commoning (and beyond), something I find sorely lacking in the world. Fortunately Moody publishes his tweets for each day on a blog. So now I’m following him with a blog feed reader.

I have to imagine self-reposting and general “optimization” of tweeting will lead Twitter down the path Facebook has taken, ordering posts by “importance” rather than recency. Maybe that’d be good for readers, but grants the silos more power. IndieWebber Ben Werdmuller writes “in the future we can each have our own algorithms.” Hopefully.

I occasionally blog about blogging in my blogs category. As far as I know my would-be contribution to blog culture, self-refutation, has not been copied. I intend to add a variation, perhaps annual thematic doubt, which would be far less daunting than individual post refutation.

Greatest month in history?

Tuesday, December 17th, 2013

Yesterday, 11 years ago, today, 22 years and 4 months. Recently I noticed an observation in slides by Glyn Moody on Open Access (related editorial):

25 August 1991 – Finnish student, Linus Torvalds, announced the start of Linux
23 August 1991 – World Wide Web released publicly
14 August 1991 – Launch of arXiv

Moody titled the slide with above items “greatest week in history?” — arXiv is listed as 19 August, which I think must be a transcription error. Still, perhaps the greatest month in some assessment which grants something like knowledge commons supreme importance; perhaps future conventional wisdom. Those three are a nice mix of software, protocols, literature, data, and infrastructure.

The world’s tallest broadcast tower collapsed 8 August 1991 to make way for somewhat less centralized communications.

Linux and the Web make Wikipedia’s short list of August 1991 events, which is dominated by the beginning of the final phase of the dissolution of the Soviet Union. (I have an old post which is a tiny bit relevant to tying this all together, however unwarranted that may be.)

arXiv isn’t nearly as well known to the general public as Linux, which isn’t nearly as well known as the Web. In some ways arXiv is still ahead of its time. The future takes a long time to be distributed — Moody’s cover slide is titled “half a revolution”. Below I’ve excerpted a few particularly enjoyable paragraphs and footnotes from It was twenty years ago today… by arXiv founder Paul Ginsparg (who, Moody notes, knew of GNU via a brother). I’ve bolded a couple phrases and added one link for additional entertainment value. The whole 9 page paper (PDF) is worth a quick read (I can’t help but notice and enjoy the complete absence of two words: “copyright” and “license”).

The exchange of completed manuscripts to personal contacts directly by email became more widespread, and ultimately led to distribution via larger email lists.13 The latter had the potential to correct a significant problem of unequal access in the existing paper-preprint distribution system. For purely practical reasons, authors at the time used to mail photocopies of their newly minted articles to only a small number of people. Those lower in the food chain relied on the beneficence of those on the A-list, and aspiring researchers at non-elite institutions were frequently out of the privileged loop entirely. This was a problematic situation, because, in principle, researchers prefer that their progress depends on working harder or on having some key insight, rather than on privileged access to essential materials.

By the spring of 1991, I had moved to the Los Alamos National Laboratory, and for the first time had my own computer on my desk, a 25 MHz NeXTstation with a 105 Mb hard drive and 16 Mb of RAM. I was thus fully cognizant of the available disk and CPU resources, both substantially larger than on a shared mainframe, where users were typically allocated as little as the equivalent of 0.5 Mb for personal use. At the Aspen Center for Physics, in Colorado, in late June 1991, a stray comment from a physicist, concerned about emailed articles overrunning his disk allocation while traveling, suggested to me the creation of a centralized automated repository and alerting system, which would send full texts only on demand. That solution would also democratize the exchange of information, leveling the aforementioned research playing field, both internally within institutions and globally for all with network access.

Thus was born xxx.lanl.gov,18 initially an automated email server (and within a few months also an FTP server), powered by a set of csh scripts.19 It was originally intended for about 100 submissions per year from a small subfield of high-energy particle physics, but rapidly grew in users and scope, receiving 400 submissions in its first half year. The submissions were initially planned to be deleted after three months, by which time the pre-existing paper distribution system would catch up, but by popular demand nothing was ever deleted. (Renamed in late 1998 to arXiv.org, it has accumulated roughly 700,000 total submissions [mid Aug 2011], currently receives 75,000 new submissions per year, and serves roughly one million full text downloads to about 400,000 distinct users per week.) The system quickly attracted the attention of existing physics publishers, and in rapid succession I received congenial visits from the editorial directors of both the American Physical Society (APS) and Institute of Physics Publishing (IOPP) to my little 10’x10’ office. It also had an immediate impact on physicists in less developed countries, who reported feeling finally in the loop, both for timely receipt of research ideas and for equitable reading of their own contributions. (Twenty years later, I still receive messages reporting that the system provides to them more assistance than any international organization.)

In the fall of 1992, a colleague at CERN emailed me: ‘Q: do you know the worldwide-web program?’ I did not, but quickly installed WorldWideWeb.app, serendipitously written by Tim Berners-Lee for the same NeXT computer that I was using, and with whom I began to exchange emails. Later that fall, I used it to help beta-test the first US Web server, set up by the library at the Stanford Linear Accelerator Center for use by the high-energy physics community.

Not everyone appreciated just how rapidly things were progressing. In early 1994, I happened to serve on a committee advising the APS about putting Physical Review Letters online. I suggested that a Web interface along the lines of the xxx.lanl.gov prototype might be a good way for the APS to disseminate its documents. A response came back from another committee member: “Installing and learning to use a WorldWideWeb browser is a complicated and difficult task — we can’t possibly expect this of the average physicist.”

13The most significant of these was maintained by Joanne Cohn, then a postdoctoral associate at the IAS Princeton, who manually collected and redistributed preprints (originally in the subject area of matrix models of two dimensional surfaces) to what became a list of over a hundred interested researchers, largely younger postdocs and grad students. This manual methodology provided an important proof of concept for the broader automated and archival system that succeeded it, and her distribution list was among those used to seed the initial hep-th@xxx.lanl.gov userbase.

18The name xxx was derived from the heuristic I’d used in marking text in TeX files for later correction (i.e., awaiting a final search for all appearances of the string ‘xxx’, which wouldn’t otherwise appear, and for which I later learned the string ‘tk’ is employed by journalists, for similar reasons).

19The csh scripts were translated to Perl starting in 1994, when NSF funding permitted actual employees.

(the rest)

[Semi]Commons Coordinations & Copyright Choices 4.0

Monday, December 9th, 2013

CC0 is superior to any of the Creative Commons (CC) 4.0 licenses, because CC0 represents a superior policy (public domain). But if you’re unable or unwilling to upgrade to CC0, the CC 4.0 licenses are a great improvement over the 3.0 licenses. The people who did the work, led by Diane Peters (who also led CC0), many CC affiliates (several of whom were also crucial in making CC0 a success), and Sarah Pearson and Kat Walsh, deserve much praise. Bravo!

Below read my idiosyncratic take on issues addressed and not addressed in the 4.0 licenses. If that sounds insufferable, but you want to know about details of the 4.0 licenses, skip to the excellent version 4 and license versions pages on the CC wiki. I don’t bother linking to sections of those pages pertinent to issues below, but if you want detailed background beyond my idiosyncratic take on each issue, it can be found there.

Any criticism I have of the 4.0 licenses concerns policy choices and is not a criticism of the work done or people involved, other than myself. I fully understand that the feasible choices were and are highly constrained by previous choices and conditions, including previous versions of the CC licenses, CC’s organizational history, users of CC licenses, and the overall states of knowledge commons and info regulation and CC’s various positions within these. I always want CC and other “open” organizations to take as pro-commons of a stance as possible, and generally judge what is possible to be further than that of the conventional wisdom of people who pay any attention to this scene. Sometimes I advocated for more substantial policy changes in the 4.0 licenses, though just as often I deemed such advocacy futile. At this point I should explain that I worked for CC until just after the 4.0 licenses process started, and have consulted a bit on 4.0 licenses issues since then as a “fellow”. Not many people were in a better position to influence the 4.0 licenses, so any criticisms I have are due to my failure to convince, or perhaps incorrect decision to not try in some cases. As I’ve always noted on this blog, I don’t represent any organization here.

Desiderata

Pro-commons? As opposed to what? The title of the CC blog post announcing the formal beginning of work on the new licenses:

Copyright Experts Discuss CC License Version 4.0 at the Global Summit

My personal blog post:

Commons experts to develop version 4.0 of the CC licenses

The expertise that CC and similar organizations ought to bring to the world is commons coordination. There are many copyright experts in the world, and understanding public copyright licenses, and drafting more, are no great intellectual challenges. The copyright expertise needed to do so ought be purely instrumental, serving the purpose of commons coordination. Or so I think.

Throughout CC’s existence, it has presented itself, and been perceived as, to varying extents, an organization which provides tools for copyright holders to exercise their copyrights, and an organization which provides tools for building a commons. (What it does beyond providing tools adds another dimension, not unrelated to “copyright choice” vs. “commons coordination”; there’s some discussion of these issues in a video included in my personal post above.)

I won’t explain in this post, but I think the trend through most of CC’s history has been very slow movement in the “commons coordination” direction, and the explicit objectives of the 4.0 versioning process fit that crawl.

“Commons coordination” does not directly imply the usual free/open vs. proprietary/closed dichotomy. I think it does mostly fall out that way, in small part due to “license interoperability” practicalities, but probably mostly because I think the ideal universal copyregulation policy corresponds to the non-discriminatory commons that “free/open” terms and communities carve out on a small scale, including the pro-sharing policy that copyleft prototypes, and excluding any role for knowledge enclosure, monopoly, property, etc. But it is certainly possible, indeed usual, to advocate for a mixed regime (I enjoy the relatively new term “semicommons”, but if you wish to see it everywhere, try every non-demagogic call for “balance”), in which case [semi]commons tools reserving substantial exclusivity (e.g., “commercial use”) make perfect sense for [semi]commons coordination.

Continuing to ignore the usual [non-]open dichotomy, I think there still are a number of broad criteria for would-be stewards of any new commons coordinating license (and make no mistake, a new version of a license is a new license; CC introduced 6 new licenses with 4.0) to consider carefully, and which inform my commentary below:

  • Differentiation: does the new license implement some policy not currently available in existing licenses, or at least offer a great improvement in implementation (not to provide excuses for new licenses, but the legal text is just one part of implementation; also consider branding/positioning, understandability, and stewardship) of policy already available?
  • Permissions: does the new license grant all permissions needed to realize its policy objective?
  • Regulation: how does the license’s policy objective model regulation that ought be adopted at a wider scale, e.g., how does it align with usual “user rights” and “copyright reform” proposals?
  • Interoperability: is the new license maximally compatible with existing licenses, given the constraints of its policy objectives, and indeed, even at the expense of its immediate policy objectives, given that incompatibility, non-interoperability, and proliferation must fragment and diminish the value of commons?
  • Cross-domain impact: how does the license impact license interoperability and knowledge sharing across fields/domains/communities (e.g., software, data, hardware, “content”, research, government, education, culture…)? Does it further silo existing domains, a tragedy given the paucity of knowledge about governing commons in the world, or facilitate sharing and collaboration across domains?

Several of these are merely a matter of good product design and targeting, and would also apply to an organization that really had a primary goal of offering copyright holders additional choices the organization deems are under-provided. I suspect there is plenty of room for innovation in “copyright choice” tools, but I won’t say more in this post, as such have little to do with commons, and whatever CC’s history of copyright choice rhetoric and offering a gaggle of choices, creating such tools is distant from its immediate expertise (other than just knowing lots about copyright) and light years from much of its extended community.

Why bother?

Apart from amusing myself and a few others, why this writeup? The CC 4.0 licenses won’t change, and hopefully there won’t be CC 4.1 or 4.5 or 5.0 licenses for many years. Longevity was an explicit goal for 4.0 (cf. 1.0: 17 months, 2.0: 12 months; 2.5: 20 months; 3.0: 81 months). Still, some of the issues covered here may be interesting to people choosing to use one of the CC 4.0 licenses, and people creating other licenses. Although nobody wants more licenses, often called license proliferation, as an end in itself, many more licenses is the long term trend, of which the entire history of CC is just a part. Further, more licenses can be a good, to the extent they are significantly different from and better than, and as compatible as possible with, existing licenses.

To be totally clear: many new licenses will be created and used over the next 10 years, intended for various domains. I would hope, some for all domains. Proliferators, take heed!

Development tools

A 4.0 wiki page and a bunch of pages under that were used to lay out objectives, issues and options for resolution, and link to drafts. Public discussion was on the cc-licenses list, with tangential debate pushed to cc-community. Drafts and changes from previous drafts were published as redlined word processor files. This all seems to have worked fairly well. I’d prefer drafts as plain text files in a git repository, and an issue tracker, in addition to a mailing list. But that’s a substantially different workflow, and word processor documents with track changes and inline comments do have advantages, not limited to lawyers being familiar with those tools.

100% wiki would also work, with different tradeoffs. In the future additional tools around source repositories, or wikis, or wikis in source repositories, will finally displace word processor documents, but the tools aren’t there yet. Or in the bad future, all licenses will be drafted in word processors in the cloud.

(If it seems that I’m leaving a lot out, e.g., methodology for gathering requirements and feedback, in-person and teleconferences, etc., I merely have nothing remotely interesting to say, and used “tools” rather than “process” to narrow scope intentionally.)

Internationalization

The 4.0 licenses were drafted to be jurisdiction neutral, and there will be official, equivalent, verbatim language translations of the licenses (the same as CC0, though I don’t think any translations have been made final yet). Legal “porting” to individual jurisdictions is not completely ruled out, but I hope there will be none. This is a wholly positive outcome, and probably the most impactful change for CC itself (already playing out over the past few years, e.g., in terms of scope and composition of CC affiliates), though it is of small direct consequence to most users.

Now, will other license drafters and would-be drafters follow CC’s lead and stop with the vanity jurisdiction license proliferation already?

Databases

At least the EU, Mexico, Russia, and South Korea have created “database rights” (there have been attempts in other jurisdictions), copyright-like mechanisms for entities that assemble databases to persecute others who would extract or copy substantial portions of said databases. Stupid policies that should be abolished, copyright-like indeed.

Except for CC0 and some minor and inconsistent exceptions (certain within-EU jurisdiction “port” versions), CC licenses prior to 4.0 have not “covered” database rights. This means, modulo any implied license which may or may not be interpreted as existing, that a prior-to-4.0 (e.g., CC-BY-3.0) licensee using a database subject to database restrictions (when this occurs is a complicated question) would have permission granted by the licensor around copyright restrictions, but not around database restrictions. This is a pretty big fail, considering that the first job of a public license is to grant adequate permissions. Actual responses to this problem:

  • Tell all database publishers to use CC0. I like this, because everyone should just use CC0. But, it is an inadequate response, as many will continue to use less permissive terms, often in the form of inadequate or incompatible licenses.
  • Only waive or license database restrictions in “ports” of licenses to jurisdictions in which database restrictions exist. This is wholly inadequate, as in the CC scheme, porting involves tailoring the legal language of a license to a jurisdiction, but there’s no guarantee a licensor or licensee in such jurisdictions will be releasing or using databases under one of these ports, and in fact that’s often not the case.
  • Have all licenses waive database restrictions. This sounds attractive, but is mostly confusing — it’s very hard to discern when only database and not copyright restrictions apply, such that a licensee could ignore a license’s conditions — and like “tell database publishers to use CC0” would just lead many to use different licenses that do purport to conditionally license database rights.
  • Have all licenses grant permissions around database restrictions, under whatever conditions are present in the license, just like copyright.

I think the last is the right approach, and it’s the one taken with the CC 4.0 licenses, as well as by other licenses which would not exist but for CC 3.0 licenses not taking this approach. I’m even more pleased with their generality, because other copyright-like restrictions are to be expected (emphasis added):

Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.

The exclusions of 2(b)(1)-(2) are a mixed bag; see moral and personality rights, and patents below.

CC0 also includes a definition with some generality:

Copyright and Related Rights include, but are not limited to, the following:

  1. the right to reproduce, adapt, distribute, perform, display, communicate, and translate a Work;
  2. moral rights retained by the original author(s) and/or performer(s);
  3. publicity and privacy rights pertaining to a person’s image or likeness depicted in a Work;
  4. rights protecting against unfair competition in regards to a Work, subject to the limitations in paragraph 4(a), below;
  5. rights protecting the extraction, dissemination, use and reuse of data in a Work;
  6. database rights (such as those arising under Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, and under any national implementation thereof, including any amended or successor version of such directive); and
  7. other similar, equivalent or corresponding rights throughout the world based on applicable law or treaty, and any national implementations thereof.

As does GPLv3:

“Copyright” also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.

Do CC0 and CC 4.0 licenses cover semiconductor mask restrictions (best not to use them for this purpose anyway; see patents below)? Does GPLv3 cover database restrictions? I’d hope the answer is yes in each case, and that if the answer is no or ambiguous, future licenses will further improve on the generality of the restrictions around which permissions are granted.

There is one risk in licensing everything possible, and culturally, it seems, specifically in licensing database rights: the impression that licenses which do so ‘create obligations’ related to those rights. I find it odd to think of a conditional permission as the creation of an obligation, when the user’s situation without said permission is unambiguously worse, i.e., no permission at all. Further, this impression is a problem for any non-maximally-permissive license around copyright, not only around database or other copyright-like rights.

In my opinion the best a public license can do is to grant permissions (conditionally, if not a maximally permissive license) around restrictions with as much generality as possible, and expressly state that a license is not needed (and therefore its conditions do not apply) if a user can ignore the underlying restrictions for some other reason. Can the approach of the CC version 4.0 licenses to the latter be improved?

For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.

These are all trivialities for license nerds. For publishers and users of databases: Data is free. Free the data!

Moral and personality rights

CC 4.0 licenses address them well:

Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.

To understand just how well, CC 3.0 licenses say:

Except as otherwise agreed in writing by the Licensor or as may be otherwise permitted by applicable law, if You Reproduce, Distribute or Publicly Perform the Work either by itself or as part of any Adaptations or Collections, You must not distort, mutilate, modify or take other derogatory action in relation to the Work which would be prejudicial to the Original Author’s honor or reputation. Licensor agrees that in those jurisdictions (e.g. Japan), in which any exercise of the right granted in Section 3(b) of this License (the right to make Adaptations) would be deemed to be a distortion, mutilation, modification or other derogatory action prejudicial to the Original Author’s honor and reputation, the Licensor will waive or not assert, as appropriate, this Section, to the fullest extent permitted by the applicable national law, to enable You to reasonably exercise Your right under Section 3(b) of this License (right to make Adaptations) but not otherwise.

Patents and trademark

Prior versions were silent, CC 4.0 licenses state:

Patent and trademark rights are not licensed under this Public License.

Perhaps some potential licensor will be reassured, but I consider this unnecessary and slightly harmful, replicating the main deficiency of CC0. The explicit exclusion makes it harder to see an implied license. This is especially troublesome when CC licenses are used in fields in which patents can serve as a barrier. Software is one, for which CC has long disrecommended use of CC licenses, largely because software is already well covered by licenses with which CC licenses are mostly incompatible; the explicit patent exclusion in the CC 4.0 licenses makes them even less suitable. Hardware design is another such field, but one with fragmented licensing, including use of CC licenses. CC should now explicitly disrecommend using CC licenses for hardware designs and declare CC-BY-SA-4.0 one-way compatible with GPLv3+, so that projects using one of the CC-BY-SA licenses for hardware designs have a clear path to a more appropriate license.

Patents of course can be licensed separately, and as I pointed out before regarding CC0, there could be curious arrangements for projects using such licenses with patent exclusions, such as only accepting contributions from Defensive Patent License users. But the better route for “open hardware” projects and the like to take advantage of this complementarity is to do both: use a copyright and related rights license that includes a patent peace clause, and join the DPL club.

DRM

CC 4.0 licenses:

The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures.

This is a nice addition, which had been previously suggested for CC 3.0 licenses and rejected — the concept copied from GPLv3 drafts at the time. I would have preferred to also remove the limited DRM prohibition in the CC licenses.

Attribution

The CC 4.0 licenses slightly streamline and clarify the substance of the attribution requirement, all to the good. The most important bit, itself only a slight streamlining and clarification of similar in previous versions:

You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.

This pulls in-the-wild use from near-zero to-the-letter compliance up to fairly high compliance.

I’m not fond of the requirement to remove attribution information if requested by the licensor, especially accurate information. I don’t know whether a licensor has ever made such a request; if none has, the clause is merely pointless rather than harmful. Not quite, though, as it does make for a talking point.

NonCommercial

not primarily intended for or directed towards commercial advantage or private monetary compensation. For purposes of this Public License, the exchange of the Licensed Material for other material subject to Copyright and Similar Rights by digital file-sharing or similar means is NonCommercial provided there is no payment of monetary compensation in connection with the exchange.

Not intended to be a substantive change, but I’ll take it. I’d have preferred a probably more significantly narrowed definition and a re-branding so as to increase the range of and differentiation among the licenses that CC stewards. But at the beginning of the 4.0 licenses process, I expected no progress, so am not disappointed. Branding and other positioning changes could come post-launch, if anyone is so inclined.

I think the biggest failure of the range of licenses with an NC term (and there are many preceding CC) is not confusion and pollution of the commons (very roughly the complaints of people who would like NC to have a more predictable meaning, and of those who think NC offers inadequate permissions, respectively), but lack of valuable use. Licenses with the NC term are certainly used for hundreds of millions of photos and web pages, and some (hundreds of?) thousands of songs, videos, and books, but in few cases does either the licensor or the public gain significant value above what would have been achieved if the licensor had simply offered gratis access (i.e., put stuff on the web, which is incredibly valuable even with no permissions granted). As far as I know, NC licenses haven’t played a significant role in enabling (again, relative to gratis access) any disruptive product or policy, and their use by widely recognized artists and brands is negligible (cf. CC-BY-SA, which Wikipedia and other mass collaboration projects rely on to exist, and CC-BY and CC0, which are part of disruptive policy mandates).

CC is understandably somewhat stuck between free/open norms, which make licenses with the NC term an embarrassment, and their numerically large but low-value uses. A license steward or would-be steward that really believed a semicommons license regime could do much more would try to break out of this rut by completely rethinking the product (or that part of the product line), probably resulting in something much more different from the current NC implementation than the mere definitional narrowing and rebranding that I started out preferring. This could be related to my commentary on innovation in “copyright choice” tools above; whether the two are really the same thing would be a subject for inquiry.

NoDerivatives

If there were licenses that should not have been brought to version 4.0, at least not under the CC brand, they would have been CC-BY-NC-ND and CC-BY-ND.

Instead, an express permission to make derivatives so long as they are not shared was added. This change makes so-called text/content/data mining of any work under any of the CC version 4.0 licenses unambiguously permitted, and makes ND stick out a tiny bit less as an aberration from the CC license suite modeling some moderate copyright reform baseline.

There are some costs to this approach: surprise that a “no derivatives” license permits derivatives, slight reduction in scope and differentiation among licenses that CC stewards, giving credence to ND licenses as acceptable for scholarship, and abetting the impression that text/content/data mining requires permission at all. The last is most worrisome, but (as with similar worries around licensing databases) can be turned into a positive to the extent CC and everyone knowledgeable emphasizes that you ought not and probably don’t need a license; we’re just making sure you have the freedoms around CC licensed works that you ought to have anyway, in case the info regulation regime gets even worse — but please, mine away.

ShareAlike

This is the most improved of the named (BY/NC/ND/SA) elements in the CC 4.0 licenses, and the work is not done yet. But first, I wish it had been improved even more, by making more uses unambiguously “trigger” the SA provision. This has been done once before, starting in 2.0:

For the avoidance of doubt, where the Work is a musical composition or sound recording, the synchronization of the Work in timed-relation with a moving image (“synching”) will be considered a Derivative Work for the purpose of this License.

The obvious next expansion would have been use of images (still or moving) in contextual relation to other material, e.g., illustrations used in a text. Without this expansion, CC-BY-SA and CC-BY-NC-SA are essentially identical to CC-BY and CC-BY-NC respectively for the vast majority of actual “reuse” instances. Such an expansion would have substantially increased the range of and differentiation among the licenses that CC stewards. The main problem with such an expansion (apart from specifying it exactly) would be increasing the cost of incompatibility, where texts and images use different licenses. This problem would be mitigated by increasing compatibility among copyleft licenses (below), or could be eliminated by broadening the SA licensing requirement for uses triggered by the expansion, e.g., to any terms granting at least equivalent permissions, such that a CC-BY-SA illustration could still be used in a text licensed under CC-BY or CC0. Such an expansion did not make the cut, but I think that together with the aforementioned broadening of licensing requirements, such a modulation (neither strictly “stronger” nor “weaker”) would make for an interesting and heretofore unimplemented approach to copyleft, in some future license.

Apart from a subtle improvement that brings SA closer to a full “or later versions” license, and reflects usual practice and understanding (incidentally, “no sublicensing” in non-SA licenses remains pointless, is not to be found in most non-CC permissive licenses, and should not be replicated), the big improvements in CC 4.0 licenses with the SA element are the addition of the potential for one-way compatibility to CC-BY-SA, adding the same compatibility mechanism to CC-BY-NC-SA, and discussions with stewards of potentially compatible licenses which make the realization of compatibility more likely. (I would have included a variation on the more complex but in my view elegant and politically advisable mechanism introduced in MPL 2.0, which allows for continued use under the donor compatible license as long as possible. Nobody demanded such, so not adding the complexity was perhaps a good thing.)

I hope that in 2014 CC-BY-SA-4.0 will be declared bilaterally compatible with the Free Art License 1.3 (or, if a new FAL version is required, that one is worked on with bilateral compatibility as a hard requirement), and more importantly, that CC-BY-SA-4.0 is declared one-way compatible (as a donor) with GPLv3+. An immediate step toward those ends will be finalizing an additional statement of intent regarding the stewardship of licenses with the ShareAlike element.

Though I’ll be surprised if any license appears as a candidate for compatibility with CC-BY-NC-SA-4.0, adding the mechanism to that license is a good thing: as a matter of general license stewardship, reducing the barriers to someone else creating a better NC license (see above), and keeping “porting” completely outside the 4.0 license texts (hopefully there will be no porting, but if there is any, compatibility with the international versions in licenses with the SA element would be exclusively via the compatibility mechanism used for any potentially compatible license).

Tech

All license clauses have id attributes, allowing direct linking to a particular clause. These direct links are used for references within the licenses. These are big usability improvements.
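
As an aside, a minimal sketch of what that direct linking enables, assuming a locally saved copy of a 4.0 legalcode page (the filename below is hypothetical); it simply lists each clause id as a fragment link:

    # Sketch: list the fragment anchors (id attributes) in a saved copy of a
    # CC 4.0 legalcode page, so each clause can be linked to directly.
    # "by-nc-sa-4.0.html" is a hypothetical local filename.
    from html.parser import HTMLParser

    class IdCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.ids = []

        def handle_starttag(self, tag, attrs):
            # attrs is a list of (name, value) pairs for the tag
            for name, value in attrs:
                if name == "id" and value:
                    self.ids.append(value)

    collector = IdCollector()
    with open("by-nc-sa-4.0.html", encoding="utf-8") as f:
        collector.feed(f.read())

    base = "https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode"
    for clause_id in collector.ids:
        print(base + "#" + clause_id)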

I would have liked to see an expansive “tech” (including to some extent design) effort synchronized with the 4.0 licenses, from the practical (e.g., a canonical format for license texts, from which HTML, plain text, and others are generated; that may be HTML, but the current license HTML is inadequate for the task) to the impractical (except for increasing CC’s reputation, e.g., investigating whether any semantic annotation and structure, preferably building on existing research, would be useful, in theory, for the license texts, and possibly even a practical aid to translation), to testing further upgrades to the ‘legal user interface’ constituted by the license texts and “deed” summaries (e.g., combining these), to just bringing various CC tooling and documentation up to date with RDFa 1.1 Lite. But, some of these things could be done post-launch if anyone is so inclined, and my understanding is that CC has only a single technology person on staff, dedicated to creating other products, and most importantly, the ability to directly link to any license clause probably has more practical benefits than anything on my wishlist.

Readability

One of the best things about the CC 4.0 licenses is their increased understandability. This is corroborated by crude automated readability metrics below, but I suspect these do not adequately characterize the improvement, for they include three paragraphs of explanatory text not present in previous versions, probably don’t fully reflect the improvement of splitting hairball paragraphs into lists, and have no mechanism for accounting for how the improved usability of linking to individual clauses contributes to understandability.

CC-BY-NC-SA (the license with the most stuff in it, usually used as a drafting template for others) from version 1.0 through 4.0, including 4.0 drafts (lower numbers indicate better readability, except in the case of Flesch; Chars/(Flesch>=1) is my gross metric for how painful it is to read a document; see license automated readability metrics for an explanation):

SHA1 License Characters Kincaid ARI Coleman-Liau Fog Lix SMOG Flesch Chars/(Flesch>=1)
39b2ef67be9e5b4e743e5269a31ad1691515eede CC-BY-NC-SA-1.0 10228 13.3 16.3 14.2 17.0 59.7 14.2 48.4 211
5800ac2d32e35ace035cdcae693423cd9ff5bb6f CC-BY-NC-SA-2.0 11927 13.3 16.2 14.7 17.1 60.0 14.4 47.0 253
e5f44c2df6b1391d1ddb6efb2db6f90670e4ae67 CC-BY-NC-SA-2.5 12013 13.1 16.0 14.6 16.9 59.6 14.2 47.7 251
a63b7e81e7b9e30df5d253aed1d2991af47992df CC-BY-NC-SA-3.0 17134 16.4 19.7 14.2 20.6 67.0 16.3 38.8 441
8b36c30ed0510d9ca9c69a2ef826b9fd52992474 by-nc-sa-4.0d1 12465 13.0 15.0 14.9 16.3 57.4 14.0 43.9 283
4a87c7af5cde7729e2e456ee0e8958f8632e3005 by-nc-sa-4.0d2 11583 13.1 14.8 14.2 16.8 56.2 14.4 44.7 259
bb6f239f7b39343d62440bff00de24da2b3d256f by-nc-sa-4.0d3 14422 14.1 15.8 15.1 18.2 61.0 15.4 38.6 373
cf5629ae38a745f4f9eca429f7b26af2e71eb109 by-nc-sa-4.0d4 14635 13.8 15.6 15.5 17.8 60.2 15.2 38.6 379
a5e1b9829fd287cbe255df71eb9a5aad7fb19dbc by-nc-sa-4.0d4v2 14808 14.0 15.8 15.5 18.0 60.6 15.2 38.1 388
887f9a5da675cf681421eab3ac6d61f82cf34971 CC-BY-NC-SA-4.0 14577 13.1 14.7 15.7 17.1 58.6 14.7 40.1 363

Versions 1.0 through 4.0 of each of the six CC licenses brought to version 4.0, and CC0:

SHA1 License Characters Kincaid ARI Coleman-Liau Fog Lix SMOG Flesch Chars/(Flesch>=1)
74286ae0dfea38c489437bf659b209737945145c CC0-1.0 5116 16.2 19.5 15.0 19.5 66.3 15.6 36.8 139
c766cc6d5e63277e46a3d83c6254e3528082587b CC-BY-1.0 8867 12.6 15.5 14.1 16.4 57.8 13.8 51.3 172
bf23729bec8ffd0de4d319fb33395c595c5c762b CC-BY-2.0 9781 12.1 14.9 14.3 16.1 56.7 13.7 51.9 188
024bb6d37d0a17624cf532bd14fbd42e15c5a963 CC-BY-2.5 9867 11.9 14.7 14.2 15.8 56.3 13.6 52.6 187
20dc61b94cfe1f4ba5814b340095b4c3fa23e801 CC-BY-3.0 14956 16.1 19.4 14.1 20.4 66.1 16.2 40.0 373
00b29551deee9ced874ffb9d29379b92f1487045 CC-BY-4.0 13003 13.0 14.5 15.4 16.9 57.9 14.6 41.1 316
e0c4b13ec5f9b5702d2e8b88d98b803e07d65cf8 CC-BY-NC-1.0 9313 13.2 16.2 14.3 17.0 59.3 14.1 49.3 188
970421995789d2e8189bb12071ab838a3fcf2a1a CC-BY-NC-2.0 10635 13.1 16.1 14.6 17.2 59.5 14.4 48.1 221
08773bb9bc13959c6f00fd49fcc081d69bda2744 CC-BY-NC-2.5 10721 12.9 15.8 14.5 16.9 59.0 14.2 48.9 219
9639556280637272ace081949f2a95f9153c0461 CC-BY-NC-3.0 15732 16.5 19.9 14.1 20.8 67.2 16.4 38.7 406
afcbb9791897e1e2f949d9d56ba64164746e0828 CC-BY-NC-4.0 13520 13.2 14.8 15.6 17.2 58.6 14.8 39.8 339
9ab2a3818e6ccefbc6ffdd48df7ecaec25e32e41 CC-BY-NC-ND-1.0 8729 12.7 15.8 14.4 16.4 58.6 13.8 51.0 171
966c97357e3b529e9c8bb8166fbb871c5bc31211 CC-BY-NC-ND-2.0 10074 13.0 16.1 14.7 17.0 59.7 14.3 48.8 206
c659a0e3a5ee8eba94aec903abdef85af353f11f CC-BY-NC-ND-2.5 10176 12.8 15.9 14.6 16.8 59.2 14.2 49.3 206
ad4d3e6d1fb6f89bbd28a44e263a89430b575dfa CC-BY-NC-ND-3.0 14356 16.3 19.7 14.1 20.5 66.8 16.2 39.7 361
68960bdf512ff5219909f932b8a81fdb255b4642 CC-BY-NC-ND-4.0 13350 13.3 14.8 15.7 17.2 58.4 14.8 39.4 338
39b2ef67be9e5b4e743e5269a31ad1691515eede CC-BY-NC-SA-1.0 10228 13.3 16.3 14.2 17.0 59.7 14.2 48.4 211
5800ac2d32e35ace035cdcae693423cd9ff5bb6f CC-BY-NC-SA-2.0 11927 13.3 16.2 14.7 17.1 60.0 14.4 47.0 253
e5f44c2df6b1391d1ddb6efb2db6f90670e4ae67 CC-BY-NC-SA-2.5 12013 13.1 16.0 14.6 16.9 59.6 14.2 47.7 251
a63b7e81e7b9e30df5d253aed1d2991af47992df CC-BY-NC-SA-3.0 17134 16.4 19.7 14.2 20.6 67.0 16.3 38.8 441
887f9a5da675cf681421eab3ac6d61f82cf34971 CC-BY-NC-SA-4.0 14577 13.1 14.7 15.7 17.1 58.6 14.7 40.1 363
e4851120f7e75e55b82a2c007ed98ffc962f5fa9 CC-BY-ND-1.0 8280 12.3 15.5 14.3 16.1 57.9 13.6 52.4 158
f1aa9011714f0f91005b4c9eb839bdb2b4760bad CC-BY-ND-2.0 9228 11.9 14.9 14.5 15.8 56.9 13.5 52.7 175
5f665a8d7ac1b8fbf6b9af6fa5d53cecb05a1bd3 CC-BY-ND-2.5 9330 11.8 14.7 14.4 15.6 56.5 13.4 53.2 175
3fb39a1e46419e83c99e4c9b6731268cbd1591cd CC-BY-ND-3.0 13591 15.8 19.2 14.1 20.0 65.6 15.9 41.2 329
ac747a640273815cf3a431be0afe4ec5620493e3 CC-BY-ND-4.0 12830 13.0 14.4 15.4 16.9 57.6 14.6 40.7 315
dda55573a1a3a80d294b1bb9e1eeb3a6c722968c CC-BY-SA-1.0 9779 13.1 16.1 14.2 16.8 59.1 14.0 49.5 197
9cceb80d865e52462983a441904ef037cf3a4576 CC-BY-SA-2.0 11044 12.5 15.3 14.4 16.2 57.9 13.8 50.2 220
662ca9fce7fed61439fcbc27ca0d6db0885718d9 CC-BY-SA-2.5 11130 12.3 15.0 14.4 16.0 57.5 13.6 50.9 218
4a5bb64814336fb26a9e5d36f22896ce4d66f5e0 CC-BY-SA-3.0 17013 16.4 19.8 14.1 20.5 67.2 16.2 38.9 437
8632363dcc2c9fc44f582b14274259b3a35744b2 CC-BY-SA-4.0 14041 12.9 14.4 15.4 16.8 57.8 14.5 41.4 339

It reflects well on the automated readability metrics that, from 3.0 to 4.0, CC-BY-SA is the most improved (the relevant clause was a hairball paragraph; CC-BY-NC-SA should have improved less, as it gained the compatibility mechanism) and CC-BY-ND is the least improved (it gained express permission for private adaptations).
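
For the curious, here is roughly how a row of the tables above could be computed for a plain text license file. This is a sketch assuming the third-party textstat Python package; the function names follow my understanding of its API and should be treated as assumptions, and exact scores will differ somewhat depending on tokenization.

    # Sketch: compute rough readability metrics for plain text license files.
    # Assumes the third-party "textstat" package (pip install textstat).
    import sys
    import textstat

    def license_metrics(path):
        with open(path, encoding="utf-8") as f:
            text = f.read()
        chars = len(text)
        flesch = textstat.flesch_reading_ease(text)
        return {
            "Characters": chars,
            "Kincaid": textstat.flesch_kincaid_grade(text),
            "ARI": textstat.automated_readability_index(text),
            "Coleman-Liau": textstat.coleman_liau_index(text),
            "Fog": textstat.gunning_fog(text),
            "Lix": textstat.lix(text),
            "SMOG": textstat.smog_index(text),
            "Flesch": flesch,
            # The gross "pain" metric used above: characters divided by the
            # Flesch score, with the score floored at 1 to keep it defined.
            "Chars/(Flesch>=1)": round(chars / max(flesch, 1)),
        }

    for path in sys.argv[1:]:
        print(path, license_metrics(path))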

Next

I leave a list of recommendations (many already mingled in or implied by above) to a future post. But really, just use CC0.

Upgrade to CC-BY(-(NC(-(ND|SA))?|ND|SA))?-4\.0

Monday, November 25th, 2013

Today Creative Commons released version 4.0 of six* of its licenses, with many improvements over version 3.0, after more than two years of work. I’ll write more about those details later. But you should skip right past 4.0 and upgrade to CC’s premier legal product, CC0. This is the case whether you’re looking to adopt a CC license for the first time, or to upgrade from version 1.0, 2.0, 2.1, 2.5, or 3.0.

Let’s review the named conditions present in some or all of the CC 4.0 licenses, and why unconditional CC0 is better.

Don’t forget unmitigated © in the basement.

Attribution (BY). Do not take part in the debasement of attribution, and more broadly, provenance, already useful to readers, communities of practice, and publishers, by making them seem mere objects of copyright license compliance. If attribution is useful, it will be provided. If not, robots will find out. Rarely does anyone comply with the exact legal requirements of the attribution term anyway, and as a licensor, you probably won’t provide the information needed by licensees to easily comply. Plus, the corresponding icon looks like a men’s bathroom sign.

NonCommercial (NC). Sounds nice, but nobody knows what it means. Perhaps this goes some way to explaining why NC licensed works are often used by for-profit entities, including with advertising, while NC licensed works are verboten for many community and non-profit projects, most prominently Wikipedia and other Wikimedia projects. (Because commercial entities know there is very low risk of being sued for non-compliance, and can manage risk, while community projects tend to draw and follow bright lines. Perhaps community projects ought to be able to manage risk, and that they can’t is a demonstration of their relative lack of institutional sophistication…but that’s another topic!)

NoDerivatives (ND). This term has no business being in the “Creative Commons” license suite, but sadly still is. If you don’t want to contribute to a creative commons, don’t. If you’d like to, but think copyright (through withholding permission to share adaptations, i.e., the ND term) will prevent people from misrepresenting you, you’re wrong, committing an act of hate toward free speech, and undermining the potential of voluntary license practice to align with and support an obvious baseline objective for copyright reform: noncommercial sharing and remix should always be legal.

ShareAlike (SA). Also sounds nice, and I am a frequent apologist and sometime advocate for the underlying idea, copyleft. But SA is a weak implementation of copyleft. It isn’t “triggered” by the most common use of CC-licensed material (contextual illustration, not full remix), and it has no regulatory condition not present in non-SA CC licenses (cf. GPL, which requires sharing source for a work, and is usable for any work; if you care about copyleft, tell CC to finish making CC-BY-SA one-way compatible with GPL). And the SA implementation retains the costs of copyleft: blank stares of incomprehension, even from people who have worked in the “open” world for over a decade, and occasionally intense fear and dislike (the balance is a bit different in the software world, but this is my direct experience among non-software putatively open organizations and people); also, compatibility problems. It’s time to take the unsolicited advice often given to incumbents and others fearful of the internet (‘obscurity is a greater threat than piracy’) and apply it: ‘obscurity is a greater threat than proprietarization.’


Upgrade to CC0!

CC0 isn’t perfect, but it is by far the best tool provided by CC. I have zero insight into the future of the CC organization, but I hope it gives ample priority to the public domain, post-4.0 launch.

*CC-BY(-(NC(-(ND|SA))?|ND|SA))?-4\.0 is a regular expression matching all six licenses released today.
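
For the literal-minded, a quick sketch checking that the expression matches exactly those six names and nothing else of interest:

    # Sketch: confirm the regular expression matches the six 4.0 licenses
    # released today and not, e.g., CC0 or the 3.0 licenses.
    import re

    pattern = re.compile(r"CC-BY(-(NC(-(ND|SA))?|ND|SA))?-4\.0")

    released = [
        "CC-BY-4.0", "CC-BY-NC-4.0", "CC-BY-NC-ND-4.0",
        "CC-BY-NC-SA-4.0", "CC-BY-ND-4.0", "CC-BY-SA-4.0",
    ]
    others = ["CC0-1.0", "CC-BY-3.0", "CC-BY-SA-3.0"]

    assert all(pattern.fullmatch(name) for name in released)
    assert not any(pattern.fullmatch(name) for name in others)
    print("matches exactly the six 4.0 licenses")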

Hierarchy of mechanisms for limiting copyright and copyright-like barriers to use of Public Sector Information, or More or Less Universal Government License(s)

Sunday, November 24th, 2013

This sketch is in part motivated by a massive proliferation of copyright and copyright-like licenses for government/public sector information, e.g., sub- and sub-sub-national jurisdiction licenses and sector- and jurisdiction-specific licenses intended to combat license proliferation within a sector within a jurisdiction. Also by longstanding concern about coordination among entities working to limit barriers to use of PSI and knowledge commons governance generally.

Everything following concerns PSI only relative to copyright and copyright-like barriers. There are other pertinent regulations and considerations to follow when publishing or using PSI (e.g., privacy and fraud; as these are pertinent even without copyright, it is silly and unnecessarily complicating to include them in copyright licenses) and other important ways to make PSI more useful technically and politically (e.g., open formats, focusing on PSI that facilitates accountability rather than openwashing).

Eliminate copyright and copyright-like restrictions

No longer barriers to use of PSI, because no longer barriers to use of information. May be modulated down to any general copyright or copyright-like barrier reduction, where the barrier is pertinent to use of PSI. Examples: eliminate sui generis database restrictions where they exist, increase threshold of originality required for information to be subject to copyright restriction, expand exceptions and limitations to copyright restrictions, expand affirmative user rights.

Eliminate copyright and copyright-like restrictions for PSI

For example, works produced by employees of the U.S. federal government are not subject to copyright restrictions in the U.S. Narrower exclusions from copyright restrictions (e.g., of laws, court rulings) are fairly common worldwide. These could be generalized to eliminate copyright and copyright-like restrictions for all PSI, worldwide, and expanded to cover PSI produced by contractors or other non-government but publicly funded entities. PSI itself could be defined expansively to include any information produced with public funding, e.g., research and culture funded by public grants.

“Standard” international licenses for PSI

Public copyright licenses not intended solely for PSI are often used for PSI, and could be used more. CC0 is by far the best such license, but other Creative Commons (CC) and Open Data Commons (ODC) licenses are frequently used. Depending on the extent to which the licenses used leave copyright and copyright-like restrictions in place (e.g., CC0: none; CC-BY-NC-ND: lots, thus considered non-open) and how they are applied (from a legislative mandate covering all PSI to one-off use for individual reports and datasets at the discretion of an agency), this mechanism could have an effect similar to eliminating copyright and copyright-like restrictions for PSI, or almost zero effect.

Universal Government License

Governments at various levels have chosen to make up their own licenses rather than use a standard international license. Some of the better reasons for doing so will be eliminated by the forthcoming version 4.0 of 6 of the CC licenses (though again, CC0 has been the best choice, since 2009, and will remain so). But some of the less good reasons (uncharitable characterization: vanity) can’t be addressed by a standard international license, and furthermore seem to be driving the proliferation of sub-sub-national licenses, down to licenses specific to an individual town.

Ideally this extreme license proliferation trend would terminate with mass implementation of one of the above options, though this seems unlikely in the short term. Maybe yet another standard license would help! The idea of an “open government license” which various governments would have a direct role in creating and stewarding has been casually discussed in the past, particularly several years ago when the current proliferation was just beginning, the CC 4.0 effort had not begun, and CC and ODC were not on the same page. Nobody is particularly incented to make this unwieldy project happen, but nor is it an impossibility, due to the relatively small world of NGOs (such as CC and the Open Knowledge Foundation, of which ODC is a project) and government people who really care and know about public licenses, and the possibility that their collective exhaustion and exasperation over license details, incompatibility, and proliferation could reach a tipping point into collective action. There’s a lot to start from, including the research that went into CC-BY-4.0, and the OGL UK 2.0, which is a pretty good open license.

But why think small? How many other problems could be addressed simultaneously?

  • Defend the traditional meaning of ‘open government’ by calling the license something else, e.g., Universal/Uniform/Unified Government License.
  • Rallying point for the public sector worldwide to commit more firmly and broadly to limiting copyright and copyright-like barriers to use of PSI, more rapidly establishing a global norm, and leading to mandates. The one thing to be said for massive PSI license proliferation could be increased commitment from proliferating jurisdictions to use their own custom licenses (I know of no data on this). A successful UGL would swamp any such increase in local commitment from local vanity licenses through much higher-level expectations and mandates.
  • Make the license work well for software (including being approved by the Open Source Initiative), as:
    • Generically “open” licenses are inevitably used for software, whether the steward apparently intends this (OGL UK 2.0) or does not (CC).
    • The best modern permissive license for software (Apache 2.0) is relatively long and unreadable for what it does, and has a discomfiting name (not nearly as bad as certain pro sports organizations, but still); it ought be superseded.
  • Ensure the license works for other domains, e.g., open hardware, which doesn’t really require domain-specific licenses, is headed down the path of proliferation and incompatibility, and is a domain in which governments have obvious efficiency, regulatory, security, and welfare interests.
  • Foster broader “open innovation community” engagement with government and public policy and vice versa, and more knowledge transfer across OIC domains, on legal instruments at the least.
  • Uniform Public License may be a better name than UGL in some respects (whatever the name, it ought be usable by the public sector, and the general public), but Government may be best overall, a tip of the hat to both the vision within governments that would be necessary to make the license succeed, and to the nature of copyright and copyright-like barriers as government regulatory regimes.

National jurisdiction licenses for PSI

A more likely mechanism for license proliferation deceleration and harm reduction in the near term is for governments within a national jurisdiction to use a single license, and follow various license stewardship and use best practices. Leigh Dodds recently blogged about the problem and highlighted this mechanism in a post titled The Proliferation of Open Government Licences.

Sub-national jurisdiction licenses for PSI

Each province/state and sub-jurisdiction thereof, down to towns and local districts, could use its own vanity license. This appears to be the trend in Canada. It would be possible to push further in this direction with multiple vanity licenses per jurisdiction, e.g., various licenses for various kinds of data, reports, and other materials.

Licenses for each PSI dataset or other work

Each and every government dataset or other publication could come with its own bespoke license. Though these licenses would grant permissions around some copyright and copyright-like restrictions, I suspect their net effect would be to heighten copyright and copyright-like restrictions as a barrier to both the use and publication of PSI, on an increased cost basis alone. This extreme highlights one of the downsides of copyright licenses, even unambiguously open ones — implementing, understanding, and using them can be seen as significant cost centers, creating an additional excuse for not opening materials, and encouraging the small number of people who really understand the mechanisms to be jealous and wary of any other reform.

None

Included for completeness.

Privatization of PSI copyright

Until now, I’ve assumed that copyright and copyright-like restrictions are barriers to use of PSI. But maybe there aren’t enough restrictions, or they aren’t allocated to the right entities, such that maximum value is realized from use of PSI. Control of copyright and copyright-like restrictions in PSI could be auctioned off to entities with the highest ability to extract rents from PSI users. These businesses could be government-owned, with various public-private partnerships in between. This would increase the direct contribution of PSI to GDP, incent the creation and publication of more PSI, ensure PSI is maintained and marketed, reaching citizens that can afford it, and provide a solid business model for Government 2.0, academia, cultural heritage, and all other publicly funded and publicly interested sectors, which would otherwise fail to produce an optimal level of PSI and related materials and innovations.

Do not let any of the above trick you into paying more attention to possible copyright and copyright-like barriers and licenses than actually doing stuff, especially with PSI, especially with “data”, doubly with “government data”.

I agree with Denny Vrandečić’s paradoxical sounding but correct directive:

Data is free. Free the data!

I tried to communicate the same in a chapter of the Data Journalism Handbook, but lacked the slogan.

Data is free. Free the data!

And what is not data? ☻

Addendum: Entirely by coincidence (in response to a European Commission consultation on PSI, which I had already forgotten about), today posts by Timothy Vollmer for the Communia Association and Creative Commons call out the license proliferation problem and endorse public domain as the default for PSI.