
The jobs case for a police state

Saturday, September 13th, 2014

Partly to make up for not blogging on the issue in a while (category), I recommend the Vox story/interview on the case for open borders. If a prediction can be sterile, sanguine, and desperate all at the same time, this is it:

My longer-run prediction is the world will have open borders once it doesn’t make much difference anymore. Once development has happened almost everywhere, and there are virtually no desperate backwaters left, that’s when countries will finally relent and say, “Fine. You can come here if you want,” and then they’ll open the borders, and then there will be very little migration. To me, a big point of open borders is just to fast forward to the world of the future where everyone can enjoy a First-World standard of living rather than making people wait 100 years.

I agree with the desperate part (waiting condemns billions to poverty and tyranny). But a big point of the international apartheid system is to ensure that there will always be desperate backwaters. Even if virtually everyone reaches a level of wealth at which migration declines, there will be disasters. Another big point of restrictions on movement, work, and living, particularly at borders but increasingly everywhere, is to impose police state controls/thug checkpoints on everyone (especially but not only migrants).

My prediction (in US-centric terms, but applicable elsewhere) is that in 100 years ICE will be evaluated to have had a far greater negative impact than the NSA, if they aren’t running things outright by then, that is. I predict this metastasizing apartheid enforcement apparatus will make justice impossible, even in the face of a massive shift in elite opinion (note mini-refutation) — like the drug war, not like marriage equality.

Regarding the title of this post: seriously, what could be politically better than creating jobs for citizens that protect jobs for citizens! It is surprising we don’t already have an ICE dictatorship. We don’t because the cost of regulation is too high. But it is coming down. Monitoring of all movement and required third party (ICE) approval of all economic arrangements are both getting cheaper every day.

Proprietary profitability as a key metric for open access and open source

Thursday, August 7th, 2014

Glyn Moody in Beyond Open Standards and Open Access:

Like open source, open access is definitely winning, even if there is some desperate rearguard action by the publishers, who are trying to protect their astonishing profit margins – typically 30-40%.

No doubt open source and open access have progressed, but the competition maintaining astonishing profit margins contradicts “definitely winning.” For publishing, see Elsevier, £0.8b profit on £2.1b revenue, and others. For software most pertinent to Moody’s post (concerning Open Document Format), see Microsoft’s business division, $16b profit on $24b revenue.

These profits coupled with the slow relative progress of open source and open access give proprietary vendors huge range to not only take “desperate rearguard action” but also to create new products and forms of lock-in with which the commons is continually playing catch-up.

We know what the commons “definitely winning” looks like — Linux (server software) and Wikipedia (encyclopedias) — and it includes proprietary vendor profit margins being crushed, most going out of business, and those remaining transitioning to service lines of business less predicated on privatized censorship.

When libraries begin mass cancellation of toll access journal subscriptions and organizations of all sorts cancel Microsoft, Adobe, and similar software subscriptions, then we can consider whether open access and open source are definitely winning. Until then the answer is definitely no.

As for what’s next for open standards and open access (Moody suggests further ODF mandates, which would be fine), the obvious answer is open source. It’s what allows realization of the promise of open standards, and the cancellation of Microsoft subscriptions. It’s also what’s next for academic publishing and everything else — what is not software will be obsolete — though cancellation of those toll access subscriptions is going to require going back to basics.

Free/open/commons advocates should consider destruction of proprietary competition profitability a key aim and metric of success or lack thereof, for both open products and policy. This metric has several benefits:

  • Indicates relative progress. Any non-moribund project/movement can show apparent progress while remaining blind to different and potentially much greater progress by the competition.
  • Implicates role of knowledge economy and policy in increasing or decreasing equality (of income and wealth, not just access).
  • Hard numbers, data readily available.
  • It’s reasonable to apply a multiplier to destroyed proprietary profits when characterizing gains (so as to include the accompanying decrease in deadweight loss).
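To make the metric concrete, here is a minimal sketch (mine, not from the cited sources) that computes the profit margins quoted above; a real tracker would pull the figures from annual reports rather than hard-coding them.

    # Minimal sketch of the proposed metric: proprietary competitors' profit
    # margins. Figures are the ones cited above (Elsevier in GBP, Microsoft's
    # business division in USD); a real tracker would read them from filings.
    competitors = {
        "Elsevier (academic publishing)": {"profit": 0.8e9, "revenue": 2.1e9},
        "Microsoft business division": {"profit": 16e9, "revenue": 24e9},
    }

    for name, figures in competitors.items():
        margin = figures["profit"] / figures["revenue"]
        print(f"{name}: {margin:.0%} profit margin")

    # "Definitely winning" for the commons would show these margins being
    # crushed over time, not holding at roughly 38% and 67%.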

Open policy for a secure Internet-N-Life

Saturday, June 28th, 2014

In (In)Security in Home Embedded Devices, Jim Gettys says software needs to be maintained for decades, considering where it is being deployed (e.g., embedded in products with multi-decade lifetimes, such as buildings) and the criticality of some of that software, an unpredictable attribute — a product might become unplanned “infrastructure”, for example, if it is widely deployed and other things come to depend on it. Without maintenance, including deployment of updates in the field, software (and thus the systems it is embedded in) becomes increasingly insecure as vulnerabilities are discovered (he cites a honeymoon period enjoyed by new systems).

This need for long-term maintenance and field deployment implies open source software and devices that users can upgrade — maintenance needs to continue beyond the expected life of any product or organization. “Upgrade” can also mean “replace” — perhaps some kinds of products should be more modular, with open designs, so that parts that are themselves embedded systems can be swapped out. (Gettys didn’t mention it, but replacement can be total. Perhaps “planned obsolescence” and “throwaway culture” have some security benefits. I suspect the response would be that many things continue to be used long after they were planned to be obsolete and after most of their production run siblings have been discarded.)

But these practices are currently rare. Product developers do not demand source from chip and other hardware vendors, and thus ship products with “binary blob” hardware drivers for the Linux kernel which cannot be maintained, often based on a kernel years out of date by the time the product ships. The near-monoculture of the Linux kernel across embedded systems increases the security threat. Many problems do not depend on hardware vendor cooperation, ranging from unintentionally or lazily not providing source needed for the rest of the system, to intentionally shipping proprietary software, to intentionally locking down devices to prevent user updates. Product customers do not demand long-term secure devices from product developers. There is little effort to fund commons-oriented embedded development (in contrast with Linux kernel and other systems development for servers, which many big companies fund).

Gettys is focused on embedded software in network devices (e.g., routers) because network access is critical infrastructure much else depends on, including the problem at hand: without network access, many other systems cannot feasibly be updated. He’s working on CeroWrt, a cutting-edge version of the OpenWrt firmware, either of which is several years ahead of what typically ships on routers. A meme Gettys wishes to spread, the earliest instance of which I could find is on cerowrt-devel, with a harsh example coming the next week:

Friends don’t let friends run factory firmware.

Cute. This reminds me of something a friend said in a group discussion that touched on security and embedded in body (or perhaps it was mind embedded in) systems, along the lines of “I wouldn’t run (on) an insecure system.” Or malware would give you a bad trip.

But I’m ambivalent. Most people, and thus most friends, don’t know what factory firmware is. Systems need to be much more secure (for the long term, including all that implies) as shipped. Elite friend advice could help drive demand for better systems, but I doubt “just say no” will help much — its track record for altering mass outcomes, e.g., with respect to proprietary software or formats, seems very poor.

In Q&A someone asked about centralized cloud silos. Gettys doesn’t like them, but said without long-term secure alternatives that can be deployed and maintained by everyone there isn’t much hope. I agree.

You may recognize “open source software and devices that users can upgrade” above as roughly the conditions of GPL-3.0. Gettys mentioned this and noted:

  • It isn’t clear that copyright-based conditions are an effective mechanism for enforcing these requirements. (One reason I say copyleft is a prototype for more appropriate regulation.)
  • Of “life, liberty, and pursuit of happiness”, free software has emphasized the latter two, but nobody realized how important free software would be for living one’s life given the extent to which one interacts with and depends on (often embedded) software. In my experience people have realized this for many years, but it should indeed move to the fore.

Near the end Gettys asked what role industry and government should have in moving toward safer systems (and skip the “home” qualifier in the talk title; these considerations are at least as important for institutions and large-scale infrastructure). One answer might be in open policy. Public, publicly-interested, and otherwise coordinated funders and purchasers need to be convinced there is a problem and that it makes sense for them to demand their resources help shift the market. The Free Software Foundation’s Respects Your Freedom criteria (ignoring the “public relations” item) are a good start on what should be demanded for embedded systems.

Obviously there’s a role for developers too. Gettys asked how to get beyond the near Linux kernel monoculture, mentioning BSD. My ignorant wish is that developers wanting to break the monoculture instead try to build systems using better tools, at least better languages (not that any system will reduce the need for security in depth).

Here’s to a universal, secure, and resilient web and technium. Yes, these features cost. But I’m increasingly convinced that humans underinvest in security (not only computer, and at every level), especially in making sure investments aren’t theater or worse.

“Open policy” is the most promising copyright reform

Thursday, June 26th, 2014

Only a few days remain (June 30 deadline) for applications to the first Institute for Open Leadership. I don’t know anything about it other than what’s at the link, but from what I gather it involves a week-long workshop in the San Francisco area on open policy and ongoing participation in an online community of people promoting open policies in their professional capacities, and it is managed by an expert in the field, Timothy Vollmer. Read an interview with Vollmer (wayback link to spare you the annoying list-gathering clickthrough at the original site, not least because its newsletter is an offender).

The institute and its parent Open Policy Network define:

Open Policy = publicly funded resources are openly licensed resources.

(Openly licensed includes public domain.)

Now, why open policy is the most promising knowledge regulation reform (I wrote “copyright” in the title, but the concept is applicable to mitigating other IP regimes, e.g., patent, and to pro-commons regulation not based on mitigating IP):

  • Most proposed reforms (formalities can serve as an example for each mention following) merely reduce inefficiencies and embarrassments of freedom infringing regimes in ways that don’t favor commons-based production, as is necessary for sustainable good policy. Even if not usually conceptualized as commons-favoring, open policy is strongly biased in that direction, as its mechanism is mandating the terms used for commons-based production: open licenses. Most proposed reforms could be reshaped to be commons-favoring, and thinking of how to do so is a useful exercise (watch this space), but making such reshaping gain traction, as a matter of discourse let alone implementation, is a very long-term project.
  • The concept of open policy is scalable. There’s no reason, as it gains credence, not to push for its expansion to everything receiving public or publicly interested support, including high and very low culture subsidy. At the extreme, the only way to avoid being subject to some open policy mandate would be to create restricted works in an IPer colony, isolated from the rest of humanity.
  • In order to make open policy gain much more credence than it has now, its advocates will be forced to make increasingly sophisticated public policy arguments to support claims that open policy “maximizes public investment” or to shift the object of maximization to freedom and equality. Most proposed reforms, because they would only reduce inefficiency and embarrassment, do not force much sophistication, leaving knowledge regulation discourse rotting in a trough where economists abandoned it over a century ago.
  • Open policy implementation has the potential to destroy the rents of freedom infringing industries. For sustainable good policy it is necessary to both build up the commons as an interest group and diminish interest groups that depend or think they depend on infringing freedom. It is possible for open policy to be gamed (e.g., hybrid journal double dipping). As troubling as that is, it seems to me that open policy flips which side is left desperately clawing for loopholes contrary to the rationale of policy. Most reform proposals at least implicitly take it as a given that public interest is the desperate side.
  • Open policy does not require any fundamental changes to national law or international treaties, meaning it is feasible, now. Hopefully a few reformists have generally grasped the no-brainer concept that a benefit obtained today is more valuable than one obtained in the future, e.g., in 95 years. It also doesn’t mean that open policy is merely a “patch” in contrast to the “fixes” of most proposed reforms — which aren’t fixes anyway, but rather mitigations of the worst inefficiencies and embarrassments of freedom infringing regimes. If open policy is a patch, it is one that helps the body of knowledge regulation to heal, by the mechanisms above (promoting commons production and discourse, diminishing freedom infringing interests).

In my tradition of critical cheering, consider the following Open Policy Network statement:

We have observed that current open policy efforts are decentralized, uncoordinated and insular; there is poor and/or sporadic information sharing.

As illustrated by the absence of the Open Source Definition or any software-centric organizations from the Open Policy Network’s lists of guiding principles and member organizations. Fortunately software is mentioned several times, for example:

If we are going to unleash the power of hundreds of billions of dollars of publicly funded education, research, data, and software, we need broad adoption of open policies.

Hopefully if the Open Policy Network is to become an important venue for moving open policy forward, people who understand software will get involved (by the way, one of the ways “publicly funded” is scalable is that it properly includes procurement, not only wholly funded new resources), e.g., FSFE and April. I know talking about software is scary — because it is powerful and unavoidable. But this makes it a necessity to include in any serious project to reform the knowledge economy and policy. Before long, everything that is not software or suffused with software will be obsolete.

API commons

Thursday, May 29th, 2014

Notes for panel The API Copyright Emergency: What’s Next? today at API Con SF. The “emergency” is the recent decision in Oracle v. Google, which I don’t discuss directly below, though I did riff on the ongoing case last year.

I begin with, and come back to a few times, Creative Commons licenses, as I was on the panel as a “senior fellow” for that organization; but apart from such emphasis and framing, this is more or less what I think. I got about 80% of the below in on the panel, but hopefully it is still worth reading even for attendees.

A few follow-up thoughts after the notes.

Creative Commons licenses, like other public licenses, grant permissions around copyright, but as CC’s statement on copyright reform concludes, licenses “are not a substitute for users’ rights, and CC supports ongoing efforts to reform copyright law to strengthen users’ rights and expand the public domain.” In the context of APIs, default policy should be that independent implementation of an API never require permission from the API’s designer, previous implementer, or other rightsholder.

Without such a default policy of permission-free innovation, interoperability and competition will suffer, and the API community invites late and messy regulation at other levels intended to protect consumers from the resulting lock-in.

Practically, there are things API developers, service providers, and API consumers can do and demand of each other, both to protect the community from a bad turn in default policy, and to go further in creating a commons. But using tools such as those CC provides, and choosing the right tools, requires looking at what an API consists of, including:

  1. API specification
  2. API documentation
  3. API implementations, server
  4. API implementations, client
  5. Material (often “data”) made available via API
  6. API metadata (e.g., as part of an API directory)

(depending on construction, these could all be generated from an annotated implementation, or could each be separate works)

and what restrictions can be pertinent:

  1. Copyright
  2. Patent

(many other issues can arise from providing an API as a service, e.g., privacy, though those are usually not in the range of public licenses and are orthogonal to API “IP”, so I’ll ignore them here)

1-4 are clearly works subject to copyright, while 5 and 6 may or may not be (e.g., hopefully not if purely factual data). Typically only 3 and 4 might be restricted by patents.

Standards bodies typically do their work primarily around 1. Relatively open ones, like the W3C, obtain agreement from all contributors to the standard to permit royalty-free implementation of the standard by anyone, typically including a patent license and permission to prepare and perform derivative works (i.e., copyright, to the extent such permission is necessary). One option you have is to put your API through an existing standards organization. This may be too heavyweight, but may be appropriate if your API is really a multi-stakeholder thing with multiple peer implementations; the W3C now has a lightweight community group venue which might fit. The Open Web Foundation’s agreements allow you to take this approach for your API without involvement of an existing standards body. Lawrence Rosen has/will talk about this.

Another approach is to release your API specification (and necessarily 2-4 to the extent they comprise one work, ideally even if they are separate) under a public copyright license, such as one of the CC licenses, the CC0 public domain dedication, or an open source software license. Currently the most obvious choice is the Apache License 2.0, which grants copyright permission as well as including a patent peace clause. One or more of the CC licenses are sometimes suggested, perhaps because specification and documentation are often one work, and the latter seems like a “creative” work. But keep in mind that CC does not recommend using its licenses for software, and instead recommends using an open source software license (such as Apache): no CC license includes explicit patent permission, and depending on the specific CC license chosen, it may not be compatible with software licenses, contrary to the goal of granting clear permission for independent API implementation, even in the face of a bad policy turn.

One way to go beyond mitigating “API copyrightability” is to publish open source implementations, preferably production, though reference implementations are better than nothing. These implementations would be covered by whatever copyright and patent permissions are granted by the license they are released under — again Apache 2.0 is a good choice, and for software implementation CC licenses should not be used; other software licenses such as [A]GPL might be pertinent depending on business and social goals.

Another way to create a “thick” API commons is to address material made available via APIs, and metadata about APIs. There, CC tools are likely pertinent, e.g., use CC0 for data and metadata to ensure that “facts are free”, as they ought be in spite of other bad policy turns.
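As a concrete illustration of the recommendations above, here is a hypothetical API directory entry (field names and URLs are my invention for illustration, not any existing directory’s schema) that labels each element of an API with suggested terms:

    # Hypothetical API directory entry (field names and URLs invented for
    # illustration; not any existing directory's schema) labeling each element
    # of an API with the terms suggested in these notes.
    api_entry = {
        "name": "example-weather-api",
        "specification": {"url": "https://api.example.org/spec", "license": "Apache-2.0"},
        "documentation": {"url": "https://api.example.org/docs", "license": "Apache-2.0"},
        "server_implementation": {"url": "https://example.org/server", "license": "Apache-2.0"},
        "client_implementation": {"url": "https://example.org/client", "license": "Apache-2.0"},
        "data_license": "CC0-1.0",      # material made available via the API
        "metadata_license": "CC0-1.0",  # this directory entry itself
    }

Using SPDX-style license identifiers and putting the metadata itself under CC0 would make such entries easy to aggregate, including as Linked Open Data.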

To get even thicker, consider the architecture, for lack of a better term, around API development, services, and material accessed and updated via APIs. Just some keywords: Linked Open Data, P2P, federation, Lots of Copies Keep Stuff Safe, collaborative curation.

The other panelists were Pamela Samuelson, Lawrence Rosen, and Annette Hurst, moderated by David Berlind.

I’m fairly familiar with Samuelson’s and Rosen’s work and don’t have comments on what they said on the panel. If you want to read more, I recommend among Samuelson’s papers The Strange Odyssey of Software Interfaces and Intellectual Property Law which shows that the “API copyright emergency” of the panel title is recurrent and intertwined with patent, providing several decades of the pertinent history up to 2008. Contrary to my expectation in the notes above, Rosen didn’t get a chance to talk about the Open Web Foundation agreements, but you can read his 2010 article Implementing Open Standards in Open Source which covers OWF.

Hurst is a lawyer for Orrick representing Oracle in the Oracle v. Google case, so understandably advocated for API copyright, but in the process made several deeply flawed assertions that could have consumed the entire duration of the panel, though Berlind did a good job of keeping the conversation moving forward. Still, I want to mention two high level ones here, my paraphrases and responses:

Without software copyright the software economy would go away. This is refuted by software development not for the purposes of selling licenses (which is the vast majority of it), especially free/open source software development, and services (e.g., API provision, the source of which is often never published, though it ought be, see “going beyond” recommendations above). Yes the software economy would change, with less winner-take-all monopoly and less employment for Intellectual Parasite lawyers. But the software economy would be huge and very competitive. Software is eating the world, remember? One way to make it help rather than pejoratively eat the world is to eject the parasites along for the ride.

Open source can’t work without software copyright. This is refuted by 1) software source sharing before software copyright; 2) preponderance of permissively licensed open source software, in which the terms do not allow suing downstream developers who do not share back; 3) the difficulty of enforcing copyleft licenses which do allow for suing downstream developers who do not share back; 4) the possibility of non-copyright regulation to force sharing of source (indeed I see the charitable understanding of copyleft as prototyping such regulation; for perspective on the Oracle v. Google case from someone with a more purely charitable understanding of copyleft, see Bradley Kuhn); and 5) demand and supply mechanisms for mandating sharing of source (e.g., procurement policies, distribution policies such as Debian’s).

These came up because Hurst seemed to really want the audience to conflate software copyright in general (not at issue in the case, settled in a bad place since the early 1980s) and API copyright specifically. Regarding the latter, another point which could have been made is the extent to which free/open source software has been built around providing alternatives to proprietary software, often API-compatible. If API copyright could prevent compatible implementation without permission of any sort, open source, competition, and innovation would all be severely hampered.

There is a recent site called API Commons, which seems to be an API directory (Programmable Web, which ran the conference, also has one). My general suggestion to both would be to implement and facilitate putting all elements of APIs listed above in my notes in the commons. For example, they could clarify that API metadata they collect is in the public domain, publish it as Linked Open Data, and encourage API developers and providers they catalog to freely license specifications, documentation, implementations, and data, and note such in the directories.

In order to get a flavor for the conference, I listened to yesterday morning’s keynotes, both of which made valiant attempts to connect big picture themes to day to day API development and provision. Allow me to try to make connections back to “API commons”.

Sarah Austin, representing the San Francisco YMCA, pointed out that the conference is near the Tenderloin neighborhood, the poorest in central San Francisco. Austin asked whether kids from the Tenderloin would be able to find jobs in the “API economy” or would be priced out of the area (many tech companies have moved nearby in recent years, Twitter perhaps the best known).

Keith Axline claimed The Universe Is Programmable. We Need an API for Everything, or to some extent, that learning about the universe and how to manipulate it is like programming. Axline’s talk seemed fairly philosophical, but could be made concrete with reference to the Internet of Things, programmable matter, robots, nanobots, software eating the world … much about the world will indeed soon be software (programmable) or obsolete.

Axline’s conclusion was in effect largely about knowledge policy, including mourning energy wasted on IP, and observing that we should figure out public support for science or risk a programmable world dominated by IP. That might be part of it, but keeps the focus on funding, which is just where IP advocates want it — IP is an off-the-balance-sheets, “free” taking. A more direct approach is needed — get the rules of knowledge policy right, put freedom and equality as its top goals, reject freedom infringing regimes, promote commons (but mandating all these as a condition of public and publicly interested funding is a reasonable starting place) — given these objectives and constraints, then argue about market, government, or other failure and funding.

Knowledge policy can’t directly address Austin’s concerns in the Tenderloin, but it does indirectly affect them, and over the long term will tremendously affect them, in the Tenderloin and many other places. As the world accelerates its transition from an industrial to a knowledge dominated economy, will that economy be dominated by monopoly and inequality, or by freedom and equality? Will the former concentrations continue to abet instances of what Jane Jacobs called “catastrophic money” rushing into ill-prepared neighborhoods, or will the latter tendencies spread knowledge, wealth, and opportunity?

Robot Gang Memorial Day

Monday, May 26th, 2014

I find gang violence memorials tacky and sad (not all in this style are gang-related, but apparently the pictured one is), but a comprehensible form of mourning and remembrance.

Though with much higher status participants, and somewhat higher production values, today’s Memorial Day (US) commemorations are similarly tacky and sad. But these big scale gang memorials are far inferior to small scale gang memorials. The latter at least often include exhortations to “stop violence”, gang violence of their sort is universally viewed as illegal, and the participants are understood as rather pathetic victims and victimizers who really ought to have done something better with their lives, products of culture, economy, governance (take your pick) that is broken and clearly ought be fixed — all true also of big scale gang violence.

I suggest that we stop memorializing large scale gang members as heroes before they are fully replaced by robots.

One step forward might be to end U.S. (and elsewhere) exploitation of uneducated teenage soldiers. But perhaps something else would be more feasible or effective. If conflict reduction bonds existed and you held a large stake in them, what would you do?

LWN.net original articles now BY-SA after a week

Thursday, May 1st, 2014

LWN.net started in 1998 as Linux Weekly News. Its coverage is broader now — Free/Open Source Software, and sometimes immediate neighbors, with in-depth coverage of Linux kernel and related system software development — and expert. It’s one of the few publications in which I can read an article about a topic I know in depth and not then question whether its reporting on topics I don’t know in depth is just as bad, because LWN.net’s reporting is good (other readers I know seem to agree). LWN.net’s logo says “Linux info from the source”; I suspect the method implied (reading source, commits, mailing lists, talking to developers) explains the goodness.

I’ve poked fun at paywalls, but for a paywall, LWN.net’s is simple and well done: most new articles are subscriber-only for one week, and subscribers can generate a link to share a paywalled article with non-subscribers. The site does have ads, though disappointingly mostly Google AdSense. It is too bad such an in-depth industry publication doesn’t attract highly specific ads, like trade publications or even well done user group newsletters used to.

It seems that, starting recently, LWN.net releases its original articles under the CC-BY-SA-4.0 license after one week. As of this writing, see a week-old article, a current article, another of more general interest, an archived copy of the author guidelines timestamped February 10 mentioning “possibly under a free license”, and today’s version mentioning CC-BY-SA-4.0. I imagine that given LWN.net’s in-depth reporting, especially on the Linux kernel, some articles might actually be usefully incorporated into educational material, documentation, and Wikipedia articles.

Subscribe, or occasionally read articles older than one week. Either would probably be good for your information diet.

Update 20140507: The ‘current’ articles above are now a week old, and CC-BY-SA-4.0 licensed.

Hyperlocal Optimum

Sunday, April 27th, 2014

I recently wanted to accuse some people of pursuing a hyperlocal optimum. In this case, a heightened perception of the strength of their position, sensed only by themselves. I thought better of it as there were more charitable interpretations of their actions, and a similar pejorative exists for this use case: reality distortion field.

But, I thought, what a great term! Google search/scholar/books shows it being used exactly once so far, 41 days ago by user pholling in a forum about Manchester, England (emphasis added):

To fix all of this is not a trivial bit of work, it will require that city regions and broader regions work together to aim for the overall optimum and not their hyperlocal optimum. London does this to a large extent, but no other place in the England does.

I have no assessment of the quote as I know next to nothing about urban policy in England, but urban policy is surely a field in which the term hyperlocal optimum could be heavily applied. I’m not going to claim any particular urban policy constitutes pursuit of hyperlocal optima (note locality geographic and temporal), and I’ll admit there exist charitable interpretations of many such unnamed policies. But consider that:

  • In the next few decades, over 2 billion more people will live in cities. A simple calculation based on projected ~2050 population (now: 7 billion, 2050: 9 billion) and urbanization (now: .5, 2050: .7) gives 2.8 billion more (now: 3.5 billion, 2050: 6.3 billion); the arithmetic is sketched just after this list.
  • Robots (most obviously in transportation and construction) will reshape cities as profoundly this century as autos did in the last, beginning now.
  • There will be calamities. Hopefully fewer than in the last century, but planning ahead for cities’ role in preventing and surviving such is better than hoping.
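The back-of-envelope arithmetic behind the first bullet, spelled out (the projections are the assumptions stated above, not separately sourced here):

    # Back-of-envelope arithmetic for the first bullet above
    # (assumed projections: population 7e9 -> 9e9, urbanization 0.5 -> 0.7).
    urban_now = 7e9 * 0.5    # ~3.5 billion urban residents today
    urban_2050 = 9e9 * 0.7   # ~6.3 billion projected for 2050
    print(f"additional urban residents by 2050: {(urban_2050 - urban_now) / 1e9:.1f} billion")
    # -> 2.8 billion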

Hyperlocal action is fine, but please think globally and long-term always, and modify actions accordingly to break out of pursuit of mere hyperlocal optima.

I’ve not explicitly defined what makes a local optimum a hyperlocal optimum. Perhaps the difficulty of doing so explains why the term has until now been used only once in the subset of the universe Google has indexed. My first use above implies that “hyper” indicates the local optimum is merely perceived, and perhaps not really even the local optimum. My second use above implies “hyper” denotes something about either relative scale (the global optimum is much, much better) or qualitative difference (the global optimum considers totally different parameters from the ones considered for the hyperlocal optimum). Probably the term hyperlocal optimum has no good use. I may still use it again when I fail to avoid stooping to the pejorative.
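For anyone who wants the optimization metaphor made concrete, a toy sketch (entirely my own illustration, not tied to any of the policies discussed) of how greedy search settles on a nearby, much worse peak:

    # Toy illustration of a local optimum: greedy hill climbing stops at a
    # small nearby peak even though a far higher one exists elsewhere.
    def value(x):
        # two peaks: a small one at x=1 (height 2), a much higher one at x=8 (height 10)
        return 2 * max(0.0, 1 - abs(x - 1)) + 10 * max(0.0, 1 - abs(x - 8))

    def hill_climb(x, step=0.1):
        while max(value(x + step), value(x - step)) > value(x):
            x = x + step if value(x + step) >= value(x - step) else x - step
        return x

    x_stuck = hill_climb(0.0)  # starting near the small peak
    print(f"stuck at x={x_stuck:.1f}, value {value(x_stuck):.1f} (the global peak is worth 10)")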

Many problems of the dominant topic of this blog can be seen as ones of escaping local optima. Joining with the cities topic, individual cities and other entities’ ongoing lock-in to proprietary software is an example of a local optimum that might be escaped through coordination with other cities. I’m not sure when (assuming against the above, that the term has some value) to apply the hyper prefix to such situations (another such is library lock-in to proprietary journal subscription and groveling for proprietary book purchases). Suggestions?

I might avoid commenting on this year’s mayoral election for my locality, Oakland. If any of the candidates seriously talk about any of the above macro challenges and opportunities, I will be pleasantly surprised. I think that my handwaving predictions after the last (2011) election held up pretty well, mostly unfortunately.

Sum of all questions

Saturday, April 19th, 2014

I thoroughly enjoyed memesteader Gordon Mohr’s Quora & Wikipedia: Might one ever bail out the other? Futures of ‘Qworum’ or ‘WiQipedia’, which posits two futures in which the sites respectively decline, mostly due to internal failure — essentially not adequately dealing with spam and unscrupulous behavior in both cases, though the spam and behavior differ for each.

Both futures seem plausible to me, inclusive of the decline and bail out in each. I also take the medium term absolute decline and death of Quora and the steep relative decline of Wikimedia as likely. This relative assessment isn’t a knock on Quora — it and many others waiting in the wings can get big or fail; commons-based projects don’t have much experience in trying to do that (but need to, or must find some other way to maintain long-term competitiveness).

Of course “waiting in the wings” is an understatement: I suspect the decline of both Quora and Wikimedia will be less due to internal failure than to being outcompeted by new entrants. Mohr has long been rumored to be working on one, but I imagine there must be many entrepreneurs dreaming of taking a chunk of Wikipedia traffic. I enjoy the Kill Hollywood request for startups, but Kill Wikipedia seems like a more plausible target for VC-term investment. (My preference is to target proprietary monopolies for destruction through competition, replacing them with commons; long ago I even imagined a financially leveraged/risk-seeking approach, but more feasible ones are still badly needed.)

Go read and enjoy Mohr’s post, take it at least semi-seriously, and reflect on the future. Doing so makes me pine for something which does not yet exist: combinatorial prediction markets for everything.

I hadn’t looked at Quora in some time. I note that it still requires logging in to read, but has added Google — previously Facebook login (or not) was the only choice. There have been at least semi-serious explorations of a Wikimedia general Q&A sister project, but I’m not sure if any of them are listed in project proposals.

Patent reform, parts deficient in commons

Friday, April 18th, 2014

A Five Part Plan for Patent Reform (pdf) by Charles Duan, Director of Patent Reform at Public Knowledge, is simultaneously good and deficient:

  1. Notes theoretical and observed problems with monopoly incentive story underlying patents, mixed empirical results, regulatory cause of strong positive results in one field (pharma), layers of abuse surrounding core in implementation, the existence of many non-monopoly incentives for innovation, conflicts between these and patents … and yet fundamentally accepts the noble origin role of monopoly incentives in protecting apple pie and correlation with some inventions — nevermind causality or counterfactual. Compare text “certainly many inventions through history, such as the light bulb, the airplane, and the photocopier, were invented by small inventors and protected by patents” and its citation (footnote 7, The Myth of the Sole Inventor)!
  2. Discusses commons (Open Innovation Communities) as evidence, and does so substantially better than typical writing, as at least a concept of pro-commons reform is included: “One task for patent reform, then, is to consider adjustments to the patent system that better accommodate these alternate incentives for innovation. The goal of such adjustments is to better encourage these inventors incentivized by factors other than patents, and to ensure that patents do not stand in the way of those inventors.” As usual, commons regimes carved out of property defaults are mentioned (specifically GPL and DPL), but not as prototypes for default policy. Also, “it is important for these decisionmakers to reach out to inventing communities, even those that do not file for patents, and it is important for those communities to reach out to the Patent Office and other decisionmakers.” I think this also holds for “IP scholars” (who of course ought re-imagine themselves as commons scholars) and OIC participants/commoners — let’s talk about what concrete reforms would favor actually existing commons, and put those on the scholarly and policy agendas. A recent idea directly concerning patents ought to start down that long road, but many pertinent reforms may be indirect, favoring commons in other ways so as to change the knowledge economy, which eventually determines what interests dominate.
  3. Innovation is assumed the top goal of policy, tempered only by conflict among incentives to innovate, and need to rein in unscrupulous behavior. No mention of freedom and almost none of equality (Joseph Stiglitz is quoted: “The alternative of awarding prizes would be more efficient and more equitable”), let alone as goals which trump innovation.

These three good/deficient pairs are endemic in intellectual property-focused discourse, e.g., see my recent reviews of IP in a World Without Scarcity and Copyright and Inequality — one of the reasons the latter is so great is that it places equality firmly on the agenda.

A few other notes on A Five Part Plan for Patent Reform:

  • It’s not a plan, rather an exploration of “five key areas in which the patent system is ripe for reform.” The word plan doesn’t even appear in the text. Well worth reading, but don’t expect to find an actionable plan with five parts.
  • Notes that patent trolls existed in the 1800s (individual farmers were bullied to pay royalties for farm implements covered by patents), which is good (too often current discourse assumes intellectual property worked just fine until recently, with conflict caused by changing technology rather than by power and rent seeking), but then: “Analogously, as discussed above, farm technology was widely used in the nineteenth century, and patents on farm technology were hotly contested. Patents on those farm tools were effectively abolished. But that fix to the patent system did not prevent the software patent problems faced today—it ultimately was a Band-Aid rather than a cure. The same would be true of eliminating software patents. The fundamental issue is that the technologies of tomorrow are unknown, so targeting patent reform to one specific field of technology means that the same problems will only arise again in a different technological sector.” Sure, only abolishing all patents is sufficient, but this analogy seriously undersells the benefit of abolishing software patents: agriculture then was in relative decline of importance in the face of industrialization. Now, software is ascendant, and any technology of tomorrow that matters will involve software.
  • Focuses on FRAND (fair, reasonable and non-discriminatory) licensing for standards. But RF (royalty free) licensing is required for any standard in which commons-based projects are first class participants (e.g., free/open source software and codec patents). No doubt unscrupulous behavior around FRAND and standards is a problem, but the solution is RF for standards.
  • From the Public Knowledge site, reading the paper requires first supplying an email address to a third party (gumroad). Annoying, but on par with PK’s newsletter practices (one of the many favoring tracking users at the cost of usefulness to users). Better, the paper is released under CC-BY-SA, so I uploaded a copy to the Internet Archive. Best, Duan has published the paper’s LaTeX source.