Post Oakland

Somehow relating to, and usually written from, Oakland.

Libby for Oakland mayor, Len for auditor

Saturday, November 1st, 2014

How I’m voting Tuesday for Oakland Mayor: #1 Libby Schaaf, #2 Rebecca Kaplan; Oakland Auditor: Len Raphael; California proposition 46: No, 47: Yes.

Mayor

In 2010 I ranked Kaplan first among a very weak field. Jean Quan won; my recommendations/predictions for her tenure still seem pretty reasonable.

I hope Schaaf wins this time for roughly four reasons:

  • Schaaf’s position on public safety seems nearest to my plea: more quality and quantity, with priority to the former. That seems approximately the position of the other two likely winners (Kaplan and Quan); the real question is delivery. Progress has been way too slow under Quan, starting with many mistakes. With Kaplan I can’t tell where bluster about “leadership” ends and effectiveness begins, but I suspect it is mostly bluster. The other candidates with a nonzero chance of winning strike me as prioritizing quantity (Bryan Parker, Courtney Ruby, and especially Joe Tuman) or quality exclusively (Dan Siegel).
  • Schaaf’s approved plan for adjustable/benefit parking pricing in Montclair. Flexible pricing for parking is always a tough sell, but absolutely the right thing to do, and one of the most impactful and beneficial things a city can implement. I hope that Oakland catches up to and preferably leapfrogs SFPark and suspect that is most likely if Schaaf wins. Tuman is the worst on this issue, wanting to increase gratis parking.
  • Schaaf has done more work on ‘open’ government than any of the other candidates; my expectations for progress in that area would also be higher if Schaaf wins.
  • The other candidates have a tendency to come off as blowhards (especially Kaplan), incoherent (especially Quan), or some combination. For the field, Schaaf is on the low end of both of these negative characteristics. I expect fewer cringe-inducing moments from a Schaaf mayorship, a slightly good thing even if it has no correlation with effectiveness.

There is one issue on which Kaplan, Quan, and Schaaf seem to hold approximately the same position, though Kaplan might be better: building a substantial amount of new housing. Kaplan has said that Oakland could have 100k more people. Not nearly enough, but a large number that I’m pleasantly surprised I have not seen criticized.

I don’t think any of the candidates take a reasonable position (kick them out) on the embarrassment of professional sports teams in Oakland. I have not investigated their positions closely, to avoid triggering a disgust reaction, but my gloss: Parker and Tuman seem most likely to beggar the city to sports team owners, Kaplan is extremely eager to claim credit for keeping sports teams, city hall has been plastered with cheesy corporate sports team banners under Quan, and Siegel is possibly the least likely to totally sell out to team owners.

Lots more about the Oakland mayor contest at OaklandWiki.

An issue I don’t think any candidate has addressed.

Auditor

Now for city auditor.

There are only two candidates for Oakland auditor. I’ll vote for Len Raphael, as I did when he ran for councilor in my district in 2012. Raphael would do politically uncomfortable audits. Brenda Roberts would be business as usual. Especially for a city as poorly governed as Oakland, the former is necessary.

There hasn’t been much coverage of this contest, but a recent article in the East Bay Express seems like a neutral summary. Read more at OaklandWiki.

California Propositions

Ending the insane drug war continues to be the sure thing most governments could do to increase local and global peace and justice. There’s one California state proposition that would expand the drug war, 46: Medical Malpractice Lawsuits Cap and Drug Testing of Doctors (vote against), and one that would mitigate it, 47: Reduced Penalties for Some Crimes Initiative (vote for).

Edit Oakland wiki events

Wednesday, July 9th, 2014

Saturday, July 12, there’s a big open streets event in my obscure flats neighborhood where Oakland, Emeryville, and Berkeley meet. A small stretch of San Pablo Avenue will be closed to cars (sadly not only human-driven cars, which would momentarily meet my suggestion). E’ville Eye has a comprehensive post about the event and its origins.

There will be an Oakland Urban Paths walk in the neighborhood during the event, during which obscurities will be related. Usually these walks are in locations with more obvious scenery (hills/stairs) and historical landmarks; I’m looking forward to seeing how they address Golden Gate. Last month they walked between West Oakland and downtown, a historic and potentially beautiful route that currently crosses 980 twice — edit it out!

Monday, July 14 18:00-19:30 there’s a follow-on event at the Golden Gate Branch Library — an OaklandWiki edit party. I haven’t edited OaklandWiki much yet, but I like the concept. It is one of many LocalWikis, which relative to MediaWiki and Wikipedia have very few features or rules. This ought to greatly lower the barrier to many more people contributing information pertinent to their local situation; perhaps someone is researching that? I’ve used OaklandWiki to look up sources for Wikipedia articles related to Oakland and have noticed several free images uploaded to OaklandWiki that would be useful on Wikipedia.

Saturday, July 19 11:00-16:00 there’s a Wikipedia edit event at Impact Hub in Oakland and online: WikiProject Open Barn Raising 2014, which aims to improve Wikipedia articles about open education — a very broad and somewhat recursive topic (Wikipedia is an “open educational resource”, though the singular doesn’t do it justice; perhaps the open educational resource, but that would be an overstatement). If you’re interested in OER, Open Access, open policy and related tools and organizations, or would like to learn about those things and about editing Wikipedia, please participate!

Tangentially, OpenHatch (my endorsement) got a nice writeup of its Open Source Comes to Campus events at WIRED. I view these as conceptually similar to introduction to Wiki[pedia] editing events — all aim to create a welcoming space for newcomers to dive into participating in commons-based peer production — good for learning, careers, communities, and society.

Hyperlocal Optimum

Sunday, April 27th, 2014

I recently wanted to accuse some people of pursuing a hyperlocal optimum. In this case, a heightened perception of the strength of their position, sensed only by themselves. I thought better of it as there were more charitable interpretations of their actions, and a similar pejorative exists for this use case: reality distortion field.

But, I thought, what a great term! Google search/scholar/books shows it being used exactly once so far, 41 days ago by user pholling in a forum about Manchester, England (emphasis added):

To fix all of this is not a trivial bit of work, it will require that city regions and broader regions work together to aim for the overall optimum and not their hyperlocal optimum. London does this to a large extent, but no other place in the England does.

I have no assessment of the quote as I know next to nothing about urban policy in England, but urban policy is surely a field in which the term hyperlocal optimum could be heavily applied. I’m not going to claim any particular urban policy constitutes pursuit of hyperlocal optima (note: locality both geographic and temporal), and I’ll admit there exist charitable interpretations of many such unnamed policies. But consider that:

  • In the next few decades, over 2 billion more people will live in cities. Simple calculation based on projected ~2050 population (now: 7 billion, 2050: 9 billion) and urbanization (now: .5, 2050: .7) gives 2.8 billion more (now: 3.5 billion, 2050: 6.3 billion); see the sketch after this list.
  • Robots (most obviously in transportation and construction) will reshape cities as profoundly this century as autos did in the last, beginning now.
  • There will be calamities. Hopefully fewer than in the last century, but planning ahead for cities’ role in preventing and surviving such is better than hoping.
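To make the arithmetic in the first bullet explicit, here is a minimal sketch using the same round figures quoted above (exact projections vary by source):

```python
# Rough urban-population growth arithmetic, using the round figures above.
# These are ballpark projections, not precise demographic data.

now_population = 7.0e9      # current world population, roughly
now_urban_share = 0.5       # about half of people live in cities now

later_population = 9.0e9    # projected ~2050 world population
later_urban_share = 0.7     # projected ~2050 urbanization rate

now_urban = now_population * now_urban_share        # ~3.5 billion
later_urban = later_population * later_urban_share  # ~6.3 billion

print(f"urban now:   {now_urban / 1e9:.1f} billion")
print(f"urban ~2050: {later_urban / 1e9:.1f} billion")
print(f"increase:    {(later_urban - now_urban) / 1e9:.1f} billion")  # ~2.8 billion
```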

Hyperlocal action is fine, but please think globally and long-term always, and modify actions accordingly to break out of pursuit of mere hyperlocal optima.

I’ve not explicitly defined what makes a local optimum a hyperlocal optimum. Perhaps the difficulty of doing so explains why the term has until now been used only once in the subset of the universe Google has indexed. My first use above implies that “hyper” indicates the local optimum is perceived, but perhaps not really even the local optimum. My second use above implies “hyper” denotes something about either relative scale (the global optimum is much, much better) or qualitative difference (the global optimum considers totally different parameters from the ones considered for the hyperlocal optimum). Probably the term hyperlocal optimum has no good use. I may still use it again when I fail to avoid stooping to the pejorative.

Many problems of the dominant topic of this blog can be seen as ones of escaping local optima. Joining with the cities topic, individual cities and other entities’ ongoing lock-in to proprietary software is an example of a local optimum that might be escaped through coordination with other cities. I’m not sure when (assuming against the above, that the term has some value) to apply the hyper prefix to such situations (another such is library lock-in to proprietary journal subscription and groveling for proprietary book purchases). Suggestions?

I might avoid commenting on this year’s mayoral election for my locality, Oakland. If any of the candidates seriously talk about any of the above macro challenges and opportunities, I will be pleasantly surprised. I think that my handwaving predictions after the last (2011) election held up pretty well, mostly unfortunately.

Gov[ernance]Lab impressions

Friday, March 7th, 2014

First, two excerpts of my previous posts to explain my rationale for this one. 10 months ago:

I wonder the extent to which reform of any institution, dominant or otherwise, away from capture and enclosure, toward the benefit and participation of all its constituents, might be characterized as commoning?

Whatever the scope of commoning, we don’t know how to do it very well. How to provision and govern resources, even knowledge, without exclusivity and control, can boggle the mind. I suspect there is tremendous room to increase the freedom and equality of all humans through learning-by-doing (and researching) more activities in a commons-orientated way. One might say our lack of knowledge about the commons is a tragedy.

26 months ago:

Other than envious destruction of power (the relevant definition and causes of which being tenuous, making effective action much harder) and gradual construction of alternatives, how can one be a democrat? I suspect more accurate information and more randomness are important — I’ll sometimes express this very specifically as enthusiasm for futarchy and sortition — but I’m also interested in whatever small increases in accurate information and randomness might be feasible, at every scale and granularity — global governance to small organizations, event probabilities to empirically validated practices.

I read about the Governance Lab @ NYU (GovLab) in a forward of a press release:

Combining empirical research with real-world experiments, the Research Network will study what happens when governments and institutions open themselves to diverse participation, pursue collaborative problem-solving, and seek input and expertise from a range of people.

That sounded interesting, perhaps not deceivingly — as I browsed the site, open tabs accumulated. Notes on some of those follow.

GovLab’s hypothesis:

When institutions open themselves to diverse participation and collaborative problem solving, they become more effective and the decisions they make are more legitimate.

I like this coupling of effectiveness and legitimacy. Another way of saying politics isn’t about policy is that governance isn’t about effectiveness, but about legitimizing power. I used to scoff at the concept of legitimacy, and my mind still boggles at arrangements passing as “legitimate” that enable mass murder, torture, and incarceration. But our arrangements are incredibly path dependent and hard to improve; now I try to charitably consider legitimacy a very useful shorthand for arrangements that have some widely understood and accepted level of effectiveness. Somewhat less charitably: at least they’ve survived, and one can do a lot worse than copying survivors. Arrangements based on open and diverse participation and collaborative problem solving are hard to legitimate: not only do they undermine what legitimacy is often really about, it is hard to see how they can work in theory or practice, relative to hierarchical command and control. Explicitly tackling effectiveness and legitimacy separately and together might be more useful than assuming one implies the other, or ignoring one of them. Refutation of the hypothesis would also be useful: many people could refocus on increasing the effectiveness and legitimacy of hierarchical, closed systems.

If We Only Knew:

What are the essential questions that if answered could help accelerate the transformation of how we solve public problems and provide for public goods?

The list of questions isn’t that impressive, but not bad either. The idea that such a list should be articulated is great. Every project ought to maintain such a list of essential questions pertinent to the project’s ends!

Proposal 13 for ICANN: Provide an Adjudication Function by Establishing “Citizen” Juries (emphasis in original):

As one means to enhance accountability – through greater engagement with the global public during decision-making and through increased oversight of ICANN officials after the fact – ICANN could pilot the use of randomly assigned small public groups of individuals to whom staff and volunteer officials would be required to report over a given time period (i.e. “citizen” juries). The Panel proposes citizen juries rather than a court system, namely because these juries are lightweight, highly democratic and require limited bureaucracy. It is not to the exclusion of other proposals for adjudicatory mechanisms.

Anyone interested in random selection and juries has to be at least a little interesting, and on the right track. Or so I’ve thought since hearing about the idea of science courts and whatever my first encounter with sortition advocacy was (forgotten, but see most recent), both long ago.
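For concreteness, sortition in its simplest form is just uniform random sampling from a roster of eligible people; a toy sketch (the roster, panel size, and seed are made up for illustration, not drawn from the ICANN proposal):

```python
import random

# Toy sortition: pick a small "citizen jury" uniformly at random from a
# roster of eligible participants. Roster and panel size are invented here.

roster = [f"participant-{i:03d}" for i in range(1, 201)]  # 200 eligible people
PANEL_SIZE = 12

rng = random.Random(2014)  # fixed seed so the draw is reproducible/auditable
panel = rng.sample(roster, PANEL_SIZE)

print("Selected jury:")
for member in sorted(panel):
    print(" ", member)
```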

Quote in a quote:

“The largest factor in predicting group intelligence was the equality of conversational turn-taking.”

What does that say about:

  • Mailing lists and similar fora used by projects and organizations, often dominated by loudmouths (to say nothing of meetings dominated by high-status talkers);
  • Mass media, including social media dominated by power law winners?

Surely it isn’t pretty for the intelligence of relevant groups. But perhaps it provides impetus to actually implement measures often discussed when a forum gets out of control (e.g., with volume or flamewars), such as automated throttling, among many other things. On the bright side, there could be lots of low hanging fruit. On the dim side, I’m surely making extrapolations (the second bullet especially) unsupported by research I haven’t read!
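As a toy illustration of the kind of throttling measure I have in mind (the daily budget and hold-for-digest policy are invented for this sketch, not a description of any real mailing list software):

```python
from collections import defaultdict

# Toy per-participant throttle for a mailing list or forum: once someone
# exceeds a daily message budget, further messages are held for the next
# digest instead of being delivered immediately.

DAILY_BUDGET = 3
sent_today = defaultdict(int)
held_for_digest = []

def post(author, message):
    """Deliver a message immediately, or hold it once the author is over budget."""
    if sent_today[author] >= DAILY_BUDGET:
        held_for_digest.append((author, message))
        return "held"
    sent_today[author] += 1
    return "delivered"

# Example: a loudmouth's fourth and fifth messages of the day get held,
# while a quieter participant's single message goes straight through.
for i in range(5):
    print("loudmouth:", post("loudmouth", f"message {i + 1}"))
print("quieter person:", post("quieter", "one message"))
```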

Coordinating the Commons: Diversity & Dynamics in Open Collaborations, excerpt from a dissertation:

Learning from Wikipedia’s successes and failures can help researchers and designers understand how to support open collaborations in other domains — such as Free/Libre Open Source Software, Citizen Science, and Citizen Journalism. […] To inquire further, I have designed a new editor peer support space, the Wikipedia Teahouse, based on the findings from my empirical studies. The Teahouse is a volunteer-driven project that provides a welcoming and engaging environment in which new editors can learn how to be productive members of the Wikipedia community, with the goal of increasing the number and diversity of newcomers who go on to make substantial contributions to Wikipedia.

Interesting for a few reasons:

  • I like the title, cf. commons coordination (though I was primarily thinking of inter-project/movement coordination);
  • OpenHatchy;
  • I like the further inquiry’s usefulness for research and the researched community;
  • Improving the effectiveness of mass collaboration is important, including for its policy effects.

Back to the press release:

Support for the Network from Google.org will be used to build technology platforms to solve problems more openly and to run agile, real-world, empirical experiments with institutional partners such as governments and NGOs to discover what can enhance collaboration and decision-making in the public interest.

I hope those technology platforms will be open to audit and improvement by the public, i.e., free/open source software. GovLab’s site being under an open license (CC-BY-SA) could be a small positive indicator (perhaps not rising to the level of an essential question for anyone, but I do wonder how release and use of “content” or “data” under an open license correlates with release and use of open source software, if there’s causality in either direction, and if there could be interventions that would usefully reinforce any such).

I’m glad that NGOs are a target. Seems it ought to be easier to adopt and spread governance innovation among NGOs (and businesses) than among governments, if only because there’s more turnover. But I’m not impressed. I imagine this could be due, among other things, to my ignorance: perhaps over a reasonable time period non-state governance has improved more rapidly than state governance, or to non-state governance being even less about effectiveness and more about power than is state governance, or to governance being really unimportant for survival, thus a random walk.

Something related I’ll never get around to blogging separately: the two-year-old New Ambiguity of ‘Open Government’ (summary), concerning the danger of allowing the term to denote a government that publishes data, even merely politically insensitive data around service provision, rather than politically sensitive transparency and ability to demand accountability. I agree about the danger. The authors recommend maintaining distinctions between accountability, service provision, and adaptability of data. I find these distinctions aren’t often made explicit, and perhaps they shouldn’t be: it’d be a pain. But on the activist side, I think most really are pushing for politically sensitive transparency (and some focused on data about service provision might fairly argue such is often deeply political); certainly none want open data to be a means of openwashing. For one data point, I recommend the Oakland chapter of Beyond Transparency. Finally, Stop Secret Contracts seems like a new campaign entirely oriented toward politically sensitive transparency and accountability rather than data release. I hope they get beyond petitions, but I signed.

NFL IP

Sunday, October 6th, 2013

How the NFL Fleeces Taxpayers by Gregg Easterbrook is a fine article, adding to the not nearly large enough pile of articles criticizing the U.S. professional sports civic extortion racket. With a bonus explicit connection with copy regulation. I’ll quote just the directly relevant paragraphs:

Too often, NFL owners can, in fact, get away with anything. In financial terms, the most important way they do so is by creating game images in publicly funded stadiums, broadcasting the images over public airwaves, and then keeping all the money they receive as a result. Football fans know the warning intoned during each NFL contest: that use of the game’s images “without the NFL’s consent” is prohibited. Under copyright law, entertainment created in publicly funded stadiums is private property.

When, for example, Fox broadcasts a Tampa Bay Buccaneers game from Raymond James Stadium, built entirely at the public’s expense, it has purchased the right to do so from the NFL. In a typical arrangement, taxpayers provide most or all of the funds to build an NFL stadium. The team pays the local stadium authority a modest rent, retaining the exclusive right to license images on game days. The team then sells the right to air the games. Finally, the NFL asserts a copyright over what is broadcast. No federal or state law prevents images generated in facilities built at public expense from being privatized in this manner.

Baseball, basketball, ice hockey, and other sports also benefit from this same process. But the fact that others take advantage of the public too is no justification. The NFL’s sweetheart deal is by far the most valuable: This year, CBS, DirecTV, ESPN, Fox, NBC, and Verizon will pay the NFL about $4 billion for the rights to broadcast its games. Next year, that figure will rise to more than $6 billion. Because football is so popular, its broadcast fees would be high no matter how the financial details were structured. The fact that game images created in places built and operated at public expense can be privatized by the NFL inflates the amounts kept by NFL owners, executives, coaches, and players, while driving up the cable fees paid by people who may not even care to watch the games.

Easterbrook’s idea for reform also involves copy regulation (emphasis added):

The NFL’s nonprofit status should be revoked. And lawmakers—ideally in Congress, to level the national playing field, as it were—should require that television images created in publicly funded sports facilities cannot be privatized. The devil would be in the details of any such action. But Congress regulates health care, airspace, and other far-more-complex aspects of contemporary life; it can crack the whip on the NFL.

If football images created in places funded by taxpayers became public domain, the league would respond by paying the true cost of future stadiums—while negotiating to repay construction subsidies already received. To do otherwise would mean the loss of billions in television-rights fees. Pro football would remain just as exciting and popular, but would no longer take advantage of average people.

This idea would have many loopholes (team owners are excellent at extracting public subsidies even for “privately financed” stadiums), but would be a step forward. It is good to see the principle of “public funding means public domain” applied in new domains (it is as yet a mostly unrealized, but accepted by many activists, goal for domains such as public sector information, cultural heritage, and academic publication).

While on the topic, another mostly good recent article is Death of a sports town: What does a city lose when its pro teams leave? Oakland just might find out. Two caveats. A questionable story about a kid who sees a football player turned police officer as a role model. Any reliance on such a coincidence for role models shows just how badly Oakland and many other cities are policed — residents should be demanding performance and compliance from police such that most officers are obvious role models for youth. The article also repeats the specious claim that “pro sports are the city’s plumb line, cutting across class and race and elevation.”

While on that claim, Doug Whitfield republished my article (original), with commentary on top:

I’m going to try something new today. Over at his blog, Mike Linksvayer dedicates his posts to the public domain. That means I don’t have to give attribution to his work, but obviously I’m doing so. I think he’s wrong that art brings all classes and cultures together. How many “red necks” or “thugs” do you see at the opera? How many women wearing Prada do you see enjoying the finer arts of graffiti or break-dancing? I also think he’s wrong about groceries. There are plenty of people that can’t afford to shop at Whole Foods (or choose not to because of their anti-union policies).

But that’s not the point. The point is that we as sports-enthusiasts need to highlight amateur athletics and player-owned and supporter-owned clubs to combat these stereotypes about athletics. Not all athletics are bad.

It is worth thinking about how sports can destroy communities and relationships though, even if you don’t think it’s happening in your life or even if the positives outweigh the negatives. Either way, please enjoy what is probably a different view than your own.

Whitfield is wrong about art and groceries. Yes, various forms and genres have fans concentrated in various demographics. But there are also huge and increasing crossovers, especially when it comes to popular art. It’s acceptable and unsurprising for anyone to be a fan of anything. With regard to groceries, I know plenty of wealthy people who shop at Wal-Mart (or locally, Grocery Outlet) and plenty of poor people who shop at Whole Foods (or Berkeley Bowl), and even more who shop at all of them. Note the trend in both culture and shopping is exactly the opposite of stadium attendance — increased mixing vs. increased stratification.

Whitfield is right about the point. Athletics is good. How can arrangements that neither destroy communities nor increase inequality compete with the extortion racket?

Whitfield also republished a shorter article of mine on pro sports civic extortion (original), and, on another of his blogs, a post on the federated social web (original). I appreciate the experiment, which the latter is a tiny bit relevant to, mentioning that blog technology (and culture) failed to compete with “social” silos, or failed to form the basis of the “social web”, depending on whether your glass is 90% empty or 10% full. One of the things blogs generally failed to compete on is “sharing” links, sometimes with brief commentary. One can do that with a blog of course, and people do, but it isn’t central to blogging.

Detroyalty

Tuesday, July 23rd, 2013
Crown jewels.

I’ve never been to Detroit, nor Michigan, unless you count transferring at the airport in the suburbs a couple times.

That surely qualifies me to come up with bandages for Detroit’s woes.

Establish a hereditary monarchy. It’ll boost tourism and non-crime media coverage. Who? A lottery. Ticket sales will salve financial problems. There are some castle-like buildings available.

But a royal family is just the band. For the ages, turn the whole city into a museum. The top floor of the City Museum in Saint Louis has an intriguing collection of building adornments saved from demolitions in that city (which by the way lost 63% of its population from 1950-2010; Detroit lost only 61%). Detroit could improve on that by making the whole city a museum, with the royal castle and other estates and jewels as the main attractions. I expected to eventually immodestly propose that at least Jerusalem, probably all of Israel-Palestine, possibly a greater Holy Land encompassing the sites of major monotheistic religions (Utah would have to be an exclave/branch) be designated a museum to the worship of vengeful conceptions of god and the achievement of relative world peace, but hey, Detroit won.

I’ve already provided a complete set of bandages for Detroit, but in the spirit of folks proposing various regulatory holidays for Detroit, here’s a complimentary bonus that complements the above: prospectively eliminate all professional sports team liability for player injuries, suicides, and any other outcome, for 99 years. This will establish a sustainable revenue stream for the royal city/family/museum for a few generations as visitors pay handsomely to witness the NFL®-style American football they remember before it was driven to bankruptcy and banned.

Off with the crowns on their heads!

88% of the US urban population is in NYC

Tuesday, July 9th, 2013

The greatest concentration of the highest densities is in New York, which has 88 percent of the national population living at more than 25,000 per square mile (approximately 10,000 per square kilometer). Los Angeles ranked second at 3.5 percent and San Francisco ranks third at 3.2 percent (Figure 4).

This explains why everyplace in the US other than New York City feels a bit like a rural outpost.

No one, however, rationally believes that densities approximating anything 25,000 per square mile or above will occur, no matter how radical urban plans become.

The writer, Wendell Cox, must mean in the US, as far higher densities are being built elsewhere.

But why shouldn’t there be at least one other real city in the US? Before discarding as an irrational thought, consider how it could happen:

  1. Massive densification of an existing near-city. This does seem rather unlikely. As I’ve noted before, the population of San Francisco and Oakland would have to quadruple to be as dense as Manhattan and Brooklyn (see the rough numbers sketched after this list). Even with likely continued semi-dense infill development, and plausible recovery of lots of space for people via freeway demolition and robot cars, they would continue to be semi-urban.
  2. Massively dense near-greenfield (probably in an existing metro area) development. I gather this is happening all over China, but to happen in the US costs would have to go way down or demand unexpectedly go way up. The first could well occur through robot and other construction technology improvements, the second is not likely but ought to occur through the destruction of international apartheid.
  3. Mix of the first two: increased demand and decreased construction costs and space dedicated to cars allow at least one US city that isn’t NYC to do a huge amount of really dense infill development.
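A rough check on the “quadruple” claim in item 1, using approximate 2010-census populations and land areas (the exact figures are assumptions for illustration; the conclusion isn’t sensitive to small errors in them):

```python
# Rough density comparison: SF + Oakland vs. Manhattan + Brooklyn.
# Populations and land areas are approximate 2010-era figures, assumed
# here for illustration; densities are people per square mile.

places = {
    # name: (population, land area in square miles)
    "San Francisco": (805_000, 47),
    "Oakland": (391_000, 56),
    "Manhattan": (1_586_000, 23),
    "Brooklyn": (2_505_000, 71),
}

def combined_density(names):
    pop = sum(places[n][0] for n in names)
    area = sum(places[n][1] for n in names)
    return pop / area

west = combined_density(["San Francisco", "Oakland"])
east = combined_density(["Manhattan", "Brooklyn"])

print(f"SF + Oakland:         {west:,.0f} people/sq mi")
print(f"Manhattan + Brooklyn: {east:,.0f} people/sq mi")
print(f"ratio:                {east / west:.1f}x")  # ~3.7x with these figures, i.e. roughly quadruple
```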

If there were to be a dense new city within an existing US metro area, where is most likely?

Which US city is the best candidate for achieving the third, mixed scenario?

(I very selectively quoted from the Cox post, which mostly focuses on 10,000 per square mile density. There are lots of comments on the post at Urbanophile, including those stating the obvious that 10k is not very dense at all.)

Open Knowledge Foundation

Wednesday, February 13th, 2013

I used to privately poke fun at the Open Knowledge Foundation for what seemed like a never-ending stream of half-baked projects (and domains, websites, lists, etc). I was wrong.

(I have also criticized OKF’s creation of a database-specific copyleft license, but recognize its existence is mostly Creative Commons’ fault, just as I criticize some of Creative Commons’ licenses but recognize that their existence is mostly due to a lack of vision on the part of free software activists.)

Some of those projects have become truly impressive (eg the Public Domain Review and CKAN, the latter being a “data portal” deployed by numerous governments in direct competition with proprietary “solutions”; hopefully my local government will eventually adopt the instance OpenOakland has set up). Some projects once deemed important seem relatively stagnant, but were way ahead of their time, if only because the non-software free/open universe painfully lags software (eg, KnowledgeForge). I haven’t kept track of most OKF projects, but whichever ones haven’t succeeded wildly don’t seem to have caused overall problems.

Also, in the past couple years, OKF has sprouted local groups around the world.

Why has the OKF succeeded, despite what seemed to me for a time chaotic behavior?

  • It knows what it is doing. Not necessarily in terms of having a solid plan for every project it starts, but in the more fundamental sense of knowing what it is trying to accomplish, grounded by its own definition of what open knowledge is (unsurprisingly it is derived from the Open Source Definition). I’ve been on the advisory council for that definition for most of its existence, and this year I’m its chair. I wrote a post for the OKF blog today reiterating the foundational nature of the definition and its importance to the success of OKF and the many “open” movements in various fields.
  • It has been a lean organization, structured to be able to easily expand and contract in terms of paid workers, allowing it to pursue on-mission projects rather than be dominated by permanent institutional fundraising.
  • It seems to have mostly brought already committed open activists/doers into the organization and its projects.
  • The network (eg local groups) seems to have grown fairly organically, rather than from a top-down vision to create an umbrella that all would attach themselves to — something I would view with great skepticism.

OKF is far from perfect (in particular I think it is too detached from free/open source software, to the detriment of open data and reducing my confidence it will continue to stay on a fully Open course — through action and recruitment; one of their more ironic practices at this moment is the Google map at the top of their local groups page [Update: already fixed, see comments]). But it is an excellent organization, at this point probably the single best connection to all things Open, irrespective of field or geography.

Check them out online, join or start a local group, and if you’re interested in the minutiae of whether particular licenses for intended-to-be-open culture/data/education/government/research works are actually open, help me out with OKF’s OpenDefinition.org project.

Open Data nuance

Sunday, October 7th, 2012

I’m very mildly annoyed with some discussion of “open data”, in part where it is an amorphous thing for which expectations must be managed, value found, and sustainable business models, perhaps marketplaces, invented, all with an abstract and tangential relationship to software, or “IT”.

All of this was evident at a recent Open Knowledge Foundation meetup at the Wikimedia Foundation offices — but perhaps only evident to me, and I do not really intend to criticize anyone there. Their projects are all great. Nonetheless, I think very general discussion about open data tends to be very suboptimal, even among experts. Perhaps this just means general discussion is suboptimal, except as an excuse for socializing. But I am more comfortable enumerating peeves than I am socializing:

  • “Open” and “data” should sometimes be considered separately. “Open” (as in anyone can use for any purpose, as opposed to facing possible legal threat from copyright, database, patent and other “owners”, even their own governments, and their enforcement apparatuses) is only an expensive policy choice if pursued at too low a level, where rational ignorance and a desire to maintain every form of control and conceivable revenue stream rule. Regardless of “open” policy, or lack thereof, any particular dataset might be worthwhile, or not. But this is the most minor of my annoyances. It is even counterproductive to consider, most of the time — due to the expense of overcoming rational ignorance about “open” policy, and of evaluating any particular dataset, it probably makes a lot of sense to bundle “open data” and agitate for as much data to be made available under as good of practices as possible, and manage expectations when necessary.
  • To minimize the need to make expensive evaluations and compromises, open data needs to be cheap, preferably a side-effect of business as usual. Cheapness requires automation requires software requires open source software, otherwise “open data” institutions are themselves not transparent, are hostage to “enterprise software” companies, and are severely constrained in their ability to help each other, and to be helped by their publics. I don’t think an agitation approach is optimal (I recently attended an OpenOakland meeting, and one of the leaders said something like “we don’t hate proprietary software, but we do love open source”, which seems reasonable) but I am annoyed nevertheless by the lack of priority and acknowledgement given to software by “open data” (and even moreso, open access/content/education/etc) folk in general, strategic discussions (but, in action the Open Knowledge Foundation is better, having hatched valuable open source projects needed for open data). Computation rules all!
  • A “data marketplace” should not be the first suggestion, or even metaphor, for how to realize value from open data — especially not in the offices of the Wikimedia Foundation. Instead, mass collaboration.
  • Open data is neither necessary nor sufficient for better governance. Human institutions (inclusive of “public”, “private”, or any other categorization you like) have been well governed and atrociously governed throughout recorded history. Open data is just another mechanism that in some cases might make a bit of a difference. Another tool. But speaking of managing expectations, one should expect and demand good governance, or at least less atrocity, from our institutions, completely independent of open data!

“Nuance” is a vague joke in lieu of a useful title.

Diocese of Springfield, Illinois ©ensors criticism of its Bishop Paprocki

Sunday, October 7th, 2012

I recognize the rhetorical value of pointing out that copyright can be used for unambiguous censorship but I try to avoid doing so myself: “can be used for” downplays “is”. But the following is too good to let pass.

Bishop Paprocki: Voting Dem...This video is no longer available due to a copyright claim by Diocese of Springfield in Illinois.

Paprocki made a video sermon in which he says that voting Democrat puts one’s soul at risk, while disclaiming telling anyone how to vote.

Brian Tashman posted a criticism of Paprocki’s video, including (I surmise [Update: I was probably wrong; looking at the post again, I’m changing my guess to verbatim excerpt]) a video of himself criticizing Paprocki’s statements, including relevant excerpts of Paprocki’s video. I found Tashman’s post and video via a post titled This Week in God, where I noticed the embedded YouTube video frame said:

Bishop Paprocki: Voting Dem…This video is no longer available due to a copyright claim by Diocese of Springfield in Illinois.

I’m going to guess that Tashman’s use of the Paprocki video sermon was very clearly fair use. But even if the entire video had been included verbatim, it’d be a zero-diff parody. If you want to watch that, the original is linked above, and excerpted and uncut (but grainy, with an additional watermark) versions posted by Paprocki fans remain on YouTube.

I don’t see how Paprocki’s statements could be electioneering, as nobody believes in eternal salvation or damnation, right? In case I’m wrong, some are using the opportunity to call for revoking the Diocese of Springfield’s tax-exempt status.

(I grew up in Springfield, Illinois and heard they were getting a curious Catholic bishop last year, one who promotes exorcisms and says that sex abuse lawsuits are the work of the devil — not sex abuse, but lawsuits intended to redress the abuse. That’s Paprocki. But I’m not poking fun at Springfield. Salvatore Cordileone was recently promoted from Oakland bishop to San Francisco archbishop, shortly after demonstrating that the blood of Christ does intoxicate and can result in a DUI. Furthermore, I empathize with Paprocki. If I believed abortion were mass murder and homosexuality an abomination, I would feel compelled to risk mere tax benefits in order to tell people to vote against candidates who I perceived as being for murder and abomination. Indeed, I must tell you to not vote for Romney or Obama, as they both favor mass murder and abomination performed by the U.S. security state: murder, murder, murder, torture, and mass incarceration. But I’m rooting for Obama, as I suspect he favors a little less torture.)