
Social mobilization for the Internet post-epochals grew up with

Thursday, November 14th, 2013

Puneet Kishor has organized a book talk tomorrow (2013-11-15) evening in San Francisco by Edward Lee, author of The Fight for the Future: How People Defeated Hollywood and Saved the Internet–For Now (pdf).

I can’t attend, so I watched a recording of a recent talk by Lee and skimmed the book.

The book gives a narrative of the SOPA/PIPA and ACTA protests, nicely complementing Social Mobilization and the Networked Public Sphere: Mapping the SOPA-PIPA Debate, which does what the title says by analyzing relevant posts and links among them.

Lee in the talk and book, and the authors of the mapping report, paint a picture of a networked, distributed, and dynamic set of activists and organizations, culminating in a day of website blackouts and millions of people contacting legislators, and street protests in the case of ACTA.

The mapping report puts the protests and online activity leading up to them in the context of debate over whether the net breeds conversations that are inane and silo’d, or substantive and boundary-crossing: a data point for the latter. What does this portend for social mobilization and politics in the future? Unknown: (1) state or corporate interests could figure out how to leverage social mobilization as effectively as or more effectively than public interest actors (vague categories, yes), (2) the medium itself (which a few generations have now grown up with, if we allow for “growing up” to extend beyond high school) being perceived as at risk may have made these protests uniquely well positioned to mobilize via the medium, or (3) this kind of social mobilization could tilt power in a significant and long-term way.

Lots of people seem to be invested in a version of (3). They may be right, but the immediate outcome makes me sad: the perceived cutting edge of activism amounts to repeated communications optimization, i.e., spam science. Must be the civil society version of “The best minds of my generation are thinking about how to make people click ads. That sucks.” This seems eminently gameable toward (1), in addition to being ugly. We may be lucky if (2) is most true.

On the future of “internet freedoms” and social mobilization, Lee doesn’t really speculate. In the talk Q&A, lack of mass protest concerning mass surveillance is noted. The book’s closing words:

“We tried not to celebrate too much because it was just a battle. We won a battle, not the war. We’re still fighting other free trade agreements and intellectual property enforcement that affect individual rights.”

In a way, the fight for digital rights had only just begun.

Of course my standard complaint about this fight, which is decades old at least, is that it must not consist merely of a series of rearguard battles, but must also alter the ecosystem.

Economics and the Commons Conference [knowledge stream] report

Wednesday, October 30th, 2013

Economics and the Common(s): From Seed Form to Core Paradigm. A report on an international conference on the future of the commons (pdf) by David Bollier. Section on the knowledge stream (which I coordinated; pre-conference post) copied below, followed by an addendum with thanks and vague promises. First, video of the stream keynote (slides) by Carolina Botero (introduced by me; archive.org copy).

III. “Treating Knowledge, Culture and Science as Commons”

Science, and recently, free software, are paradigmatic knowledge commons; copyright and patent paradigmatic enclosures. But our vision may be constrained by the power of paradigmatic examples. Re-conceptualization may help us understand what might be achieved by moving most provisioning of knowledge to the commons; help us critically evaluate our commoning; and help us understand that all commons are knowledge commons. Let us consider, what if:

  • Copyright and patent are not the first knowledge enclosures, but only “modern” enforcement of inequalities in what may be known and communicated?
  • Copyright and patent reform and licensing are merely small parts of a universe of knowledge commoning, including transparency, privacy, collaboration, all of science and culture and social knowledge?
  • Our strategy puts commons values first, and views narrow incentives with skepticism?
  • We articulate the value of knowledge commons – qualitative, quantitative, ethical, practical, other – such that knowledge commons can be embraced and challenged in mainstream discourse?

These were the general questions that the Knowledge, Culture and Science Stream addressed.

Knowledge Stream Keynote Summary

Carolina Botero Cabrera, a free culture activist, consultant and lawyer from Colombia, delivered a plenary keynote for the Knowledge Stream entitled, “What If Fear Changes Sides?” As an author and lecturer on free access, free culture and authors’ rights, Botero focused on the role of information and knowledge in creating unequal power relationships, and how knowledge and cultural commons can rectify such problems.

“If we assume that information is power and acknowledge the power of knowledge, we can start by saying that controlling information and knowledge means power. Why does this matter?” she asked. “Because the control of information and knowledge can change sides. The power relationship can be changed.”

One of the primary motives of contemporary enclosures of information and knowledge, said Botero, is to instill fear in people – fear of violating copyright law, fear of the penalties for doing so. This inhibits natural tendencies to share and re-use information. So the challenge facing us is to imagine if fear could change sides. Can we imagine a switch in power relationships over the control of knowledge – how we produce, distribute and use knowledge? Botero said we should focus on the question: “How can we switch the tendency of knowledge regulation away from enclosure, so that commons can become the rule and not the exception?”

“There are still many ways to produce things, to gain knowledge,” said Botero, who noted that those who use the word “commons” [in the context of knowledge production] are lucky because it helps name these non-market forms of sharing knowledge. “In Colombia, we don’t even have that word,” she said.

To illustrate how customary knowledge has been enclosed in Colombia, Botero told the story of parteras, midwives, who have been shunted aside by doctors, mostly men, who then asserted control over women’s bodies and childbirth, and marginalized the parteras and their rich knowledge of childbirth. This knowledge is especially important to those communities in remote areas of Colombia that do not have access to doctors. There is currently a huge movement of parteras in Colombia who are fighting for the recognition of their knowledge and for the legal right to act as midwives.

Botero also told how copyright laws have made it illegal to reproduce sheet music for songs written in 18th and 19th century Colombia. In those times, people simply shared the music among each other; there was no market for it. But with the rise of the music industry in the 20th century, especially in the North, it is either impossible or unaffordable to get this sheet music because most of it is copyrighted. So most written music in Colombia consists of illegally photocopied versions. Market logic has criminalized the music that once flowed naturally and freely in Colombian culture. Botero noted that this has increased inequality and diminished public culture.

She showed a global map illustrating which nations received royalties and fees from copyrights and patents in 2002; the United States received more than half of all global revenues, while Latin America, Africa, India and other countries of the South received virtually nothing. These are the “power relationships” that Botero was pointing to.

Botero warned, “We have trouble imagining how to provision and govern resources, even knowledge, without exclusivity and control.” Part of the problem is the difficulty of measuring commons values. Economists are not interested, she said, which makes it difficult to go to politicians and persuade them why libraries matter.

Another barrier is our reliance on individual incentives as a core value in the system for regulating knowledge, Botero said. “Legal systems of ‘intellectual property’ place individual financial incentives at the center of knowledge regulation, which marginalizes commons values.” Our challenge is to find ways to switch from market logics by showing that there are other logics.

One reason that it is difficult to displace market logics is that we are reluctant or unable to “introduce the commons discourse from the front door instead of through the back door,” said Botero. She confessed that she herself has this problem because most public debate on this topic “is based on the premise that knowledge requires enclosure.” It is difficult to displace this premise by talking about the commons. But it is becoming increasingly necessary to do so as new policy regimes, such as the Trans-Pacific Partnership (TPP) agreement, seek to intensify enclosures. The TPP, for example, seeks to raise minimum levels of copyright restriction, extend the terms of copyrights, and increase the prison terms for copyright violations.

One way to reframe debate, suggested Botero, is to see the commons “not as the absence of exclusivity, but the presence of non-exclusivity. This is a slight but important difference,” she said, “that helps us see the plenitude of non-exclusivity” – an idea developed by Séverine Dusollier, professor and director of the Revue Droit des Technologies de l’Information (RDTI, France). This shift “helps us to shift the discussion from the problems with the individual property and market-driven perspective, to a framework and society that – as a norm – wants its institutions to be generative of sharing, cooperation and equality.”

Ultimately, what is needed are more “efficient and effective ways to protect the ethic and practice of sharing,” or as she put it, “better commoning.” Reforming “intellectual property” is only one small part of the universe of knowledge commoning, Botero stressed. It also includes movements for “transparency, privacy, collaboration, and potentially all of science and culture.”

“When and how did we accept that the autonomy of all is subservient to control of knowledge by the few?” asked Botero. “Most important, can we stop this? Can we change it? Is the current tragedy our lack of knowledge of the commons?” Rediscovering the commons is an important challenge to be faced “if fear is going to change sides.”

An Account of the Knowledge, Culture and Science Stream’s Deliberations

There were no presentations in the Knowledge Stream breakout sessions, but rather a series of brief provocations. These were intended to spur a lively discussion and to go beyond the usual debates heard at free and open software/free culture/open science conferences. A primary goal of the breakout discussions was to consider what it means to regard knowledge as a commons, rather than as a “carve-out” exception from a private property regime. The group was also asked to consider how shared knowledge is crucial to all commoning activity. Notes from the Knowledge Stream breakout sessions were compiled through a participatory titanpad, from which this account is adapted.

The Knowledge Stream focused on two overarching themes, each taking advantage of the unique context of the conference:

  1. Why should commoners of all fields care about knowledge commons?
  2. If we consider knowledge first as commons, can we be more visionary, more inclusive, more effective in commoning software, science, culture, seeds … and much more?

The idea of the breakout session was to contextualize knowledge as a commons, first and foremost: knowledge as a subset of the larger paradigm of commons and commoning, as something far more than domain-specific categories such as software, scientific publication and educational materials.

An overarching premise of the Knowledge Stream was the point made by Silke Helfrich in her keynote, that all commons are knowledge commons and all commons are material commons. Seeds saved in the Svalbard Seed Vault are of no use if we forget how to cultivate them, for example, and various digital commons are ultimately grounded in the material reality of computers, electricity infrastructures and the food that computer users need to eat.

There is a “knowledge commons” at the center of each commons. This means that interest in a “knowledge commons” isn’t confined to those people who only care about software, scientific publication, and so on. It also means that we should refrain from classifying commons into categories such as “natural resources” and “digital,” and begin to make the process of commoning itself the focal point.

Of course, one must immediately acknowledge that digital resources do differ in fundamental ways from finite natural resources, and therefore the commons management strategies will differ. Knowledge commons can make cheap or virtually free copies of intangible information and creative works, and this knowledge production is often distributed at very small scales. For cultural commons, noted Philippe Aigrain, a French analyst of knowledge governance and CEO of Sopinspace, a maker of free software for collaboration and participatory democracy, “the key challenge is that average attention becomes scarcer in a world of abundant production.” This means that more attention must be paid to “mediating functions” – curating – and “revising our cultural expectations about ‘audiences’.”

It is helpful to see the historical roots of Internet-enabled knowledge commons, said Hilary Wainwright, the editor behind the UK political magazine Red Pepper and a researcher at the Transnational Institute. The Internet escalated the practice of sharing knowledge that began with the feminist movement’s recognition of a “plurality of sources.” It also facilitated the socialization of knowledge as a kind of collective action.

That these roots are not widely appreciated points to the limited vision of many knowledge commons, which tend to rely on a “deeply individualistic ethical ontology,” said Talha Syed, a professor of law at the University of California, Berkeley. This worldview usually leads commoners to focus on coercion – enclosures of knowledge commons – as the problem, he said. But “markets are problematic even if there is no monopoly,” he noted, because “we need to express both threats and positive aspirations in a substantive way. Freedom is more than people not coercing us.”

Shun-Ling Chen, a Taiwanese professor of law at the University of Arizona, noted that even free, mass-collaboration projects such as Wikipedia tend to fall back on western, individualistic conceptions of authorship and authority. This obscures the significance of traditional knowledge and history from the perspective of indigenous peoples, where less knowledge is recorded by “reliable sources.”

As the Stream recorded in its notes, knowledge commons are not just about individual freedoms, but about “marginalized people and social justice.” “The case for knowledge commons as necessary for social justice is an undeveloped theme,” the group concluded. But commons of traditional knowledge may require different sorts of legal strategies than those that are used to protect the collective knowledge embodied in free software or open access journals. The latter are both based on copyright law and its premises of individual rights, whereas traditional knowledge is not recognized as the sum of individual creations, but as a collective inheritance and resource.

This discussion raised the question whether provisioning knowledge through commons can produce different sorts of “products” than those produced by corporate enclosures, or whether they will simply create similar products with less inequality. Big budget movies and pharmaceuticals are often posited as impossibilities for commons provision (wrongly, by the way). But should these industries be seen as the “commanding heights” of culture and medicine, or would a commons-based society create different commanding heights?

One hint at an answer comes from seeing informality as a kind of knowledge commons. “Constructed commons” that rely upon copyright licenses (the GPL for software, Creative Commons licenses for other content) and upon policy reforms, are generally seen as the most significant, reputable knowledge commons. But just as many medieval commons relied upon informal community cooperation such as “beating the bounds” to defend themselves, so many contemporary knowledge commons are powerful because they are based on informal social practice and even illegality.

Alan Toner of Ireland noted that commoners who resist enclosures often “start from a position of illegality” (a point made by Ugo Mattei in his keynote talk). It may be better to frankly acknowledge this reality, he said. After all, remix culture would be impossible without civil disobedience to various copyright laws that prohibit copying, sharing and re-use – even if free culture people sometimes have a problem with such disrespectful or illegal resistance. “Piracy” is often a precursor to new social standards and even new legal rules. “What is legal is contingent,” said Toner, because practices we spread now set traditions and norms for the future. We therefore must be conscious about the traditions we are creating. “The law is gray, so we must push new practices and organizations need to take greater risks,” eschewing the impulse to be “respectable” in order to become a “guiding star.”

Felix Stalder, a professor of digital culture at Zurich University of the Arts, agreed that civil disobedience and piracy are often precisely what is needed to create a “new normal,” which is what existing law is explicitly designed to prevent. “Piracy is building a de facto commons,” he added, “even if it is unaware of this fact. It is a laboratory of the new that can enrich our understanding of the commons.”

One way to secure the commons for the future, said Philippe Aigrain of Sopinspace, is to look at the specific challenges facing the commons rather than idealizing them or over-relying on existing precedents. As the Stream discussion notes concluded, “Given a new knowledge commons problem X, someone will state that we need a ‘copyleft for X.’ But is copyleft really effective at promoting and protecting the commons of software? What if we were to re-conceptualize copyleft as a prototype for effective, pro-commons regulation, rather than a hack on enclosure?”

Mike Linksvayer, the former chief technology officer of Creative Commons and the coordinator of the Knowledge Stream, noted that copyleft should be considered as “one way to force sharing of information, i.e., of ensuring that knowledge is in the commons. But there may be more effective and more appropriate regulatory mechanisms that could be used and demanded to protect the commons.”

One provocative speculation was that there is a greater threat to the commons than enclosure – and that is obscurity. Perhaps new forms of promotion are needed to protect the commons from irrelevance. It may also be that excluding knowledge that doesn’t really contribute to a commons is a good way to protect a commons. For example, projects like Wikipedia and Debian mandate that only free knowledge and software be used within their spaces.


Addendum

Thanks to everyone who participated in the knowledge stream. All who prepared and delivered deep and critical provocations in the very brief time allotted:
Bodó Balázs
Shun-Ling Chen
Rick Falkvinge
Marco Fioretti
Charlotte Hess
Gaëlle Krikorian
Glyn Moody
Mayo Fuster Morrell
Prabir Purkayastha
Felix Stalder
Talha Syed
Wouter Tebbens
Alan Toner
Chris Watkins

Also thanks to Mayo Fuster Morrell and Petros for helping coordinate during the stream, and though neither could attend, Tal Niv and Leonhard Dobusch for helpful conversations about the stream and its goals. I enjoyed working with and learned much from the other stream coordinators: Saki Bailey (nature), Heike Löschmann (labor & care), Ludwig Schuster (money), and especially Miguel Said Vieira (infrastructure; early collaboration kept both infrastructure and knowledge streams relatively focused); and stream keynote speaker Carolina Botero; and conference organizers/Commons Strategy Group members: David Bollier, Michel Bauwens, and Silke Helfrich (watch their post-conference interview).

See the conference wiki for much more documentation on each of the streams, the overall conference, and related resources.

If a much more academic and apolitical approach is of interest, note the International Association for the Study of the Commons held its 2013 conference about 10 days after ECC. I believe there was not much overlap among attendees, one exception being Charlotte Hess (who also chaired a session on Governance of the Knowledge and Information Commons at the IASC conference).

ECC only strengthened my feeling (but, of course I designed the knowledge stream to confirm my biases…) that we need a much more bold, deep, inclusive (domains and methods of commoning, including informality, and populations), critical (including self-critical; a theme broached by several of the people thanked above), and competitive (product: displacing enclosure; policy: putting equality & freedom first) knowledge commons movement, or vanguard of those movements. Or as Carolina Botero put it in the stream keynote: bring the commons in through the front door. I promise to contribute to this project.

ECC also made me reflect much more on commons and commoning as a “core paradigm” for understanding and participating in the arrangements studied by social scientists. My thoughts are half baked at best, but that will not stop me from making pronouncements, time willing.

5 fantasy Internet Archive announcements

Thursday, October 24th, 2013

Speaking of public benefit spaces on the internet, tonight the Internet Archive is having its annual celebration and announcements event. It’s a top contender for the long-term most important site on the internet. The argument for it might begin with it having many copies at many points in time of many sites, mostly accessible to the public (Google, the NSA and others must have vast dark archives), but would not end there.

I think the Internet Archive is awesome. Brewster Kahle, its founder, is too. It is clear to me that he’s the most daring and innovative founder or leader in the bay area/non-profit/open/internet field and adjacencies. And he calls himself Digital Librarian. Hear, hear!

But, the Internet Archive could be even more awesome. Here’s what I humbly wish they would announce tonight:

  • A project to release all of the code that runs their websites and all other processes, under free/open source software licenses, and do their work in public repositories, issue trackers, etc. Such crucial infrastructure ought be open to public audit, and welcoming to public contribution. Obviously much of the code is ancient, crufty, and likely has security issues. No reason for embarrassment or obscurity. The code supporting the recording of this era of human communication is itself a dark archive. Danger! Fix it.
  • WikiNurture media collections. I believe media item metadata is now unversioned. It should be versioned. And the public should be able to enhance and correct metadata. Currently media in the Internet Archive is much less useful than it could be due to poor metadata (eg I expect music I download from the archive to not have good artist/album/title tags, making it a huge pain to integrate into my listening habits, including to tell the world and make popular) and very limited relations among media items. (A sketch of reading current item metadata follows this list.)
  • Aggressively support new free media formats, specifically Opus and WebM right now. This is an important issue for the free and open world, and requires collective action. Internet Archive is in a key position, and should exploit its strong position.
  • On top of existing infrastructure and much richer data, above, build Netflix-level experiences around the highest quality media in the archive, and perhaps all media with high quality metadata. This could be left to third parties, but centralization is powerful.
  • Finally, and perhaps the deadly combination of most contentious and least exciting: stop paying DRM vendors and publishers. Old posts on this: 1, 2, 3. Internet Archive is not in the position Mozilla apparently think they are, of tolerating DRM out of fear of losing relevance. Physical libraries may think they are in such a position, but only to the extent they think of themselves as book vendors, and lack vision. Please, show leadership to the digital libraries we want in the future, not grotesque compromises, Digital Librarian!
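Of the wishes above, the metadata one is easiest to make concrete. Below is a minimal sketch, assuming the Archive’s public read-only metadata API at archive.org/metadata (the read side exists today; versioned, publicly editable writes are the wish), with a hypothetical item identifier:

```typescript
// Minimal sketch: read current item metadata from the Internet Archive's
// public read-only metadata API. "some-music-item" is a hypothetical
// identifier, not a real item.
async function getItemMetadata(identifier: string) {
  const res = await fetch(`https://archive.org/metadata/${identifier}`);
  if (!res.ok) throw new Error(`metadata fetch failed: ${res.status}`);
  return res.json(); // { metadata: { title, creator, ... }, files: [ ... ] }
}

getItemMetadata("some-music-item").then((item) => {
  // Artist/album/title-style fields would live here, if present at all.
  console.log(item.metadata?.title, item.metadata?.creator);
});
```

Versioning would mean every write to that metadata object lands in an auditable, revertable history, wiki-style, rather than overwriting in place.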

These enhancements would elevate Internet Archive to its proper status, and mean nobody could ever again justifiably say that ‘Aside from Wikipedia, there is no large, popular space being carved out for the public good.’

Addendum: The actual announcements were great, and mostly hinted at on the event announcement post. The Wayback Machine now can instantly archive any URL (“Save Page Now”). I expect to use that all the time, replacing webcitation.org. This post pre-addendum, including many spelling errors (written on the 38 Geary…). Javascript MESS and the software archive are tons of fun: “Imagine every computer that ever existed, in your browser.” No talk of DRM, but also no talk of books, unless I missed something.
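For the curious, a minimal sketch of driving “Save Page Now” from script, assuming the simple unauthenticated save endpoint (response details are an assumption worth verifying against the live service):

```typescript
// Sketch: trigger "Save Page Now" by requesting
// https://web.archive.org/save/<url>.
async function savePageNow(target: string): Promise<void> {
  const res = await fetch(`https://web.archive.org/save/${target}`);
  console.log("save request status:", res.status);
  // The snapshot path has conventionally appeared in the Content-Location
  // header, e.g. /web/<timestamp>/<url> (assumption; check the response).
  console.log("snapshot:", res.headers.get("content-location"));
}

savePageNow("https://example.com/");
```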

Addendum 20131110: “What happened to the Library of Alexandria?” as a lead in to explaining why the Internet Archive has multiple data centers will take on new meaning from a few days ago, when there was a fire at its scanning center (no digital records were lost). Donate.

What’s *really* wrong with the free and open internet — and how we could win it

Thursday, October 24th, 2013

A few days ago Sue Gardner, ED of the Wikimedia Foundation, posted What’s *really* wrong with nonprofits — and how we can fix it. Judging by seeing the link sent around, it has been read to confirm various conflicting biases that different people in the SF bay area/internet/nonprofit space and adjacent already had. May I? Excerpt-based summary:

A major structural flaw of many nonprofits is that their revenue is decoupled from mission work, which pushes them to focus on providing a positive donor experience often at the expense of doing their core work.

WMF makes about 95% of its money from the many-small-donors model
…
I spend practically zero time fundraising. We at the WMF get to focus on our core work of supporting and developing Wikipedia, and when donors talk with us we want to hear what they say, because they are Wikipedia readers
…
I think the usefulness of the many-small-donors model, ultimately, will extend far beyond the small number of nonprofits currently funded by it.
…
[Because Internet.]
…
For organizations that can cover their costs with the many-small-donors model I believe there’s the potential to heal the disconnect between fundraising and core mission work, in a way that supports nonprofits being, overall, much more effective.

I agree concerning extended potential. I thought (here comes confirmation of biases) that Creative Commons should make growing its small donor base its number one fundraising effort, with the goal of having small donors provide the majority of funding as soon as possible — realistically, after several years of hard work on that model. While nowhere close to that goal, I recall that about 2006-2009 individual giving grew rapidly, in numbers and diversity (started out almost exclusively US-based), even though it was never the number one fundraising priority. I don’t think many, perhaps zero, people other than me believed individual giving could become CC’s main source of support. Wikimedia’s success in that, already very evident, and its unique circumstance, was almost taken as proof that CC couldn’t. I thought instead Wikimedia’s methods should be taken as inspiration. The “model” had already been proven by nearby organizations without Wikimedia’s eyeballs; e.g., the Free Software Foundation.

An organization that wants to rely on small donors will have to work insanely hard at it. And, if it had been lucky enough to be in a network affording it access to large foundation grants, it needs to be prepared to shrink if the foundations tire of the organization before individual giving supplants them, and it may never fully do so. (But foundations might tire of the organization anyway, resulting in collapse without individual donors.) This should not be feared. If an organization has a clear vision and operating mission, increased focus on core work by a leaner team, less distracted by fundraising, ought be more effective than a larger, distracted team.

But most organizations don’t have a clear vision and operating mission (I don’t mean words found in vision and mission statements; rather the shared and deep knowing-what-we’re-trying-to-do-and-how that allows all to work effectively, from governance to program delivery). This makes any coherent strategic change more difficult, including transitioning to small donor support. It also gives me pause concerning some of the bits of Gardner’s post that I didn’t excerpt above. For most organizations I’d bet that real implementation of nonprofit “best practices” regarding compliance, governance, management, reporting, etc, though boring and conservative, would be a big step up. Even trying to increase the much-maligned program/(admin+fundraising) ratio is probably still a good general rule. I’d like to hear better ones. Perhaps near realtime reporting of much more data than can be gleaned from the likes of a Form 990 will help “big data scientists” find better rules.

It also has to be said that online small donor fundraising can be just as distracting and warping (causing organizations to focus on appearing appealing to donors) as other models. We (collectively) have a lot of work to do on practices, institutions, and intermediaries that will make the extended potential of small donor support possible (read Gardner’s post for the part I lazily summarized as [Because Internet.]) in order for the outcome to be good. What passes as savvy advice on such fundraising (usually centered around “social media”) has for years been appalling and unrealistic. And crowdfunding has thus far been disappointing in some ways as a method of coordinating public benefit.

About 7 months ago Gardner announced she would be stepping down as ED after finding a replacement (still in progress), because:

I’ve always aimed to make the biggest contribution I can to the general public good. Today, this is pulling me towards a new and different role, one very much aligned with Wikimedia values and informed by my experiences here, and with the purpose of amplifying the voices of people advocating for the free and open internet. I don’t know exactly what this will look like — I might write a book, or start a non-profit, or work in partnership with something that already exists.

My immediate reaction to this was exactly what Виктория wrote in reply to the announcement:

I cannot help but wonder what other position can be better for fighting consumerisation, walling-in and freedom curtailment of the Internet than the position of executive director of the Wikimedia Foundation.

I could take this as confirming another of my beliefs: that the Wikimedia movement (and other constructive free/open movements and organizations) does not realize its potential political potency — for changing the policy narrative and environment, not only taking rearguard actions against the likes of SOPA. Of course then, the Wikimedia ED wouldn’t think Wikimedia the most effective place from which to work for a free and open internet. But, my beliefs are not widely held, and likely incorrect. So I was and am mostly intrigued, and eager to see what Gardner does next.

After reading the What’s *really* wrong with nonprofits post above, I noticed that 4 months ago Gardner had posted The war for the free and open internet — and how we are losing it, which I eagerly read:

[non-profit] Wikipedia is pretty much alone. It’s NOT the general rule: it’s the exception that proves the rule.
…
The internet is evolving into a private-sector space that is primarily accountable to corporate shareholders rather than citizens. It’s constantly trying to sell you stuff. It does whatever it wants with your personal information. And as it begins to be regulated or to regulate itself, it often happens in a clumsy and harmful way, hurting the internet’s ability to function for the benefit of the public. That for example was the story of SOPA.
…
[Stories of how Wikipedia can fight censorship because it is both non-profit and very popular]
…
Aside from Wikipedia, there is no large, popular space being carved out for the public good. There are a billion tiny experiments, some of them great. But we should be honest: we are not gaining ground.
…
The internet needs serious help if it is to remain free and open, a powerful contributor to the public good.

Final exercise in confirming my biases (this post): yes, what the internet needs is more spaces carved out for the public good — more Wikipedias — categories other than encyclopedia in which a commons-based product out-competes proprietary incumbents, increasing equality and freedom powerfully in both the short and long (capitalization aligned with rent seeking demolished) term. Wikipedia is unique in being wildly successful and first and foremost a website, but not alone (free software collectively must be many times more liberating by any metric, some of it very high profile, eg Firefox; Open Access is making tremendous progress, and I believe PLOS may have one of the strongest claims to operating not just to make something free, but to compete directly with and eventually displace incumbents).

A free and open internet, and society, needs intense competition from commons-based initiatives in many more categories, including those considered the commanding heights of culture and commerce, eg premium video, advertising, social networking, and many others. Competition does not mean just building stuff, but making it culturally relevant, meaning making it massively popular (which Wikipedia lucked into, being the world’s greatest keyword search goldmine). Nor does it necessarily mean recapitulating proprietary products exactly, eg some product expectations might move to ones more favorable to mass collaboration.

Perhaps Gardner’s next venture will aim to carve out a new, popular space for the public good on the internet. Perhaps it will be to incubate other projects with exactly that aim (there are many experiments, as her post notes, but not many with “take over/liberate the world” vision or resources; meanwhile there is a massive ecosystem churning out and funding attempts to take over the world with new proprietary products). Perhaps it will be to build something which helps non-profits leverage the extended potential of the small donor model, in a way that maximizes public good. Most likely, something not designed to confirm my biases. ☺ But, many others should do just that!

Wikipedia’s economic values

Tuesday, October 8th, 2013

Jonathan Band and Jonathan Gerafi have written a survey of papers estimating Wikipedia’s Economic Value (pdf), where Wikipedia is all Wikipedia language editions, about 22 million articles total. I extracted the ranges of estimates of various types in a summary.

Valuation if Wikipedia were for-profit:

  • $10b-$30b based on valuation of sites with similar visitor and in-link popularity
  • $21.1b-$340b based on revenue if visitors had to pay, akin to Britannica
  • $8.8b-$86b based on potential revenue if Wikipedia ran ads

One-time replacement cost:

  • $6.6b-$10.25b based on freelance writer rates

Ongoing maintenance cost:

  • $630m/year based on hiring writers

Annual consumer surplus:

  • $16.9b-$80b based on potential revenue if visitors had to pay
  • $54b-$720b based on library estimates of value of answering reference inquiries

Conclusion: “Wikipedia demonstrates that highly valuable content can be created by non-professionals not incentivized by the copyright system.”

Though obvious and underwhelming, it’s great to see that conclusion stated. Wikipedia and similar are not merely treasures threatened by even more bad policy, but at the very least evidence for other policy, and shapers of the policy conversation and environment.


Much about the ranges above, the estimates they include, and their pertinence to the “economic value of Wikipedia”, is highly speculative. Even more speculative, difficult, and interesting would be estimates of the value due to Wikipedia being a commons. The winning online encyclopedia probably would’ve been a very popular site, even if it had been proprietary, rather than Wikipedia or other somewhat open contenders. Consider that Encarta, not Wikipedia, mostly killed Britannica, and that people are very willing to contribute freely to proprietary products.

A broader (than just Wikipedia) take on this harder question was at the core of a research program on the welfare impact of Creative Commons that was in very early stages, and sadly ended coincident with lots of people leaving (including me).

How do we characterize the value (take your pick of senses of “value”) of knowledge systems that promote freedom and equality relative to those that promote enclosure? I hope many pick up that challenge, and activists use the results offensively (pdf, slideshare).

Speedy Firefox video

Monday, July 1st, 2013

Firefox 22, as of last week the general release which the vast majority of Firefox users will auto-upgrade to, includes the “change HTML5 audio/video playback rate” feature that I submitted a feature request for a few months ago. Yay!

It’s a fairly obscure feature (right-click on HTML5 audio/video, if site hasn’t evil-y overwritten default user actions) but hopefully knowledge of it will spread and millions of users will save a huge amount of time listening to lectures and the like, and also come to expect this degree of control over their experience of media on the web.
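The context menu item presumably drives the same standard HTMLMediaElement playbackRate property that pages and user scripts can set directly; a minimal sketch:

```typescript
// Minimal sketch: speed up the first media element on a page via the
// standard HTMLMediaElement.playbackRate property.
const media = document.querySelector("video, audio") as HTMLMediaElement | null;
if (media) {
  media.playbackRate = 1.5; // 1.0 is normal speed, 2.0 is double
}
```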

The next feature request that I really want Firefox developers to address rapidly is Implement VP9 video decoder in Firefox. The next generation of the WebM royalty-free video format uses the VP9 and Opus video and audio codecs, each a large improvement over the currently used VP8 and Vorbis codecs. (For the possible next-next generation free/open video codec, see Daala.)
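Pages can already feature-detect codec support via the standard canPlayType API, which returns the spec’s noncommittal "", "maybe", or "probably"; a minimal sketch:

```typescript
// Sketch: probe WebM codec support via HTMLMediaElement.canPlayType.
const probe = document.createElement("video");
console.log("VP8/Vorbis:", probe.canPlayType('video/webm; codecs="vp8, vorbis"'));
console.log("VP9/Opus:", probe.canPlayType('video/webm; codecs="vp9, opus"'));
```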

To date the free/open world has fared very poorly in getting adoption of free/open formats (audio/video as well as document formats), even when they’re clearly technically superior to the encumbered competition (eg Vorbis vs MP3).

(Credit where due, the availability of competitive free/open formats and whatever adoption they’ve gained has probably had large unseen positive effects on consumer welfare by restraining the pricing power of patent monopolists. Similarly the “Linux desktop” has probably invisibly but very significantly increased consumer welfare. I’d love to see an academic analysis.)

If free/open formats are important, all concerned ought to take a close look at why we have failed thus far, and how we can increase our chances going forward. Yes, adoption is hard, network effects of existing formats are very powerful, and commercial relationships needed to gain massive default adoption are hard to break into. The last is one reason we need more billion dollar open source organizations.

But I think we’ve done a poor job of coordinating the free/open/nearby entities that are already large (in terms of presence, if not dollars) to push for adoption of free/open formats, especially at the critical juncture of the release of a new format, during which time there’s some excitement, and also a period of technical superiority (for audio/video anyway, each generation leapfrogs previous capabilities).

The obvious entities I have in mind in addition to Mozilla are Wikimedia sites and the Internet Archive. It took over 2 years for Wikimedia Commons to support WebM uploads (maybe the first) and though the Internet Archive accepts WebM uploads, it still transcodes to the far older Theora format, and for audio doesn’t support Opus at all.

Granted none of these entities have supporting free/open formats as their top priority, and supporting a new format is work, and these are just the highest profile relevant entities in the free/open/nearby world. Can we overcome this collective action problem for the benefit of all?

Very tangentially related, I just noticed the first failure in my video hosting longevity experiment. Goblin.se seems to have moved to serving files from a CDN, without setting up redirects such that old embeds still work.

List of Wikimania 2013 Submissions of Interest

Saturday, May 4th, 2013

Unlikely I’ll attend Wikimania 2013 in Hong Kong (I did last year in DC). In lieu of marking myself as an interested attendee of proposed sessions, my list of 32 particularly interesting-to-me proposals follows. I chose by opening the proposal page for each of the 331 submissions that looked interesting at first glance (about 50), then weeding out some of those.

I suspect many of these proposals might be interesting reading for anyone generally curious about possible futures of Wikipedia and related, similar, and complementary projects, but not following any of these things closely.

Products that embody openness the most powerful way to shape the policy conversation

Wednesday, May 1st, 2013

Aza Raskin writing about Mozilla:

Developing products that embody openness is the most powerful way to shape the policy conversation. Back those products with hundreds of millions of users and you have a game-changing social movement.

I completely agree, at least when “product” and “policy” are construed broadly — both include, e.g., marketing and adoption/use/joining of products, communities, ethics, ideas, etc. Raskin’s phrasing also (understandably, as he’s working for Mozilla) emphasizes central organizations as the actor (which backs products with users, rather than users adopting the product, and participating in its development) more than I’d like, but that’s nuance.

This is why I complain about rearguard clicktivism against bad policy that totally fails to leverage the communication opportunity to also promote good policy and especially products that embody good policy, and even campaigns for good policy concepts that fail to also promote products which embody the promoted policy.

To summarize, there’s product competition and policy competition, and I think the former is hugely undersold as potently changing the latter. (There’s also beating-of-the-bounds, perhaps with filesharing and wikileaks as examples, which has product and policy competition aspects, but seems a distinct kind of action; which ought to be brought into closer conversation with the formal sector.)

The main point of Raskin’s post is that Mozilla is a second-mover, taking proven product segments and developing products for them which embody openness, and that it could do that in more segments, various web applications in particular. I look forward to more Mozilla services.

A lot of what Wikipedia and Public Library of Science have done very successfully could also be considered “second mover”, injecting freedom into existing categories — sometimes leading to exploding a category with something qualitatively and quantitatively huger.

I admit that the phrase I pulled from Raskin’s post merely confirms (and this by authority!) a strongly held bias of mine. How to test? Failing that, what are the best arguments against?

Why DRM in HTML5 and what to do about it

Tuesday, April 23rd, 2013

Kẏra writes Don’t let the myths fool you: the W3C’s plan for DRM in HTML5 is a betrayal to all Web users.

Agreed, but what to do about it?

In the short term, the solution is to convince W3C that moving forward will be an embarrassing disaster, nevermind what some of its for-profit members want. This has been accomplished before, in particular in 2001, when many wanted W3C to have a RAND (allowing so-called Reasonable And Non-Discriminatory fees to be required for implementing a standard) patent policy, but they were embarrassed into finally doing the right thing, mandating RF (Royalty Free) patent licensing by participants in W3C standards.

One small way to help convince the W3C is to follow Kẏra’s recommendation to sign the Free Software Foundation’s No DRM in HTML5 petition.

Long term, the only way the DRM threat is going to be put to rest is for free cultural works to become culturally relevant, if not dominant (the only unambiguous example of such as yet is Wikipedia exploding the category known as “encyclopedia”). One of Kẏra’s points is “The Web doesn’t need big media; big media needs the Web.” True, but individual web companies do fear big media and hope for an advantage over competitors by doing deals with big media, including deals selling out The Web writ large (that’s the “Why” in this post’s title).

To put it another way, agitation for “Hollyweb” will continue until Hollywood is no longer viewed as the peak of culture. I don’t mean just, and perhaps not even, “Hollywood movies”, but also the economic, ethical, social and other assumptions that lead us to demand delivery of more pyramids over protecting and promoting freedom and equality.

I don’t have a petition to recommend signing in order to help increase the relevance and dominance and hence unleash the liberation potential of knowledge commons. Every bit of using, recommending, building, advocating for as policy, and shifting the conversation toward intellectual freedom helps.

Waiting out DRM (and intellectual protectionism in general) is not a winning strategy. There is no deterministic path for other media to follow music away from DRM, and indeed there is a threat that a faux-standard as proposed will mean that DRM becomes the expectation and demand of/by record companies, again. In general bad policy abets bad policy and monopoly abets monopoly. The reverse of each is also true. If you aren’t helping make freedom real and real popular, you hate freedom!☻

Future of culture & IP & beating of books in San Jose, Thursday

Tuesday, November 13th, 2012

I’m looking forward to this “in conversation” event with artist Stephanie Syjuco. The ZERO1 Garage is a neat space, and Syjuco’s installation, FREE TEXTS: An Open Source Reading Room, is just right.

For background on my part of the conversation, perhaps read my bit on the future of copyright and my interview with Lewis Hyde, author of at least one of the treated FREE TEXTS (in the title of this post “beating of books” is a play on “beating of bounds”; see the interview, one of my favorite posts ever to the Creative Commons blog).

One of the things that makes FREE TEXTS just right is that “IP” makes for a cornucopia of irony (Irony Ponies for all?), and one of the specialty fruits therein is literature extolling the commons and free expression and problematizing copyright … subject to unmitigated copyright and expensive in time and/or money to access, let alone modify.

Even when a text is in-theory offered under a public license, thus mitigating copyright (but note, it is rare for any such mitigation to be offered), access to a digital copy is often frustrated, and access to a conveniently modified copy, almost unknown. The probability of these problems occurring reaches near certainty if a remotely traditional publisher is involved.

Two recent examples that I’m second-hand familiar with (I made small contributions). All chapters of Wealth of the Commons (Levellers Press, 2012) with the exception of one are released under the CC-BY-SA license. But only a paper version of the book is now available. I understand that digital copies (presumably for sale and gratis) will be available sometime next year. Some chapters are now available as HTML pages, including mine. The German version of the book (Transcript, 2012), published earlier this year with a slightly different selection of essays, is all CC-BY-SA and available in whole as a PDF, and some chapters as HTML pages, again including mine (but if one were to nitpick, the accompanying photo under CC-BY-NC-SA is incongruous).

The Social Media Reader (New York University Press, 2012) consists mostly of chapters under free licenses (CC-BY and CC-BY-SA) and a couple under CC-BY-NC-SA, with the collection under the last. Apparently it is selling well for such a book, but digital copies are only available with select university affiliation. Fortunately someone uploaded a PDF copy to the Internet Archive, as the licenses permit.

In the long run, these can be just annoyances and make-work, at least to the extent the books consist of material under free licenses. Free-as-in-freedom does not have to mean free-as-in-price. Even without any copyright mitigation, it’s common for digital books to be made available in various places, as FREE TEXTS highlights. Under free licenses, it becomes feasible for people to openly collaborate to make improved, modifiable, annotatable versions available in various formats. This is currently done for select books at Wikibooks (educational, neutral point of view, not original research) and Wikisource (historically significant). I don’t know of a community for this sort of work on other classes of books, but I’d be happy to hear of such, and may eventually have to start doing it if not. Obvious candidate platforms include Mediawiki, Booktype, and source-repository-per-book.

You can register for the event (gratis) in order to help determine seating and refreshments. I expect the conversation to be considerably more wide ranging than the above might imply!