
Audio/video player controls should facilitate changing playback rate

Saturday, March 9th, 2013

Listening to or viewing non-fiction/non-art (eg lectures, presentations) at realtime speed is tiresome. I’ve long used rbpitch (which offers more control than I need or want) or VLC’s built-in playback speed menu (which mildly annoys me with “Faster” and “Faster (fine)”; I’d prefer to see the exact rate), and am grateful that most videos on YouTube now feature a playback UI that allows playback at 1.5x or 2x speed. The UI I like best so far is Coursera’s, which very prominently facilitates switching to 1.5x or 2x speed, stepping up and down in 0.25x increments, and saving a per-course playback rate preference.

HTML5 audio and video unadorned with a customized UI (the latter is what I’m seeing at YouTube and Coursera) is not everywhere, but it’s becoming more common, and probably will continue to, as adding video or audio content to a page is now as easy as adding a non-moving image, at least if the default playback UI in browsers is featureful. I hope for this outcome, as hosting site customizations often obscure functionality, eg by taking over the context menu (could browsers provide a way for users to always obtain the default context menu on demand?).

Last month I submitted a feature request for Firefox to support changing playback speed in the default UI, and I’m really happy with the response. The feature is now available in nightly builds (which are non-scary; I’ve run nothing else for a long time; they auto-update approximately daily, include all the latest improvements, and in my experience are as stable as releases, which these days means very stable) and should be available in a general release in approximately 18-24 weeks. You can test the feature on the page the screenshot above is from; note it will work on some of the videos, but for others the host has hijacked the context menu. Or test it on something that really benefits from 2x speed (which is not at all ludicrous; it’s my normal speed for lectures and presentations that I’m paying close attention to).

Even better, the request was almost immediately triaged as a “[good first bug]” and assigned a mentor (Jared Wein) who provided some strong hints as to what would need to be done, so strong that I was motivated to set up a Firefox development environment (mostly well documented and easy; the only problem I had was figuring out which of the various test harnesses was the right one for my tests) and get an unpolished version of the feature working for myself. I stopped when darkowlzz indicated interest, and it was fun to watch darkowlzz, Jared, and a couple others interact over the next few weeks to develop a production-ready version of the feature. Thank you Jared and darkowlzz! (While looking for links for each, I noticed Jared posted about the new feature; check that out!)

Kudos also to Mozilla for having a solid easy-bug and mentoring process in place. I doubt I’ll ever contribute anything non-trivial, but the next time I get around to making a simple feature request, I’ll be much more likely to think about attempting a solution myself. It’s fairly common now for projects to at least tag easy bugs; OpenHatch aggregates many of those. I’m not sure how common mentored bugs are.

I also lucked out in that support for setting playback rate from javascript had recently been implemented in Firefox. Also see documentation for the javascript API for HTML5 media elements and what browser versions implement each.
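
For the curious, the relevant javascript API is tiny. A minimal sketch (the element id, the rates, and the step function are my own illustration; playbackRate and defaultPlaybackRate are the standard HTMLMediaElement properties):

    // Adjust the playback rate of an HTML5 media element.
    // The id "lecture" is hypothetical.
    var video = document.getElementById('lecture');

    video.playbackRate = 2.0;        // switch to 2x immediately
    video.defaultPlaybackRate = 1.5; // rate applied when media is (re)loaded

    // Step the rate up or down, eg by 0.25x as Coursera's UI allows:
    function adjustRate(delta) {
      video.playbackRate = Math.max(0.25, video.playbackRate + delta);
    }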

Back to playback rate, I’d really like to see anything that provides an interface to playing timed media to facilitate changing playback rate. Anything else is a huge waste of users’ time and attention. A user preference for playback rate (which might be as simple as always using the last rate, or as complicated as a user-specified calculation based on source and other metadata) would be a nice bonus.
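
The simple always-use-the-last-rate version could be a few lines in a hosting page. A sketch, assuming a video element is present and using a hypothetical localStorage key:

    // Persist the last-used playback rate across page loads.
    // The 'playbackRate' key name is illustrative.
    var video = document.querySelector('video');
    var saved = parseFloat(localStorage.getItem('playbackRate'));
    if (!isNaN(saved)) {
      video.playbackRate = saved;
    }
    // 'ratechange' fires whenever playbackRate changes.
    video.addEventListener('ratechange', function () {
      localStorage.setItem('playbackRate', String(video.playbackRate));
    });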

OA mandate, FLOSS contrast

Friday, February 22nd, 2013

The Obama administration:

has directed Federal agencies with more than $100M in R&D expenditures to develop plans to make the published results of federally funded research freely available to the public within one year of publication and requiring researchers to better account for and manage the digital data resulting from federally funded scientific research

A similar policy has been in place for NIH funded research for several years, and more are in the works around the world.

Peter Suber, as far as I can tell the preeminent chronicler of the Open Access (OA) movement, and one of its primary activists, seems to have the go-to summary post.

Congratulations and thanks to all OA activists. I want to take this particular milestone as an occasion to make some exaggerated contrasts between OA and free/libre/open source software (FLOSS). I won’t bother with cultural, educational, and other variants, but assume they’re somewhere in between and lagging overall.

  • OA is far more focused on end products (papers), FLOSS on modifiable forms (source)
  • OA is far more focused on gratis access (available on-line at no cost), FLOSS on removing legal restrictions (via public licenses)
  • OA has a fairly broad conception of info governance, FLOSS focused on class of public licenses, selection within that class
  • OA is far more focused on public and institutional policy (eg mandates like today’s), FLOSS on individual developer and user choices
  • OA is more focused on global ethics (eg access to knowledge in poor regions), FLOSS on individual developer and user ethics

If you’ve followed either movement you can think of exceptions. I suspect the above generalizations are correct as such, but tell me I’m wrong.

Career arrangements are an obvious motivator of some of these differences: science is more institutional and tracked, less varied, relative to programming. Thus where acting on individual ethics alone with regard to publishing is often characterized as suicidal for a scientist, for a programmer it is welcome, but neither extraordinary nor a cause for concern. At the same time, FLOSS people might overestimate the effectiveness of individual choices, merely because they are relatively easy to make and expressive.

One can imagine a universe in which facts are different enough that the characteristics of movements for something like open research and software are reversed, eg no giant institutions and centralized funding, but radical individual ethics for science, dominance of amazing mainframes and push for software escrow for programming. Maybe our universe isn’t that bad, eh?

I do not claim one approach is superior to the other. Indeed I think there’s plenty each can learn from the other. Tip-of-the-iceberg examples: I appreciate those making FLOSS-like demands of OA, think those working on government and institutional policy in FLOSS should be appreciated much more, and the global ethical dimension of FLOSS, in particular with regard to A2K-like equality implications, badly needs to be articulated.

Beyond much needed learning and copying of strategies, some of those involved in OA and FLOSS (and those in between and lagging) might better appreciate each other’s objectives and their commonalities, and actively collaborate. All ignore computational dominance of everything at their peril, and software people self-limit, self-marginalize, even self-refute by limiting their ethics and action to software.

“Commoning the noosphere” sounds anachronistic, but is yet to be, and I suspect involves much more than a superset of OA and FLOSS strategy and critique.

Open Knowledge Foundation

Wednesday, February 13th, 2013

I used to privately poke fun at the Open Knowledge Foundation for what seemed like a never-ending stream of half-baked projects (and domains, websites, lists, etc). I was wrong.

(I have also criticized OKF’s creation of a database-specific copyleft license, but recognize its existence is mostly Creative Commons’ fault, just as I criticize some of Creative Commons’ licenses but recognize that their existence is mostly due to a lack of vision on the part of free software activists.)

Some of those projects have become truly impressive (e.g. the Public Domain Review and CKAN, the latter being a “data portal” deployed by numerous governments in direct competition with proprietary “solutions”; hopefully my local government will eventually adopt the instance OpenOakland has set up). Some projects once deemed important seem relatively stagnant, but were way ahead of their time, if only because the non-software free/open universe painfully lags software (e.g. KnowledgeForge). I haven’t kept track of most OKF projects, but whichever ones haven’t succeeded wildly don’t seem to have caused overall problems.

Also, in the past couple years, OKF has sprouted local groups around the world.

Why has the OKF succeeded, despite what seemed to me for a time chaotic behavior?

  • It knows what it is doing. Not necessarily in terms of having a solid plan for every project it starts, but in the more fundamental sense of knowing what it is trying to accomplish, grounded by its own definition of what open knowledge is (unsurprisingly it is derived from the Open Source Definition). I’ve been on the advisory council for that definition for most of its existence, and this year I’m its chair. I wrote a post for the OKF blog today reiterating the foundational nature of the definition and its importance to the success of OKF and the many “open” movements in various fields.
  • It has been a lean organization, structured to be able to easily expand and contract in terms of paid workers, allowing it to pursue on-mission projects rather than be dominated by permanent institutional fundraising.
  • It seems to have mostly brought already committed open activists/doers into the organization and its projects.
  • The network (eg local groups) seems to have grown fairly organically, rather than from a top-down vision to create an umbrella that all would attach themselves to, which I would view with great skepticism.

OKF is far from perfect. In particular I think it is too detached from free/open source software, to the detriment of open data and reducing my confidence that it will, through action and recruitment, stay on a fully Open course; one of their more ironic practices at this moment is the Google map at the top of their local groups page [Update: already fixed, see comments]. But it is an excellent organization, at this point probably the single best connection to all things Open, irrespective of field or geography.

Check them out online, join or start a local group, and if you’re interested in the minutiae of whether particular licenses for intended-to-be-open culture/data/education/government/research works are actually open, help me out with OKF’s OpenDefinition.org project.

Public Domains Day

Tuesday, January 1st, 2013

Points 1-4 of my year-ago post, Which counterfactual public domain day?, hold up well, but number 5 could be improved: it concerns optimal copyright term, a rather narrow issue, and one viewed from an unhealthy side.

Instead, consider that in common language, and presumably to most people, “in the public domain” means something like “revealed to the public” or “not secret”, as the first definition currently presented by Google reflects:

pub·lic do·main
noun
public domains, plural

  1. The state of belonging or being available to the public as a whole
  2. Not subject to copyright
    the photograph had been in the public domain for 15 years
    public-domain software
  3. Public land
    a grazing permit on public domain

It’s not clear how Google’s computers selected those definitions, but they did a good job: “intellectual property” focused definitions seem to have largely crowded out the common usage in written down definitions.

The common “available to the public as a whole” understanding reflects why I have more recently been careful to stress that copyright policy is a small part of information policy, and that reducing copyright restrictions (anti-sharing regulation), all the way to abolition, is in this broader context a moderate reform. More thoroughgoing reform would have to consider pro-sharing regulation (as I’ve said many times, broadly construed; choose the mechanisms that fit your ideological commitments), such as requiring information revelation, eg of computer program source code.

People curating and promoting works not subject to copyrestriction, information preservationists, leakers, transparency activists, and many others provide various sorts of pro-public-domain regulation. But I especially want to recognize enforcers of copyleft regulation as benefiting (though problematically) the commonly understood public domain, and in the most important field (computation is suffusing everything, security through obscurity isn’t, etc).

Happy Public Domains Day. I offer a cornucopia of vague jokes, indeed.

Future of culture & IP & beating of books in San Jose, Thursday

Tuesday, November 13th, 2012

I’m looking forward to this “in conversation” event with artist Stephanie Syjuco. The ZERO1 Garage is a neat space, and Syjuco’s installation, FREE TEXTS: An Open Source Reading Room, is just right.

For background on my part of the conversation, perhaps read my bit on the future of copyright and my interview with Lewis Hyde, author of at least one of the treated FREE TEXTS (in the title of this post “beating of books” is a play on “beating of bounds”; see the interview, one of my favorite posts ever to the Creative Commons blog).

One of the things that makes FREE TEXTS just right is that “IP” makes for a cornucopia of irony (Irony Ponies for all?), and one of the specialty fruits therein is literature extolling the commons and free expression and problematizing copyright … subject to unmitigated copyright and expensive in time and/or money to access, let alone modify.

Even when a text is in-theory offered under a public license, thus mitigating copyright (but note, it is rare for any such mitigation to be offered), access to a digital copy is often frustrated, and access to a conveniently modified copy, almost unknown. The probability of these problems occurring reaches near certainty if a remotely traditional publisher is involved.

Two recent examples that I’m second-hand familiar with (I made small contributions to each): All chapters of Wealth of the Commons (Levellers Press, 2012), with the exception of one, are released under the CC-BY-SA license, but only a paper version of the book is now available. I understand that digital copies (presumably for sale and gratis) will be available sometime next year. Some chapters are now available as HTML pages, including mine. The German version of the book (Transcript, 2012), published earlier this year with a slightly different selection of essays, is all CC-BY-SA and available in whole as a PDF, and some chapters as HTML pages, again including mine (though if one were to nitpick, the accompanying photo under CC-BY-NC-SA is incongruous).

The Social Media Reader (New York University Press, 2012) consists mostly of chapters under free licenses (CC-BY and CC-BY-SA) and a couple under CC-BY-NC-SA, with the collection under the last. Apparently it is selling well for such a book, but digital copies are only available with select university affiliation. Fortunately someone uploaded a PDF copy to the Internet Archive, as the licenses permit.

In the long run, these can be just annoyances and make-work, at least to the extent the books consist of material under free licenses. Free-as-in-freedom does not have to mean free-as-in-price. Even without any copyright mitigation, it’s common for digital books to be made available in various places, as FREE TEXTS highlights. Under free licenses, it becomes feasible for people to openly collaborate to make improved, modifiable, annotatable versions available in various formats. This is currently done for select books at Wikibooks (educational, neutral point of view, not original research) and Wikisource (historically significant). I don’t know of a community for this sort of work on other classes of books, but I’d be happy to hear of such, and may eventually have to start doing it if not. Obvious candidate platforms include Mediawiki, Booktype, and source-repository-per-book.

You can register for the event (gratis) in order to help determine seating and refreshments. I expect the conversation to be considerably more wide ranging than the above might imply!

CODATA

Saturday, November 10th, 2012

Last week I attended CODATA 2012 in Taipei, the biennial conference of the Committee on Data for Science and Technology. I struggled a bit with deciding to go: I am not a “data scientist” nor a scientist, and while I know a fair amount about some of the technical and policy issues for data management, specific application to science has never been my expertise; it’s all away from my current focus, and I’m skeptical of travel.

I finally went in order to see through a session on mass collaboration data projects and policies that I developed with Tyng-Ruey Chuang and Shun-Ling Chen. A mere rationalization as they didn’t really need my presence, but I enjoyed the conference and trip anyway.

My favorite moments from the panel:

  • Mikel Maron said approximately “not only don’t silo your data, don’t silo your code” (see a corresponding bullet in his slides), a point woefully and consistently underestimated and ignored by “open” advocates.
  • Chen’s eloquent polemic closing with approximately “mass collaboration challenges not only Ⓒ but distribution of power, authority, credibility”; I hope she publishes her talk content!

My slides from the panel (odp, pdf, slideshare) and from an open data workshop following the conference (odp, pdf, slideshare).

Tracey Lauriault summarized the mass collaboration panel (all of it, check out the parts I do not mention), including:

Mike Linksvayer, was provocative in stating that copyright makes us stupider and is stupid and that it should be abolished all together. I argued that for traditional knowledge where people are seriously marginalized and where TK is exploited, copyright might be the only way to protect themselves.

I’m pretty sure I only claimed that including copyright in one’s thinking about any topic, e.g., data policy, effectively makes one’s thinking about that topic more muddled and indeed stupid. I’ve posted about this before but consider a post enumerating the ways copyright makes people stupid individually and collectively forthcoming.

I didn’t say anything about abolishing copyright, but I’m happy for that conclusion to be drawn — I’d be even happier for the conclusion to be drawn that abolition is a moderate reform and boring (in no-brainer and non-interesting senses) among the possibilities for information and innovation policies — indeed, copyright has made society stupid about these broader issues. I sort of make these points in my future of copyright piece that Lauriault linked to, but will eventually address them directly.

Also, Traditional Knowledge, about which I’ve never posted, unless you count my claim that malgovernance of the information commons is ancient, for example cult secrets (mentioned in the first paragraph of the previous link), though I didn’t have contemporary indigenous peoples in mind. TK covers a wide range of issues. Indeed, my instinct is to divide these between issues where traditional communities are being excluded from their heritage (e.g., plant patents, institutionally-held data and items, perhaps copyrestricted cultural works building on traditional culture) and those where they would like to have a collective right to exclude information from the global public domain.

The theme of CODATA 2012 was “Open Data and Information for a Changing Planet” and the closing plenary appropriately aimed to place the entire conference in that context, and to question its impact and followup. That included the inevitable question of whether anyone would notice. At the beginning of the conference attendees were excitedly encouraged to tweet, and if I understood correctly, some conference staff were dedicated to helping people tweet. As usual, I find this sort of exhortation and dedication of resources to social media scary. But what about journalists? How can we make the media care?

Fortunately for (future) CODATA and other science and data related events, there’s a great answer (usually there isn’t one), but one I didn’t hear mentioned at all outside of my own presentation: invite data journalists. They could learn a lot from other attendees, have a meta story about exactly the topic they’re passionate about, and an inside track on general interest data-driven stories developing from data-driven science in a variety of fields — for example the conference featured a number of sessions on disaster data. Usual CODATA science and policy attendees would probably also learn a lot about how to make their work interesting for data journalists, and thus be able to celebrate rather than whinge when talking about media. A start on that learning, and maybe ideas for people to invite might come from The Data Journalism Handbook (disclaimer: I contributed what I hope is the least relevant chapter in the whole book).

Someone asked how to move forward and David Carlson gave some conceptually simple and very good advice, paraphrased:

  • Adopt an open access data publishing policy at the inception of a project.
  • Invest in data staff — human resources are the limiting factor.
  • Start publishing and doing small experiments with data very early in a project’s life.

Someone also asked about “citizen science”, to which Carlson also had a good answer (added to by Jane Hunter and perhaps others), in sum roughly:

  • Community monitoring (data collection) may be a more accurate name for part of what people call citizen science;
  • but the community should be involved in many more aspects of some projects, up to governance;
  • don’t assume “citizen scientists” are non-scientists: often they’ll have scientific training, sometimes full-time scientists contributing to projects outside of work.

What would have brought this full circle (and very much aligned with the conference’s theme and Carlson’s first recommendation above) is consideration of scientist-as-citizen. Fortunately I had serendipitously titled my “open data workshop” presentation for the next day “Open data policy for scientists as citizens and for citizen science”.

Finally, “data citation” was another major topic of the conference, but semantic web/linked open data was not explicitly mentioned much, as someone observed in the plenary. I tend to agree, but I may have missed the most relevant sessions; they would have been my focus if I were actually working in the field. I did really enjoy happening to sit next to Curt Tilmes at a dinner, and catching up a bit on W3C Provenance (which I’ve mentioned briefly before), of which he is a working group member.

I got to spend a little time outside the conference. I’d been to Taipei once before, but failed to notice its beautiful setting — surrounded and interspersed with steep and very green hills.

I visited the National Palace Museum with Puneet Kishor. I know next to nothing about feng shui, but I was struck by what seemed to be an ultra-favorable setting taking advantage of some of the aforementioned hills (it made me think of feng shui, which I have never before done without someone else bringing it up). I think the more one knows about Chinese history the more one would get out of the museum, but for someone who loves maps, the map room alone is worth the visit.

It was also fun hanging out a bit with Christopher Adams and Sophie Chiang, catching up with Bob Chao and seeing the booming Mozilla Taiwan offices, and meeting Florence Ko, Lucien Lin, and Rock of Open Source Software Foundry and Emily from Creative Commons Taiwan.

Finally, thanks to Tyng-Ruey Chuang, one of the main CODATA 2012 local organizers, and instigator of our session and workshop. He is one of the people I most enjoyed working with while at Creative Commons (e.g., a panel from last year) and given some overlapping technology and policy interests, one of the people I could most see working with again.

Video hosting longevity experiment

Friday, October 12th, 2012

Some friends have been working on a to-be-federated media hosting web application called MediaGoblin. They’re running a crowdfunding campaign to pay for more developer time. Please join me in feeding the developers.

For irony and goading (noted and fixed before I could get this post up), an appreciation (but not implementation) of POSSE (probably complementary to federation, but a different take on not entirely the same problem), and a test of hosting (which includes identifiers) permanence, I uploaded their campaign video to various places. I’ve ordered them below by my guess at longevity, from high to low (* next to those I did not upload).


  • Internet Archive.
  • YouTube.
  • BitTorrent magnet link, pirate bay listing.
  • Commented-out FSF static hosting found in source of the campaign page.*
  • MediaGoblin instance at goblin.se.*
  • My MediaGoblin instance.
  • CDN-hosted files found in source of the campaign page.*

Open Data nuance

Sunday, October 7th, 2012

I’m very mildly annoyed with some discussion of “open data”, in part where it is treated as an amorphous thing for which expectations must be managed, value found, and sustainable business models (perhaps marketplaces) invented, all with an abstract and tangential relationship to software, or “IT”.

All of this was evident at a recent Open Knowledge Foundation meetup at the Wikimedia Foundation offices — but perhaps only evident to me, and I do not really intend to criticize anyone there. Their projects are all great. Nonetheless, I think very general discussion about open data tends to be very suboptimal, even among experts. Perhaps this just means general discussion is suboptimal, except as an excuse for socializing. But I am more comfortable enumerating peeves than I am socializing:

  • “Open” and “data” should sometimes be considered separately. “Open” (as in anyone can use for any purpose, as opposed to facing possible legal threat from copyright, database, patent and other “owners”, even their own governments, and their enforcement apparatuses) is only an expensive policy choice if pursued at too low a level, where rational ignorance and a desire to maintain every form of control and conceivable revenue stream rule. Regardless of “open” policy, or lack thereof, any particular dataset might be worthwhile, or not. But this is the most minor of my annoyances. It is even counterproductive to consider, most of the time — due to the expense of overcoming rational ignorance about “open” policy, and of evaluating any particular dataset, it probably makes a lot of sense to bundle “open data” and agitate for as much data to be made available under as good of practices as possible, and manage expectations when necessary.
  • To minimize the need to make expensive evaluations and compromises, open data needs to be cheap, preferably a side-effect of business as usual. Cheapness requires automation requires software requires open source software, otherwise “open data” institutions are themselves not transparent, are hostage to “enterprise software” companies, and are severely constrained in their ability to help each other, and to be helped by their publics. I don’t think an agitation approach is optimal (I recently attended an OpenOakland meeting, and one of the leaders said something like “we don’t hate proprietary software, but we do love open source”, which seems reasonable) but I am annoyed nevertheless by the lack of priority and acknowledgement given to software by “open data” (and even more so, open access/content/education/etc) folk in general strategic discussions (though, in action, the Open Knowledge Foundation is better, having hatched valuable open source projects needed for open data). Computation rules all!
  • A “data marketplace” should not be the first suggestion, or even metaphor, for how to realize value from open data — especially not in the offices of the Wikimedia Foundation. Instead, mass collaboration.
  • Open data is neither necessary nor sufficient for better governance. Human institutions (inclusive of “public”, “private”, or any other categorization you like) have been well governed and atrociously governed throughout recorded history. Open data is just another mechanism that in some cases might make a bit of a difference. Another tool. But speaking of managing expectations, one should expect and demand good governance, or at least less atrocity, from our institutions, completely independent of open data!

“Nuance” is a vague joke in lieu of a useful title.

Diocese of Springfield, Illinois ©ensors criticism of its Bishop Paprocki

Sunday, October 7th, 2012

I recognize the rhetorical value of pointing out that copyright can be used for unambiguous censorship but I try to avoid doing so myself: “can be used for” downplays “is”. But the following is too good to let pass.


Paprocki made a video sermon in which he says that voting Democrat puts one’s soul at risk, while disclaiming telling anyone how to vote.

Brian Tashman posted a criticism of Paprocki’s video, including (I surmise [Update: I was probably wrong; looking at the post again, I’m changing my guess to verbatim excerpt]) a video of himself criticizing Paprocki’s statements, including relevant excerpts of Paprocki’s video. I found Tashman’s post and video via a post titled This Week in God, where I noticed the embedded YouTube video frame said:

Bishop Paprocki: Voting Dem…This video is no longer available due to a copyright claim by Diocese of Springfield in Illinois.

I’m going to guess that Tashman’s use of the Paprocki video sermon was very clearly fair use. But even if the entire video was included verbatim, it’d be a zero diff parody. If you want to watch that, the original is linked above, and excerpted and uncut (but grainy, with additional watermark) versions posted by Paprocki fans remain on YouTube.

I don’t see how Paprocki’s statements could be electioneering, as nobody believes in eternal salvation or damnation, right? In case I’m wrong, some are using the opportunity to call for revoking the Diocese of Springfield’s tax exempt status.

(I grew up in Springfield, Illinois and heard they were getting a curious Catholic bishop last year, one who promotes exorcisms and says that sex abuse lawsuits are the work of the devil — not sex abuse, but lawsuits intended to redress the abuse. That’s Paprocki. But I’m not poking fun at Springfield. Salvatore Cordileone was recently promoted from Oakland bishop to San Francisco archbishop, shortly after demonstrating that the blood of Christ does intoxicate and can result in a DUI. Furthermore, I empathize with Paprocki. If I believed abortion were mass murder and homosexuality an abomination, I would feel compelled to risk mere tax benefits in order to tell people to vote against candidates who I perceived as being for murder and abomination. Indeed, I must tell you to not vote for Romney or Obama, as they both favor mass murder and abomination performed by the U.S. security state: murder, murder, murder, torture, and mass incarceration. But I’m rooting for Obama, as I suspect he favors a little less torture.)

Innocence of _ sharing, remix, and annotation contest

Friday, September 21st, 2012

The term Streisand effect to denote “an attempt to hide or remove a piece of information has the unintended consequence of publicizing the information more widely” rubs me the wrong way (perhaps because I sense homage in critique, in this case perhaps to pop culture fame) but it seems an apt description of reactions to Innocence of Muslims.

I watched (see above) the trailer. If I didn’t know that lots of people were upset about it, I’d class it as camp. I have a hard time not viewing it as such. People claiming it is disgusting and with no artistic merit are expressing some kind of tiresome responsibility and solidarity. Artistically, the trailer seems so comically bad that it’s good.

Because it is so bad/good, and now famous, I’m guessing that Innocence will spark lots of “remix culture” (another dreadful term, oh well). Some obvious things to watch for:

  1. Sharing the trailer many places besides YouTube, the centrality of which warps discussions of free speech.
  2. Leaking of the whole (apparently 74 minute) film.
  3. The 74 minute film may not exist, but this doesn’t mean a 74 minute Innocence can’t be created. An early attempt to do so seems to simply loop the trailer and perhaps add in some news footage.
  4. It seems that the trailer constantly makes reference to historic events or religious text passages, but lacking detailed knowledge of the relevant history and books, I find they all go over my head. Annotations indicating the events and passages referred to, and further material supporting or refuting their interpretations in the film, would be very helpful.
  5. Given the generic campy-actors-hanging-out-in-a-desert scenes that dominate the trailer, and as suggested by the use of overdubbing in the original, it shouldn’t be too hard to repurpose the material for films supporting (or opposing) every desert-origin religion (there are many; bonus for any of the vast majority without current adherents) or merely for depicting family feuds and other soap operatic themes set in a desert.
  6. The most currently valuable and pertinent remix would be a historical allegory, in which the marauder/murderer/rapist/torturer figures represent the current U.S.-led terror war.
  7. There are many bad, bad/good, and perhaps some good, desert-religion films which could be used to supplement material from Innocence for any of the above. The ethnicity of the actors is aligned with lots of USian portrayals, especially older ones.
  8. There’s one scene of a man bound to a pole that could be plausibly reinterpreted as the Christ (ignoring that implausibility) and added to The Mashin’ of.

Contest? Winners, should any appear, may receive a gratis link from this post.