Post Bitzi

z3R01P

Monday, October 14th, 2013

Video from my conversation with Stephanie Syjuco on “intellectual property & the future of culture” at ZERO1 Garage 11 months ago is available at YouTube and archive.org (direct link to theora encoding).

As expected (see my pre-event post) the setting was great: nice space, thoughtful, well-executed and highly appropriate installation. I enjoyed the conversation; perhaps you will too.

With more time it would’ve been great to talk about some of Syjuco’s other works, many of which deal more or less directly with copying (see also interviews with Syjuco). I don’t think either of us even used the word appropriation. Nor the term “open source”, despite its being in the installation title — for example, why is the intersection of formal Open Source (and similar legally/voluntarily constructed commons) and art, appropriation or otherwise, vanishingly small?

ZERO1 Garage presently holds another “IP” related exhibition, Patent Pending, featuring “artworks by contemporary artists that have either resulted from, or led to, a patent that the artist has either received a patent for or is patent pending.” Sounds problematic! If you’re anywhere near San Jose, I recommend checking out the exhibition and one of its upcoming events — October 17 Post-Natural Properties: The Art of Patented Life Forms and November 1 Does the U.S. Patent System stifle innovation? As I say in the video above, and elsewhere, I hope they also consider equality and freedom.

Notability, deletionism, inclusionism, ∞³

Tuesday, January 4th, 2011

For the past couple of years there has been an article about me in English Wikipedia (archived version). It is an ok article. Some of what I’d include isn’t there, and some of what is seems kind of tangential, e.g., talking at a NASA event, which, besides a citation, netted spending the day with an unholy mix of the usual social media suspects and entirely retrograde “we gotta put humans into space because it makes me feel proud to be an American and my daughter might do her math homework!!!” boosters (get real: go robots!) and a sketch. However, overall it is fairly evocative, even the NASA event part. It would be uncool of me to edit it directly, and I’ve been curious to see how it would be improved, translated, vandalized, or deleted, so I haven’t made suggestions. It has mostly just sat there, though it has been curious to see the content copied in various places where Wikipedia content gets copied, and to find that a fair proportion of the people I meet note that I “have a Wikipedia page” — that’s kind of the wrong way to think about it (Wikipedia articles have subjects, not owners), but it is good to know that people can use web search (and that I can tend toward the pedantic).

!#@!@^% deletionists are ruining Wikipedia. They’ll be the first against the wall when the revolution comes.
http://memesteading.com/2010/03/15/dialectical-inclusionism/

The one thing that I have said about the article about me on English Wikipedia, until now, has been this, on my (not precisely, but more so “mine”) user page on English Wikipedia: “I am the subject of Mike Linksvayer, which I would strongly advocate deleting if I were a deletionist (be my guest).” I’ve thought about pulling some kind of stunt around this, for example, setting up a prediction market contract on whether the article about me would be deleted in a given timeframe, but never got around to it. Anyway, last week someone finally added an Articles for Deletion notice to the article, which sets up a process involving discussion of whether the article ought to be deleted (crickets so far). When rough consensus is reached, an admin will delete the article, or the notice will be removed.

I’m not a fan of deletionism (more below), but given the current rules around notability, I range from somewhat questionable as an English Wikipedia article subject (using the general summary of notability, which is easy to interpret charitably: “A person is presumed to be notable if he or she has received significant coverage in reliable secondary sources that are independent of the subject.”) to unquestionably non-notable (under any less charitable interpretation, which presumably any “deletionist” would use, thus my user page statement). The person who added the Articles for Deletion notice may not have done any research beyond what is already linked in the article (more on that general case below), but I must admit, his critique of the citations in the article is fairly evocative, just as the article is:

We have three sources from Creative Commons (primary), a paragraph in a CNET news article where he does his job and encourages scientists to use CC licenses, one IHT article about veganism that mentions him for a couple of paragraphs, and a link to his Wikipedia userpage. That is not enough for notability, in my opinion.

The IHT (actually first in the NYT) article was about calorie restriction, not veganism, but that’s a nitpick. Most of the “media” items my name has appeared in are indeed about Creative Commons, i.e., me doing my job, not me as primary subject, or in a few cases, about calorie restriction, with me as a prop. Or they’re blogs — ok that one is even less notable than most blogs, but at least it’s funny, and relevant — and podcasts. The only item (apart from silly blog posts) I’ve appeared in that I’m fond of, and that I’d be tickled to see added as a reference, whether to the current article about me if it squeaks by or to some future article in the event I become a no-brainer as a subject (clearly I aim to, e.g., “[make] a widely recognized contribution that is part of the enduring historical record in his or her specific field”, but even more clearly I haven’t achieved this), is in Swedish (and is still about me doing my job, though perhaps going off-message): check out an English machine translation.

I’m not a fan of deletionism, largely because, as I’ve stated many times, thinking of Wikipedias as encyclopedias doesn’t do the former justice — Wikipedia has exploded the “encyclopedia” category, and that’s a wonderful thing. Wikipedias (and other Wikimedia projects, and projects elsewhere with WikiNature) need to go much further if freedom is to win — but I’m partisan in this, and can appreciate that others appreciate the idea that Wikipedias stick close to the category defined by print encyclopedias, including strong limits on what might be considered encyclopedic.

It also strikes me that if Wikimedia movement priorities include increasing readership and participation that inclusionism is the way to go — greatly increase the scope of material people can find and participate in building. However, I’m not saying anything remotely new — see Deletionism and inclusionism in Wikipedia.

Although I’m “not a fan” I don’t really know how big of a problem deletionism is. In my limited experience, dealing with an Articles for Deletion notice on an article I’ve contributed to is a pain, sometimes motivates substantially improving the article in question, and is generally a bummer when a useful, factual article is deleted — but it isn’t a huge part of the English Wikipedia editing experience.

Furthermore, reading guidelines on notability closely again, they’re more reasonable than I recall — that is, very reasonable, just not the radical inclusionism I prefer. To the extent that deletionism is a problem, my guess now is that it could be mitigated by following the guidelines more closely, not loosening them — start with adding a {{notability}} tag, not an Articles for Deletion notice, ask for advice on finding good sources, and make a good faith effort to find good sources — especially for contemporary subjects, this is really simple with news/web/scholar/book/video search from Google and near peers. I’m sure this is done in the vast majority of cases — still, the occasional case in which it isn’t done, and initial attempts to find sources and improve an article have to be made during an Articles for Deletion discussion, is kind of annoying.

Some time ago, when thinking about notability, I also wrote the not-to-be-taken-very-seriously Article of the Unknown Notable, which I should probably move elsewhere.

The delicious “dialectical inclusionism” quote above is from Gordon Mohr. Coincidentally, today he announced ∞³, a project “to create an avowedly inclusionist complement to Wikipedia”. There’s much smartness in his post, and this one is already long, so I’m going to quote the entire thing:

Introducing Infinithree (“∞³”)

Wikipedia deletionism is like the weather: people complain, but nobody is doing anything about it. 

I’d like to change that, working with others to create an avowedly inclusionist complement to Wikipedia, launching in 2011. My code name for this project is ‘Infinithree’ (‘∞³’), and this blog exists to collaborate on its creation.

Why, you may ask?

I’ll explain more in future posts – but in a nutshell, I believe deletionism erases true & useful reference knowledge, drives away contributors, and surrenders key topics to cynical spammy web content mills.

If you can already appreciate the value and urgency of this sort of project, I’m looking for you. Here are the broad outlines of my working assumptions:

Infinithree will use the same open license and a similar anyone-can-edit wiki model as Wikipedia, but will discard ‘notability’ and other ‘encyclopedic’ standards in favor of ‘true and useful’.

Infinithree is not a fork and won’t simply redeploy MediaWiki software with inclusionist groundrules. That’s been tried a few times, and has been moribund each time. Negative allelopathy from Wikipedia itself dooms any almost-but-not-quite-Wikipedia; a new effort must set down its roots farther afield.

Infinithree will use participatory designs from the social web, rather than wikibureaucracy, to accrete reliable knowledge. Think StackOverflow or Quora, but creating declarative reference content, rather than interrogative transcripts.

Sound interesting? Can you help? Please let me know what you think, retweet liberally, and refer others who may be interested.

For updates, follow @_infinithree on Twitter (note the leading underscore) or @infinithree on Identi.ca.

Infinithree is already very interesting as a concept, and I’m confident in Gordon’s ability to make it non-vapor and extremely interesting (I was one of his co-founders at the early open content/data/mass collaboration service Bitzi — 10 years ago, hard to believe). There is ample opportunity to try different mass collaboration arrangements to create free knowledge. Many have thought about how to tweak Wikipedia culture or software to produce different outcomes, or merely to experiment (I admit that too much of my plodding pondering on the matter involves the public domain↔strong copyleft dimension). I’m glad that Gordon intends ∞³ to be different enough from Wikipedia such that more of the vast unexplored terrain gets mapped, and hopefully exploited. As far as I know is probably the most relevant attempt so far. May there be many more.

MIN US$750k for NIN

Tuesday, March 4th, 2008

The $300 “ultra deluxe edition” of Ghosts I-IV, limited to 2500 copies, sold out in a couple days (I believe released Sunday, no longer available this morning). There are some manufacturing costs, but they don’t appear to be using any precious materials. So if an artist typically makes $1.60 on a $15.99 CD sale, profit from sales of the limited edition already matches profit from a CD selling hundreds of thousands of copies.
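The back-of-the-envelope arithmetic behind that claim and the post’s title, as a quick sketch (the $1.60-per-CD figure is the assumption stated above, not a number NIN has published):

```python
# Rough check of the figures above; the $1.60 royalty is an assumption, not NIN's actual deal.
ultra_deluxe_price = 300        # USD per "ultra deluxe edition"
ultra_deluxe_copies = 2500      # limited run, sold out

gross = ultra_deluxe_price * ultra_deluxe_copies
print(f"Ultra deluxe gross: ${gross:,}")    # $750,000 -- the MIN US$750k of the title

artist_cut_per_cd = 1.60        # assumed artist take on a $15.99 CD
equivalent_cd_sales = gross / artist_cut_per_cd
print(f"Equivalent CD sales at $1.60/CD: {equivalent_cd_sales:,.0f}")    # 468,750 copies
```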

Then there are non-limited sales of a $75 merely “deluxe edition”, $10 CD, and $5 download, and whatever other products NIN comes up with around Ghosts.

The ultra deluxe success seems to me to validate the encouragement by some to pursue large revenue from rabid fans and collectors willing and able to pay for personalization, authenticity, embodiment, etc., rather than attempting to suppress zero cost distribution to the masses.

Speaking of distribution, click on the magnet to search for a fully legal P2P download of Ghosts, assuming you have the right filesharing software installed.

nin_ghosts_I-IV_mp3.zip (283.7 MB)
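For anyone wondering what the magnet encodes: it names the file by content hash rather than by location, so any peer that has the file can serve it. A minimal sketch of the URI structure in Python; the hash below is a placeholder, not the real hash of the Ghosts zip:

```python
# Sketch of a magnet URI like the one above; the hash value is a made-up placeholder.
sha1_base32 = "PLACEHOLDERBASE32SHA1HASHABCDEFG"   # 32-character Base32 SHA-1 (hypothetical)
name = "nin_ghosts_I-IV_mp3.zip"
size_bytes = int(283.7 * 1024 * 1024)

# xt = exact topic (the content hash), dn = display name, xl = exact length in bytes
magnet = f"magnet:?xt=urn:sha1:{sha1_base32}&dn={name}&xl={size_bytes}"
print(magnet)
```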

Sanhattan threatens former Bitzi offices

Tuesday, December 26th, 2006

Two 1,200-foot towers are planned for the northwest corner of 1st and Mission Streets in San Francisco, site of a few run-down buildings, one of which Bitzi had offices in for most of 2001 (spruced up some since then). Will San Francisco planners allow rapacious developers to destroy history? I hope so. Onward to Sanhattan!

Via SF Cityscape.

LimeWire Filtering & Blog

Wednesday, March 29th, 2006

Just noticed that the current LimeWire beta (4.11.0) includes optional copyright filtering. See the features history and brief descriptions for users and copyright owners:

In the Filtering System, copyright owners identify files that they don’t want shared and submit them for inclusion in a public list. LimeWire then consults this list and stops users from downloading the identified files “filtering” them from the sharing process.

If you sign up for an account as a copyright owner you can submit files (with file name, file size, SHA1 hash, creator, collection, description) for filtering. Users can turn the filter on and off via a preference.
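In other words, the filter amounts to a lookup of a shared file’s hash against the public list before a download is allowed, with a per-user switch. A conceptual sketch (my illustration, not LimeWire’s actual code or data format):

```python
# Conceptual sketch of hash-based copyright filtering; not LimeWire's implementation.
filter_list = {
    # hypothetical SHA1 hash -> metadata supplied by the copyright owner
    "ABCDEF0123456789ABCDEF0123456789ABCDEF01": {
        "file_name": "example_track.mp3",
        "file_size": 4_567_890,
        "creator": "Example Artist",
        "collection": "Example Album",
        "description": "Submitted for filtering by the rights holder",
    },
}

def allow_download(sha1_hash: str, filtering_enabled: bool = True) -> bool:
    """Return False if filtering is on and the file's hash is on the public list."""
    return not (filtering_enabled and sha1_hash in filter_list)

print(allow_download("ABCDEF0123456789ABCDEF0123456789ABCDEF01"))         # False: filtered
print(allow_download("ABCDEF0123456789ABCDEF0123456789ABCDEF01", False))  # True: user turned the filter off
```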

LimeWire.org now features a blog with pretty random content. I notice that another PHP Base32 function (which makes a whole lot more sense than the one included in Bitcollider-PHP — I swear PHP’s bitwise operators weren’t giving correct results and I worked around that, but I was probably insane) is available, with a hint that someone is building an “open source Gnutella Server in PHP5.”
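For reference, the Base32 encoding in question (as used in urn:sha1 identifiers) is a one-liner in languages that ship it in the standard library; a sketch in Python rather than PHP:

```python
import base64
import hashlib

# Base32-encode a SHA-1 digest, as used in urn:sha1 / magnet identifiers.
digest = hashlib.sha1(b"example file contents").digest()     # 20 bytes
sha1_base32 = base64.b32encode(digest).decode("ascii")       # exactly 32 Base32 characters, no padding
print(f"urn:sha1:{sha1_base32}")
```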

Remember that LimeWire is Open Source P2P and thus pretty trustworthy — and you can always fork.

MusicBrainzDNS

Sunday, March 12th, 2006

Congratulations to MusicBrainz for taking care of a longstanding substandard feature — a proprietary and not very scalable acoustic fingerprinting technology (Relatable TRM). Today MusicBrainz announced integration with MusicIP’s MusicDNS fingerprinting service; full details are in the announcement.

Funny thing, I just cleared all of the (old, mostly gathered in 2001) TRM tags from my music files a couple weeks ago.

Creative Commons license tracking is also now enabled at both MusicBrainz and MusicIP, no doubt more on that at the CC weblog in the near future.

Belated congratulations to MusicBrainz for signing their first commercial deal in January.

I wrote some about MusicBrainz about 15 months ago. I predict the next 15 months will be very good for what I’ll call “open music infrastructure.”

Bitzi as Tagging 1.0 Metacrap

Sunday, March 12th, 2006

On the Tagging 2.0 panel, Bitzi was just cited as (more or less) a non-successful predecessor to Tagging 2.0 applications, the speaker saying something like “things like Bitzi (mumble) what Cory Doctorow called metacrap.”

Vander Wal recently explained in a comment at Joho the Blog:

The big thing that was different, from say Bitzi, was people tagging information in their own vocabulary for their own reuse. Tagging information for others as a priority seems to make it far less accurate as a person may not understand the terms they are using (well understand them as other may).

He’s right. There’s too little private benefit to “tagging” at Bitzi, largely because the interfaces to what you have individually contributed are lame, to the extent they exist at all. The Bitzi use case is rather different from those of the Tagging 2.0 applications, but it can learn a lot from them.

CodeCon Friday

Saturday, February 11th, 2006

This year Gordon Mohr had the devious idea to do preemptive reviews of CodeCon presentations. I’ll probably link to his entries and have less to say here than last year.

Daylight Fraud Prevention. I missed most of this presentation, but it seems they have a set of non-open source Apache modules, each of which could make phishers and malware creators work slightly harder.

SiteAdvisor. Tests a website’s evilness by downloading and running software offered by the site and filling out forms on the site that request an email address. If a virtual Windows machine running the downloaded software becomes infected, or an email address set up for the test is inundated with spam, the site is considered evil. This testing is mostly automated and expensive (many Windows licenses). Great idea, surprising it is new (to me). I wonder how accurate an evil reading one could obtain at much lower cost by calculating a “SpamRank” for sites based on links found in email classified as spam and links found on pages linked to in spams. (A paper has already taken the name SpamRank, though at a five second glance it looks to propose tweaks to make PageRank more spam-resistant rather than trying to measure evil.) Fortunately SiteAdvisor says that both bitzi.com and creativecommons.org are safe to use. SiteAdvisor’s data is available for use under the most restrictive Creative Commons license — Attribution-NonCommercial-NoDerivs 2.5.
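To make that wondering slightly more concrete, a naive version (my sketch, not anything SiteAdvisor does, and far simpler than what the SpamRank paper proposes) would just tally how often each domain shows up in links extracted from messages a spam filter has already flagged:

```python
import re
from collections import Counter
from urllib.parse import urlparse

URL_RE = re.compile(r"""https?://[^\s"'<>]+""")

def naive_spamrank(spam_messages):
    """Tally how often each domain appears in links found in known-spam messages."""
    counts = Counter()
    for message in spam_messages:
        for url in URL_RE.findall(message):
            domain = urlparse(url).netloc.lower()
            if domain:
                counts[domain] += 1
    return counts

# Hypothetical corpus of messages already flagged as spam.
spam = [
    "Cheap meds at http://evil-pharma.example/buy now!!!",
    "You won! Claim at http://evil-pharma.example/claim",
    "Unsubscribe: http://harmless.example/opt-out",
]
print(naive_spamrank(spam).most_common())
# [('evil-pharma.example', 2), ('harmless.example', 1)]
```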

VidTorrent/Peers. Streaming joke. Peers, described as a “toolkit for P2P programming with continuation passing style” that I gather works syntactically as a Python code preprocessor, could be interesting. I wish they had compared Peers to other P2P toolkits.

Localhost. A global directory shared with a modified version of the BitTorrent client. I tried it about a month ago. Performance was somewhere between abysmal and nonexistent. BitTorrent is fantastic for large popular files. I’ll be surprised if localhost’s performance, which depends on transferring small XML files, ever reaches mediocrity. They’re definitely going away from BitTorrent’s strengths by uploading websites into the global directory as lots of small files (I gather). The idea of a global directory is interesting, though tags seem a more fruitful navigation method than localhost’s hierarchy.

Truman. A “sandnet” for investigating suspected malware. Faux services (e.g., DNS, websites) can be scripted to elicit the suspected malware’s behavior, and more.

Redefining light and dark

Monday, November 28th, 2005

The wily Lucas Gonze is at it again, defining ‘lightnet’ and ‘darknet’ by example, without explanation. The explanation is so simple that it probably only subtracts from Gonze’s [re]definition, but I’ll play the fool anyhow.

Usually darknet refers to (largely unstoppable) friend-to-friend information sharing. As the name implies, a darknet is underground, or at least under the radar of those who want to prohibit certain kinds of information sharing. (A BlackNet doesn’t require friends and the radar doesn’t work, to horribly abuse that analogy.)

Lightnet, as far as I know, is undefined in this context.*

Anyway, Lucas’ definition-by-example lumps prohibited sharing (friend to friend as well as over filesharing networks) and DRM’d content together as Darknet. Such content is dark to the web. It can’t be linked to, or if it can be, the link will be to a name,** not a location, thus you may not be able to obtain the content (filesharing), or you won’t be able to view the content (DRM).

Lightnet content is light to the web. It can be linked to, retrieved, and viewed in the ways you expect (and by extension, searched for in the way you expect), no law breaking or bad law making required.

* Ross Mayfield called iTunes a lightnet back in 2003. Lucas includes iTunes on the dark side. I agree with Lucas’ categorization, though Ross had a good point, and in a slightly different way was contrasting iTunes with both darknets and hidebound content owners.

** Among other things, I like to think of magnet links as attempting to bridge the gap between the web and otherwise shared content. Obviously that work is unfinished. As is making multimedia work on the web. I think that’s the last time I linked to Lucas Gonze, but he’s had plenty of crafty posts between then and now that I highly recommend following.

SemWeb not by committee

Sunday, March 13th, 2005

At SXSW today Eric Meyer gave a talk on Emergent Semantics. He humorously described emergent as a fancy way of saying grassroots, groundup (from the bottom or like ground beef), or evolutionary. The talk was about adding rel attributes to XHTML <a> elements, or the lowercase semantic web, or Semantic XHTML, of which I am a fan.

Unfortunately Eric made some incorrect statements about the uppercase Semantic Web, or RDF/RDFS/OWL, of which I am also a fan. First, he implied that the lowercase semantic web is to the Semantic Web as evolution is to intelligent design, the current last redoubt of apologists for theism.

Very much related to this analogy, Eric stressed that use of Semantic XHTML is ad hoc and easy to experiment with, while the Semantic Web requires getting a committee to agree on an ontology.

Not true! Just using rel="foo" is equivalent to using a http://example.com/foo RDF property (though the meaning of the RDF property is better defined — it applies to a URI, while the application of the implicit rel property is loose).
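To make the equivalence concrete, here is a sketch (assuming the rdflib library; neither Eric’s talk nor the RDF specs require any particular tooling) of the triple that a rel="foo" link on a page implicitly asserts, using an ad hoc property URI no committee ever approved:

```python
from rdflib import Graph, Namespace, URIRef

# The implicit claim of <a rel="foo" href="http://example.org/bar"> on http://example.org/page:
EX = Namespace("http://example.com/")    # ad hoc vocabulary; no committee approval needed
g = Graph()
g.add((URIRef("http://example.org/page"), EX.foo, URIRef("http://example.org/bar")))

# N-Triples output:
# <http://example.org/page> <http://example.com/foo> <http://example.org/bar> .
print(g.serialize(format="nt"))
```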

In the case of more complex formats, an individual can define something like hCard (lowercase) or vCard-RDF (uppercase).

No committee approval is required in any of the above examples. vCard-RDF happens to have been submitted to the W3C, but doing so is absolutely not required, as I know from personal experience at Bitzi and Creative Commons, both of which use RDF never approved by committee.

At best there may be a tendency for people using RDF to try to get consensus on vocabulary before deployment, while there may be a tendency for people using Semantic XHTML to throw keywords at the wall and see if they stick (however, Eric mentioned that the XFN (lowercase) core group debated whether to include “me” in the first release of their spec). Neither technology mandates either approach. If either of these tendencies exists, it must be cultural.

I think there is value both in the ad hoc culture and, more importantly, the closeness of Semantic XHTML assertions to human-readable markup of the lowercase semantic web, and in the rigor of the uppercase Semantic Web.

It may be useful to transform rel="" assertions to RDF assertions via GRDDL or a GRDDL-inspired XMDP transformation.

I will find it useful to bring RDF into XHTML, probably via RDF/A, which I like to call Hard Core Semantic XHTML.

Marc Canter as usual expressed himself from the audience (and on his blog). Among other things Marc asked why Eric didn’t use the word metadata. I don’t recall Eric’s answer, but I commend him for not using the term. I’d be even happier if we could avoid the word semantic as well. Those are rants for another time.

Addendum: I didn’t make it to the session this afternoon, but Tantek Çelik’s slides for The Elements of Meaningful XHTML are an excellent introduction to Semantic XHTML for anyone familiar with [X]HTML.

Addendum 20050314: Eric Meyer has posted his slides.