Post Semantic Web

Creative Commons hiring CTO

Monday, July 11th, 2011

See my blog post on the CC site for more context.

Also thanks to Nathan Yergler, who held the job for four years. I really miss working with Nathan. His are big shoes to fill, but his work across operations, applications, standards, and relationships also set the foundation for the next CTO to be very successful.

Semantic ref|pingback for re-use notification

Sunday, May 15th, 2011

Going back probably all the way to 2003 (I can’t easily pinpoint, as obvious mail searches turn up lots of hand-wringing about structured data in/for web pages, something which persists to this day), people have suggested using something like trackback to notify a creator that someone has [re]used their work, as encouraged under one of the Creative Commons licenses. Such notification could be helpful: people often would like to know someone is using their work, and notification might provide much better coverage than finding out by happenstance or out-of-band (e.g., email), without costing as much as crawling a large portion of the web and running medium-specific fuzzy matching algorithms over its contents.

In 2006 (maybe 2005) Victor Stone implemented a re-use notification (and a bit more) protocol he called the Sample Pool API. Several audio remix sites implemented it (including ccMixter, for which Victor developed the API; side note: read his ccMixter memoir!), but it didn’t go beyond that, probably in part because it was tailored to a particular genre of sites, and in part because it wasn’t clear how to do it correctly and generally, get adoption, sort out dependencies (see hand-wringing above), and resource/prioritize.

I’ve had in mind to blog about re-use notification for years (maybe I already have, and forgot), but right now I’m spurred to do so by skimming Henry Story and Andrei Sambra’s Friending on the Social Web, which is largely about semantic notifications. Like them, I need to understand what the OStatus stack has to say about this. And I need to read their paper closely.

Ignorance thusly stated, I want to proclaim the value of refback. When one follows a link, one’s user agent (browser) often will send with the request for the linked page (or other resource) the referrer (the page with the link one just followed). In some cases, a list of pages linking to one’s resources that might be re-used can be rather valuable if one wants to bother manually looking at referrers for evidence of re-use. For example, Flickr provides a daily report on referrers to one’s photo pages. I look at this report for my account occasionally and have manually populated a set of my re-used photos largely by this method. This is why I recently noted that the (super exciting) MediaGoblin project needs excellent reporting.
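On the server side, refback collection amounts to little more than recording the Referer header for later review. A minimal sketch, assuming one’s own domain is example.org; the function and store names are hypothetical:

```python
# Hypothetical sketch: collect referrers ("refbacks") from incoming
# request headers so they can later be reviewed for evidence of re-use.
from urllib.parse import urlparse

refback_log = {}  # target path -> set of referring URLs

def collect_refback(headers, target_path):
    """Record the Referer header, skipping empty and same-site referrers."""
    referrer = headers.get("Referer", "")
    if not referrer:
        return None
    host = urlparse(referrer).netloc
    if not host or host == "example.org":  # ignore internal navigation
        return None
    refback_log.setdefault(target_path, set()).add(referrer)
    return referrer
```

A report like Flickr’s is then just a periodic dump of `refback_log`, sorted however is most useful for manual review.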

Some re-use discovery via refback could be automated. My server (and not just my server, contrary to Friending on the Social Web; the job could be outsourced via javascript a la Google Analytics and Piwik) could crawl the referring page and look for structured data indicating re-use (e.g., my page or a resource on it is the subject or object of relevant assertions such as dc:source), automatically tracking re-uses discovered thusly.
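The check itself could be sketched as below: a deliberately naive scan of a referring page’s HTML for an RDFa-style rel="dc:source" link pointing back at one’s resource. A real crawler would fetch the page and use a proper RDFa parser; all names here are illustrative.

```python
# Rough sketch of automated refback verification: given the HTML of a
# referring page, check whether it asserts (via a rel="dc:source" link)
# that one of my resources was its source.
from html.parser import HTMLParser

class SourceLinkFinder(HTMLParser):
    def __init__(self, my_url):
        super().__init__()
        self.my_url = my_url
        self.found = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        rel = a.get("rel") or ""
        if "dc:source" in rel.split() and a.get("href") == self.my_url:
            self.found = True

def asserts_reuse(referrer_html, my_url):
    """True if the referring page claims my_url as a source of its content."""
    finder = SourceLinkFinder(my_url)
    finder.feed(referrer_html)
    return finder.found
```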

A pingback would tell my server (or service I have delegated to) affirmatively about some re-use. This would be valuable, but requires more from the referring site than merely publishing some structured data. Hopefully re-use pingback could build upon the structured data that would be utilized by re-use refback and web agents generally.
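If re-use pingback were modeled on the existing Pingback protocol, the notification would be an XML-RPC call, pingback.ping(sourceURI, targetURI), made by the re-using site to the original work’s server. A sketch that only builds the request body (endpoint autodiscovery and actually sending the request are omitted):

```python
# Build (but do not send) a Pingback-style re-use notification body.
import xmlrpc.client

def build_pingback_request(source_uri, target_uri):
    """Serialize a pingback.ping request for the given source/target URIs."""
    return xmlrpc.client.dumps((source_uri, target_uri),
                               methodname="pingback.ping")
```

The interesting part for re-use notification would be on the receiving end: the target’s server could fetch the source URI and run the same structured-data check used for refback before recording the claimed re-use.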

After doing more reading, I think my plan is to file the appropriate feature requests for MediaGoblin, which seems the ideal software to finally progress these ideas. A solution also has obvious utility for oft-mooted [open] data/education/science scenarios.

Collaborative Futures 3

Thursday, January 21st, 2010

Day 3 of the Collaborative Futures book sprint and we’re close to 20,000 words. I added another chapter intended for the “future” section, current draft copied below. It is very much a scattershot survey based on my paying partial attention for several years. There’s nothing remotely new apart from recording a favorite quote from my colleague John Wilbanks that doesn’t seem to have been written down before.

Continuing a tradition, another observation about the sprint group and its discussions: an obsession with attribution. A current draft says attribution is “not only socially acceptable and morally correct, it is also intelligent.” People love talking about this and glomming on all kinds of other issues, including participation and identity. I’m counter-obsessed (which Michael Mandiberg pointed out means I’m still obsessed).

Attribution is only interesting to me insofar as it is a side effect (and thus low cost) and adds non-moralistic value. In the ideal case, it is automated, as in the revision histories of wiki articles and version control systems. In the more common case, adding attribution information is a service to the reader — nevermind the author being attributed.

I’m also interested in attribution (and similar) metadata that can easily be copied with a work, making its use closer to automated — Creative Commons provides such metadata if a user choosing a license provides attribution information, and CC license deeds use that metadata to provide copy&pastable attribution HTML, hopefully starting a beneficent cycle.

Admittedly I’ve also said many times that I think attribution, or rather requiring (or merely providing in the case of public domain content) attribution by link specifically, is an undersold term of the Creative Commons licenses — links are the currency of the web, and this is an easy way to say “please use my work and link to me!”

Mushon Zer-Aviv continues his tradition for day 3 of a funny and observant post, but note that he conflates attribution and licensing, perhaps to make a point:

The people in the room have quite strong feelings about concepts of attribution. What is pretty obvious by now is that both those who elevate the importance of proper crediting to the success of collaboration and those who dismiss it all together are both quite equally obsessed about it. The attribution we chose for the book is CC-BY-SA oh and maybe GPL too… Not sure… Actually, I guess I am not the most attribution obsessed guy in the room.

Science 2.0

Science is a prototypical example of collaboration, from closely coupled collaboration within a lab to the very loosely coupled collaboration of the grand scientific enterprise over centuries. However, science has been slow to adopt modern tools and methods for collaboration. Efforts to adopt or translate new tools and methods have been broadly (and loosely) characterized as “Science 2.0” and “Open Science”, very roughly corresponding to “Web 2.0” and “Open Source”.

Open Access (OA) publishing is an effort to remove a major barrier to distributed collaboration in science — the high price of journal articles, effectively limiting access to researchers affiliated with wealthy institutions. Access to Knowledge (A2K) emphasizes the equality and social justice aspects of opening access to the scientific literature.

The OA movement has met with substantial and increasing success recently. The Directory of Open Access Journals lists 4,583 journals as of 2010-01-20. The Public Library of Science’s top journals are in the first tier of publications in their fields. Traditional publishers are investing in OA, such as Springer’s acquisition of the large OA publisher BioMed Central, or experimenting with OA, for example Nature Precedings.

In the longer term OA may lead to improving the methods of scientific collaboration, e.g., peer review, and to allowing new forms of meta-collaboration. An early example of the former is PLoS ONE, a rethinking of the journal as an electronic publication without a limitation on the number of articles published and with the addition of user rating and commenting. An example of the latter would be machine analysis and indexing of journal articles, potentially allowing all scientific literature to be treated as a database, and therefore queryable — at least all OA literature. These more sophisticated applications of OA often require not just access, but permission to redistribute and manipulate, thus a rapid movement to publication under a Creative Commons license that permits any use with attribution — a practice followed by both PLoS and BioMed Central.

Scientists have also adopted web tools to enhance collaboration within a working group as well as to facilitate distributed collaboration. Wikis and blogs have been repurposed as open lab notebooks under the rubric of “Open Notebook Science”. Connotea is a tagging platform (they call it “reference management”) for scientists. These tools help “scale up” and direct the scientific conversation, as explained by Michael Nielsen:

You can think of blogs as a way of scaling up scientific conversation, so that conversations can become widely distributed in both time and space. Instead of just a few people listening as Terry Tao muses aloud in the hall or the seminar room about the Navier-Stokes equations, why not have a few thousand talented people listen in? Why not enable the most insightful to contribute their insights back?

Stepping back, what tools like blogs, open notebooks and their descendants enable is filtered access to new sources of information, and to new conversation. The net result is a restructuring of expert attention. This is important because expert attention is the ultimate scarce resource in scientific research, and the more efficiently it can be allocated, the faster science can progress.

Michael Nielsen, “Doing science online”

OA and adoption of web tools are only the first steps toward utilizing digital networks for scientific collaboration. Science is increasingly computational and data-intensive: access to a completed journal article may not contribute much to allowing other researchers to build upon one’s work — that requires publication of all code and data used during the research that produced the paper. Publishing the entire “research compendium” under appropriate terms (usually public domain for data, a free software license for software, and a liberal Creative Commons license for articles and other content) and in open formats has recently been called “reproducible research” — in computational fields, the publication of such a compendium gives other researchers all of the tools they need to build upon one’s work.

Standards are also very important for enabling scientific collaboration, and not just coarse standards like RSS. The Semantic Web and in particular ontologies have sometimes been ridiculed by consumer web developers, but they are necessary for science. How can one treat the world’s scientific literature as a database if it isn’t possible to identify, for example, a specific chemical or gene, and agree on a name for the chemical or gene in question that different programs can use interoperably? The biological sciences have taken a lead in implementation of semantic technologies, from ontology development and semantic databases to inline web page annotation using RDFa.
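The interoperability point can be made concrete with a toy sketch: two datasets can only be joined if their various informal names for a gene are first mapped to one agreed-upon canonical identifier. The identifiers and data below are entirely made up for illustration.

```python
# Toy illustration of why shared identifiers matter: literature mentions
# and assay data refer to the same gene by different names; mapping both
# to a canonical identifier is what makes a join possible at all.
synonyms = {  # informal name -> canonical identifier (hypothetical)
    "p53": "GENE:0001",
    "TP53": "GENE:0001",
    "tumor protein 53": "GENE:0001",
}

paper_mentions = [("p53", "paper-A"), ("TP53", "paper-B")]
assay_results = {"GENE:0001": "expression elevated"}

def join_literature_with_data(mentions, results):
    """Link each paper's gene mention to assay data via the canonical ID."""
    joined = []
    for name, paper in mentions:
        gene_id = synonyms.get(name)
        if gene_id in results:
            joined.append((paper, gene_id, results[gene_id]))
    return joined
```

Without the synonym table, “p53” and “TP53” look like different entities and the join silently fails — which is the problem ontologies exist to solve at scale.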

Of course all of science, even most of science, isn’t digital. Collaboration may require sharing of physical materials. But just as online stores make shopping easier, digital tools can make sharing of scientific materials easier. One example is the development of standardized Materials Transfer Agreements accompanied by web-based applications and metadata, potentially a vast improvement over the current choice between ad hoc sharing and highly bureaucratized distribution channels.

Somewhere between open science and business (both as in for-profit business and business as usual) is “Open Innovation” which refers to a collection of tools and methods for enabling more collaboration, for example crowdsourcing of research expertise (a company called InnoCentive is a leader here), patent pools, end-user innovation (documented especially by Erik von Hippel in Democratizing Innovation), and wisdom of the crowds methods such as prediction markets.

Reputation is an important question for many forms of collaboration, but particularly in science, where careers are determined primarily by one narrow metric of reputation — publication. If the above phenomena are to reach their full potential, they will have to be aligned with scientific career incentives. This means new reputation systems that take into account, for example, re-use of published data and code, and the impact of granular online contributions, must be developed and adopted.

From the grand scientific enterprise to business enterprise, modern collaboration tools hold great promise for increasing the rate of discovery, which sounds prosaic, but may be our best tool for solving our most vexing problems. John Wilbanks, Vice President for Science at Creative Commons, often makes the point like this: “We don’t have any idea how to solve cancer, so all we can do is increase the rate of discovery so as to increase the probability we’ll make a breakthrough.”

Science 2.0 also holds great promise for allowing the public to access current science, and even in some cases collaborate with professional researchers. The effort to apply modern collaboration tools to science may even increase the rate of discovery of innovations in collaboration!


Wednesday, December 17th, 2008

December 16 marked six years since the release of the first Creative Commons licenses. Most of the celebrations around the world have already taken place or are going on right now, though San Francisco’s is on December 18. (For CC history before 2002-12-16, see video of a panel recorded a few days ago featuring two of CC’s founding board members and first executive director or read the book Viral Spiral, available early next year, though my favorite is this email.)

I’ve worked for CC since April, 2003, though as I say in the header of this blog, I don’t represent any organization here. However, I will use this space to ask for your support of my and others’ work at CC. We’re nearing the end of our fourth annual fall public fundraising campaign and about halfway to our goal of raising US$500,000. We really need your support — past campaigns have closed out with large corporate contributions, though one has to be less optimistic about those given the financial meltdown and widespread cutbacks. Over the longer term we need to steadily decrease reliance on large grants from visionary foundations, which still contribute the majority of our funding.

Sadly I have nothing to satisfy a futarchist donor, but take my sticking around as a small indicator that investing in Creative Commons is a highly leveraged way to create a good future. A few concrete examples follow.

RDFa became a W3C Recommendation on October 14, the culmination of a 4+ year effort to integrate the Semantic Web and the Web that everyone uses. There were several important contributors, but I’m certain that it would have taken much longer (possibly never happened) or produced a much less useful result without CC’s leadership (our motivation was first to describe CC-licensed works on the web, but we’re also now using RDFa as infrastructure for building decoupled web applications and as part of a strategy to make all scientific research available and queryable as a giant database). For a pop version (barely mentioning any specific technology) of why making the web semantic is significant, watch Kevin Kelly on the next 5,000 days of the web.

Wikipedia seems to be on a path to migrating to using the CC BY-SA license, clearing up a major legal interoperability problem resulting from Wikipedia starting before CC launched, when there was no really appropriate license for the project. The GNU FDL, which is now Wikipedia’s (and most other Wikimedia Foundation projects’) primary license, and CC BY-SA are both copyleft licenses (altered works must be published under the same copyleft license, except when not restricted by copyright), and incompatible widely used copyleft licenses are kryptonite to the efficacy of copyleft. If this migration happens, it will increase the impact of Wikipedia, Creative Commons, free culture, and the larger movement for free-as-in-freedom on the world and on each other, all for the good. While this has basically been a six-year effort on the part of CC, FSF, and the Wikimedia Foundation, there’s a good chance that without CC, a worse (fragmented, at least) copyleft landscape for creative works would result. Perhaps not so coincidentally, I like to point out that since CC launched, there has been negative license proliferation in the creative works space, the opposite of the case in the software world.

Retroactive copyright extension cripples the public domain, but there are relatively unexplored options for increasing the effective size of the public domain — instruments to increase certainty and findability of works in the public domain, to enable works not in the public domain to be effectively as close as possible, and to keep facts in the public domain. CC is pursuing all three projects, worldwide. I don’t think any other organization is placed to tackle all of these thorny problems comprehensively. The public domain is not only tremendously important for culture and science, but the only aesthetically pleasing concept in the realm of intellectual protectionism (because it isn’t) — sorry, copyleft and other public licensing concepts are just necessary hacks. (I already said I’m giving my opinion here, right?)

CC is doing much more, but the above are a few examples where it is fairly easy to see its delta. CC’s Science Commons and ccLearn divisions provide several more.

I would see CC as a wild success if all it ever accomplished was to provide a counterexample to be used by those who fight against efforts to cripple digital technologies in the interest of protecting ice delivery jobs, because such crippling harms science and education (against these massive drivers of human improvement, it’s hard to care about marginal cultural production at all), but I think we’re on the way to accomplishing much more, which is rather amazing.

More abstractly, I think the role of creating “commons” (what CC does and free/open source software are examples) in nudging the future in a good direction (both discouraging bad outcomes and encouraging good ones) is horribly underappreciated. There are a bunch of angles to explore this from, a few of which I’ve sketched.

While CC has some pretty compelling and visible accomplishments, my guess is that most of the direct benefits of its projects (legal, technical, and otherwise) may be thought of in terms of lowering transaction costs. My guess is those benefits are huge, but almost never perceived. So it would be smart and good to engage in a visible transaction — contribute to CC’s annual fundraising campaign.

October and beyond

Thursday, October 9th, 2008

Friday (tomorrow) I’m attending the first Seasteading conference in Burlingame. I blogged about seasteading four years ago. Although the originators of the seastead idea are politically motivated, I’d assign a very low probability to them becoming significantly more politically impactful than some of their inspirations (e.g., micronations and offshore pirate radio, i.e., very marginal). To begin with, the seasteading concept has huge engineering and business hurdles to clear before it could make any impact whatsoever. If the efforts of would-be seasteaders lead to the creation of lots more wealth (or even just a new weird culture), any marginal political impact is just gravy. In other words, seasteading is another example of political desires sublimated into useful creation. That’s a very good thing, and I expect the conference to be interesting and fun.

Saturday I’ll be at the Students for Free Culture Conference in Berkeley. You don’t have to be a student to attend. Free culture is a somewhat amorphous concept, but I think an important one. I suspect debates about what free culture means and how to develop and exploit it will be evident at the conference. Some of those are in part about the extent to which political desires should be sublimated into useful creation (I should expand on that in a future post).

October 20-26 I’ll participate in three free culture related conferences back to back.

First in Amsterdam for 3rd COMMUNIA Workshop (Marking the public domain: relinquishment & certification), where I’ll be helping talk about some of Creative Commons’ (I work for, do not represent here, etc.) public domain and related initiatives.

Second in Stockholm for the Nordic Cultural Commons Conference, where I’ll give a talk on free culture and the future of cultural production.

Finally in Gothenburg for FSCONS, where I’ll give an updated version of a talk on where free culture stands relative to free software.

In December at MIT, Creative Commons will hold its second technology summit. Nathan Yergler and colleagues have been making the semantic rubber hit the web road pretty hard lately, and will have lots to show. If you’re doing interesting [S|s]emantic Web or open content related development (even better, both), take a look at the CFP.

More than likely I’ll identicate rather than blog all of these.

Table selection, HSA, LugRadio, Music, Photographers, New Media

Monday, April 21st, 2008

A few observations and things learned from the last eight days.

Go to a page with a table, for example this one (sorry, semi-nsfw). Hold down the control key and select cells. How could I not have known about this!? Unfortunately, copy & paste seems to produce tab-separated values in a single row even when pasting from multiple rows in the HTML table (tried with Firefox and Epiphany). Still really useful when you only want to copy one column of a table; if you want all of the columns, don’t hold down the control key, and row boundaries get newlines as they should rather than tabs. (Thanks Asheesh.)

I feel really stupid about this one. I’ve assumed that a (US) Health Savings Account was a spend-within-the-year-or-lose-your-contributions arrangement, but that’s what a Flexible Spending Account is (I have no predictable medical expenses, so such an account makes no sense for me). An HSA is an investment account much like an IRA, except you can spend from it without penalty upon incurring medical expenses rather than reaching old age. You can only contribute to an HSA while enrolled in a high-deductible health insurance plan, which I’ll try to switch to next year. (Thanks Ahrash.)

I saw a few presentations at LugRadio Live USA, in addition to giving one. Miguel de Icaza’s talk (content roughly corresponding to this post) and Ian Murdock’s were both in part about software packaging. Taken together, they make a good case for open source facilitating cross-pollination of ideas and code across operating system platforms.

Aaron Bockover and Gabriel Burt did a presentation/demo on Banshee, showing off some cool track selection/playlist features and talking about more coming. I may have to try switching back to Banshee as my main audio player (from Rhythmbox, with occasional use of Songbird for web-heavy listening or checking on how the program is coming along). Banshee runs on Mono, and both are funded by Novell, which also (though I don’t know how their overall investment compares) has an .

John Buckman gave an entertaining talk on open source and open content (including the slide at right). My talk probably was not entertaining at all, but used the question ‘how far behind [free/open source software] is free/open culture?’ to string together selected observations about the field.

Benjamin Mako Hill did a presentation on Revealing Errors (meant both ways). I found myself wanting to be skeptical of the power of technical errors to expose political/power relationships, but I imagine the concept could use a little hype — there’s definitely something there. The talk made me more sensitive to errors in any case. For example, when I transferred funds from a money market account to checking to pay taxes, an email notice included this (emphasis in original):

Your confirmation number is 0.

Zero? Really? The transaction did go through.

Tuesday I attended the Media Web Meetup V: The Gulf Between NorCal and SoCal, is it so big?, the idea being (in this context pushed by Songbird founder Rob Lord; I presented at the first Media Web Meetup and have attended a few others) that in Northern California entrepreneurs are trying to build new services around music, nearly all stymied by protectionist copyright holders in Southern California. I really did not need to listen to yet another panel asking how the heck is the music recording distribution industry going to use technology to make money, but this was a pretty good one as those go. One of the panelists kept urging technologists to “fix [music] metadata” as if doing so were the key to enabling profit from digital music. I suppressed the urge to sound a skeptical note, as investing more in metadata is one of the least harmful things the industry might do. Not that I don’t think metadata is great or anything.

Thursday evening I was on a ‘Copyright 2.0’ panel put on by the American Society of Media Photographers Northern California. I thought my photo selection for my first slide was pretty clever. No, copyright expansion is not always good for the interests of professional photographers. The other panelists and the audience were actually more open minded (both meanings) than I expected, and certainly realistic. The photographer on the panel even stated the obvious (my paraphrase from memory): new technology has allowed lots of people to explore their photography talents who would otherwise have been unable to, and maybe some professional photographers just aren’t that good and should find other work. My main takeaway from the panel is that it is very difficult for an independent photographer to successfully pursue unauthorized users in court. With the occasional exception of one panelist, the others all strongly advised photographers to avoid going to court except as a last resort, and even then, first doing a rational calculation of what the effort is likely to cost and gain. The best advice was probably to try to turn unauthorized users into clients.

Friday evening I went to San Jose to be on a panel about New Media Artists and the Law. Unlike Thursday’s panel, this one was mostly about how to use and re-use rather than how to prevent use. This (and some nostalgia) made me miss living in Silicon Valley — I lived in Sunnyvale two years (2003-2005) and San Jose (2005-2006) before moving back to San Francisco. Nothing really new came up, but I did enjoy the enthusiasm of the other panelists and the audience (as I did the previous day).

Saturday I went to Ubuntu Restaurant in Napa, which apparently does vegetable cuisine but does not market itself as vegetarian. I think that’s a good idea. The food was pretty good.

I’ve been listening to Hazard Records 59 and 60: Calida Construccio by various and Unhazardous Songs by Xmarx. Lovely Hell (mp3) from the latter is rather poppy.


Monday, February 18th, 2008

There are a number of fun things about a sketch of Uberfact: the ultimate social verifier. The first is that the post could be written without mentioning . The second is that the proposed project is a nice would-be example of political desires sublimated entirely into creating useful and voluntary tools. Third, Mencius Moldbug is a fun writer.

Something like Uberfact should absolutely be built, though I’m far from certain it would hit a sweet spot. It may be too decentralized or too centralized or both. All points from enhancing Wikipedia to the Semantic Web (with Uberfact somewhere between) are complementary and well worth pursuing, particularly if that pursuit displaces malinvestment in politics.

Relatedly, but no time to explain why:

Requirements for community funding of open source

Saturday, November 24th, 2007

Last month another site for aggregating donation pledges to open source software projects launched.

I’m not sure there’s anything significant that sets Cofundos apart from microPledge feature-wise. Possibly a step where bidders (pledgers) vote on which developer bid to accept. However, I’m not certain how a developer is chosen on microPledge — their FAQ says “A quote will be chosen that delivers the finished and paid product to the pledgers most quickly based on their current pledging rate (not necessarily the shortest quote).” microPledge’s scheme for in-progress payments may set it apart.

In terms of marketing and associations, Cofundos comes from the Agile Knowledge Engineering and Semantic Web research group at the University of Leipzig, producers of , about which I’ve written. Many of the early proposed projects are directly related to AKSW research. Their copyright policy is appreciated.

microPledge is produced by three Christian siblings who don’t push their religion.

Cofundos lists 61 proposed projects after one month, microPledge lists about 160 after about three and a half months. I don’t see any great successes on either site, but both are young, and perhaps I’m not looking hard enough.

Cofundos and microPledge are both welcome experiments, though I don’t expect either to become huge. On the other hand, even modest success would set a valuable precedent. In that vein, I’ve been pretty skeptical about the chances of Fundable, but they seem to have attracted a steady stream of users. Although most projects seem to be uninteresting (pledges for bulk purchases, group trips, donations to an individual’s college fund, etc.), some production of public goods does seem to be getting funded, including several film projects in the small thousands of dollars range. Indeed, “My short film” is the default project name in their form for starting a project.

It seems to me that creating requirements and getting in front of interested potential donors are the main challenges for sites focused on funding open source software like Cofundos and microPledge (both say they are only starting with software). Requirements are just hard, and there’s little incentive for anyone to visit an aggregator who hasn’t aggregated anything of interest.

I wonder if integrating financial donations into project bug tracking systems would address both challenges? Of course doing so would have risks: increasing bureaucracy around processing bugs and feature requests, the necessity of implementing new features (and bugs) in the relevant bug tracking software, and altering the incentives of volunteer contributors.

Via Open Knowledge Foundation Blog.

bar : sex :: social networking site : spam

Thursday, November 22nd, 2007

Brad Templeton on Facebook apps that aggressively request access to your private data (relatedly Templeton on the economics of privacy and identity is a must read) and spam your friends:

Apps are not forced to do this. A number of good apps will let people see the data, even put it in feeds, without you having to “install” and thus give up all your privacy to the app. What I wish is that more of us had pushed back against the bad ones. Frankly, even if you don’t care about privacy, this approach results in lots of spam which is trying to get you to install apps. Everybody thinks having an app with lots of users is going to mean bucks down the road, with Facebook valued as highly as it is.

But a lot of it is plain old spam, but we’re tolerating it because it’s on Facebook. (Which itself is no champion. They have an extremely annoying email system which sends you an e-mail saying, “You got a message on facebook, click to read it” rather than just including the text of the message. To counter this, there is an “E-mail me instead” application which tries to make it easier for people to use real E-mail. And I recently saw one friend add the text “Use E-mail not facebook message” in her profile picture.)

The title of this post was my first Facebook status message earlier this year. In other words, social networking sites are all about lowering social boundaries. I am completely comfortable sending messages to people I barely know (if that) on Facebook that I would only consider (and often not actually) sending to close friends and regular correspondents via email or instant messaging.

Ironically social networks could be used to fight spam and otherwise bootstrap reputation systems. I am mildly surprised that although trust is perhaps the most interesting feature of social networks, as far as I know nobody has done anything interesting with them (at least social networking sites) in this respect. An occasional correspondent even suggested recently that reputation is a kind of anti-feature for social networking sites, and reputation features tend to be hidden or turned off.

My other (unoriginal, but older) observation about social networking sites is that while at first blush the sector should be winner-take-all, driven by network effects, we’ve already seen a few leaders surpassed, and I highly doubt Facebook will take all. I have two explanations. First, the sites don’t have much power to lock users in, even though it is hard to export data — users have contact information for remotely valuable contacts outside the site, in address books, buddy lists, and email archives, and can recreate their network on a new site relatively easily. Second, social networking sites don’t yet have a killer application. Although Facebook has allowed many third party apps on its platform, I have yet to see one that I would miss, and very few I return to. I doubt I’d miss Facebook (or any other social networking site) much period if I were banned from it (I know that many students would disagree about Facebook and musicians about MySpace).

Semantic Web Web Web

Wednesday, November 21st, 2007

The W3C and particularly its Semantic Web efforts do great, valuable work. I have one massive complaint, particularly about the latter: they ignore the Web at their peril. Yes, it’s true, as far as I can tell (but mind that I’m one or two steps removed from actually working on the problems), that the W3C and Semantic Web activities do not appreciate the importance of, nor dedicate appropriate resources to, the Web. Not just the theoretical Web of URIs, but the Web that billions of people use and see.

I’m reminded of this by Ian Davis’ post Is the Semantic Web Destined to be a Shadow?:

My belief is that trust must be considered far earlier and that it largely comes from usage and the wisdom of the crowds, not from technology. Trust is a social problem and the best solution is one that involves people making informed judgements on the metadata they encounter. To make an effective evaluation they need to have the ability to view and explore metadata with as few barriers as possible. In practice this means that the web of data needs to be as accessible and visible as the web of documents is today and it needs to interweave transparently. A separate, dry, web of data is unlikely to attract meaningful attention, whereas one that is a full part of the visible and interactive web that the majority of the population enjoys is far more likely to undergo scrutiny and analysis. This means that HTML and RDF need to be much more connected than many people expect. In fact I think that the two should never be separate and it’s not enough that you can publish RDF documents, you need to publish visible, browseable and engaging RDF that is meaningful to people. Tabular views are a weak substitute for a rich, readable description.