Post Computers

Open policy for a secure Internet-N-Life

Saturday, June 28th, 2014

(In)Security in Home Embedded Devices: Jim Gettys says software needs to be maintained for decades, considering where it is being deployed (e.g., embedded in products with multi-decade lifetimes, such as buildings) and the criticality of some of that software, an unpredictable attribute — a product might become unplanned “infrastructure”, for example, if it is widely deployed and other things come to depend on it. Without maintenance, including deployment of updates in the field, software (and thus the systems it is embedded in) becomes increasingly insecure as vulnerabilities are discovered (he cites a honeymoon period enjoyed by new systems).

This need for long-term maintenance and field deployment implies open source software and devices that users can upgrade — maintenance needs to continue beyond the expected life of any product or organization. “Upgrade” can also mean “replace” — perhaps some kinds of products should be more modular and with open designs so that parts that are themselves embedded systems can be swapped out. (Gettys didn’t mention, but replacement can be total. Perhaps “planned obsolescence” and “throwaway culture” have some security benefits. I suspect the response would be that many things continue to be used for a long time after they were planned to be obsolete and most of their production run siblings are discarded.)

But these practices are currently rare. Product developers do not demand source from chip and other hardware vendors, and thus ship products with “binary blob” hardware drivers for the Linux kernel which cannot be maintained, often based on a kernel years out of date by the time the product ships. A Linux kernel near-monoculture for many embedded systems increases the security threat. Many problems do not depend on hardware vendor cooperation, ranging from unintentionally or lazily not providing source needed for the rest of the system, to intentionally shipping proprietary software, to intentionally locking down devices to prevent user updates. Product customers do not demand long-term secure devices from product developers. And there is little effort to fund commons-oriented embedded development (in contrast with Linux kernel and other systems development for servers, which many big companies fund).

Gettys is focused on embedded software in network devices (e.g., routers), as network access is critical infrastructure much else depends on, including the problem at hand: without network access, many other systems cannot feasibly be updated. He’s working on CeroWrt, a cutting-edge version of the OpenWrt firmware, either of which is several years ahead of what typically ships on routers. A meme Gettys wishes to spread, the earliest instance of which I could find is on cerowrt-devel, with a harsh example coming the next week:

Friends don’t let friends run factory firmware.

Cute. This reminds me of something a friend said in a group discussion that touched on security and embedded in body (or perhaps it was mind embedded in) systems, along the lines of “I wouldn’t run (on) an insecure system.” Or malware would give you a bad trip.

But I’m ambivalent. Most people, and thus most friends, don’t know what factory firmware is. Systems need to be much more secure (for the long term, including all that implies) as shipped. Elite friend advice could help drive demand for better systems, but I doubt “just say no” will help much — its track record for altering mass outcomes, e.g., with respect to proprietary software or formats, seems very poor.

In Q&A someone asked about centralized cloud silos. Gettys doesn’t like them, but said without long-term secure alternatives that can be deployed and maintained by everyone there isn’t much hope. I agree.

You may recognize “open source software and devices that users can upgrade” above as roughly the conditions of GPL-3.0. Gettys mentioned this and noted:

  • It isn’t clear that copyright-based conditions are an effective mechanism for enforcing these conditions. (One reason I say copyleft is a prototype for more appropriate regulation.)
  • Of “life, liberty, and pursuit of happiness”, free software has emphasized the latter two, but nobody realized how important free software would be for living one’s life given the extent to which one interacts with and depends on (often embedded) software. In my experience people have realized this for many years, but it should indeed move to the fore.

Near the end Gettys asked what role industry and government should have in moving toward safer systems (and skip the “home” qualifier in the talk title; these considerations are at least as important for institutions and large-scale infrastructure). One answer might be in open policy. Public, publicly-interested, and otherwise coordinated funders and purchasers need to be convinced there is a problem and that it makes sense for them to demand that their resources help shift the market. The Free Software Foundation’s Respects Your Freedom criteria (ignoring the “public relations” item) are a good start on what should be demanded for embedded systems.

Obviously there’s a role for developers too. Gettys asked how to get beyond the near Linux kernel monoculture, mentioning BSD. My ignorant wish is that developers wanting to break the monoculture instead try to build systems using better tools, at least better languages (not that any system will reduce the need for security in depth).

Here’s to a universal, secure, and resilient web and technium. Yes, these features cost. But I’m increasingly convinced that humans underinvest in security (not only computer, and at every level), especially in making sure investments aren’t theater or worse.

WWW next 25: Universal, Secure, Resilient?

Wednesday, March 12th, 2014

Today folks seem to be celebrating the 25th anniversary of a 1989 proposal for what is now the web — implementation released to the public in August, 1991.

Q&A with web inventor Timothy Berners-Lee: 25 years on, the Web still needs work.

The web is pretty great, much better than easily imagined alternatives. Three broad categories it could improve in:

  • Universality. All humans should be able to access the web, and this should be taken to include being able to publish, collaborate, do business, and run software on the web, in any manner, in any language or other interface. Presently, billions aren’t on the net at all, activity outside of a handful of large services is very expensive (in money, expertise, or marketing), and machine translation and accessibility are very limited.
  • Security. All of the above, securely, without having to understand anything technical about security, and with lots of technical and cultural guards against technical and non-technical attacks of all kinds.
  • Resilience. All of the above, with minimal interruption and maximal recovery from disaster, from individual to planetary scale.

Three pet outcomes I wish for:

  • Collective wisdom. The web helps make better decisions, at all scales.
  • Commons dominance. Most top sites are free-as-in-freedom. Presently, only Wikipedia (#5) is.
  • Freedom, equality, etc.

Two quotes from the Berners-Lee Q&A that are on the right track:

Getting a nice user interface to a secure system is the art of the century.

Copyright law is terrible.

Greatest month in history?

Tuesday, December 17th, 2013

Yesterday, 11 years ago; today, 22 years and 4 months. Recently I noticed an observation in slides by Glyn Moody on Open Access (related editorial):

25 August 1991 – Finnish student, Linus Torvalds, announced the start of Linux
23 August 1991 – World Wide Web released publicly
14 August 1991 – Launch of arXiv

Moody titled the slide with the above items “greatest week in history?” — arXiv is listed there as 19 August, which I think must be a transcription error. Still, perhaps it was the greatest month by some assessment which grants something like knowledge commons supreme importance; perhaps future conventional wisdom. Those three are a nice mix of software, protocols, literature, data, and infrastructure.

The world’s tallest broadcast tower collapsed on 8 August 1991 to make way for somewhat less centralized communications.

Linux and the Web make Wikipedia’s short list of August 1991 events, which is dominated by the beginning of the final phase of the dissolution of the Soviet Union. (I have an old post which is a tiny bit relevant to tying this all together, however unwarranted that may be.)

arXiv isn’t nearly as well known to the general public as Linux, which isn’t nearly as well known as the Web. In some ways arXiv is still ahead of its time. The future takes a long time to be distributed — Moody’s cover slide is titled “half a revolution”. Below I’ve excerpted a few particularly enjoyable paragraphs and footnotes from It was twenty years ago today… by arXiv founder Paul Ginsparg (who, Moody notes, knew of GNU via a brother). I’ve bolded a couple phrases and added one link for additional entertainment value. The whole 9-page paper (PDF) is worth a quick read (I can’t help but notice and enjoy the complete absence of two words: “copyright” and “license”).

The exchange of completed manuscripts to personal contacts directly by email became more widespread, and ultimately led to distribution via larger email lists.13 The latter had the potential to correct a significant problem of unequal access in the existing paper-preprint distribution system. For purely practical reasons, authors at the time used to mail photocopies of their newly minted articles to only a small number of people. Those lower in the food chain relied on the beneficence of those on the A-list, and aspiring researchers at non-elite institutions were frequently out of the privileged loop entirely. This was a problematic situation, because, in principle, researchers prefer that their progress depends on working harder or on having some key insight, rather than on privileged access to essential materials.

By the spring of 1991, I had moved to the Los Alamos National Laboratory, and for the first time had my own computer on my desk, a 25 MHz NeXTstation with a 105 Mb hard drive and 16 Mb of RAM. I was thus fully cognizant of the available disk and CPU resources, both substantially larger than on a shared mainframe, where users were typically allocated as little as the equivalent of 0.5 Mb for personal use. At the Aspen Center for Physics, in Colorado, in late June 1991, a stray comment from a physicist, concerned about emailed articles overrunning his disk allocation while traveling, suggested to me the creation of a centralized automated repository and alerting system, which would send full texts only on demand. That solution would also democratize the exchange of information, leveling the aforementioned research playing field, both internally within institutions and globally for all with network access.

Thus was born xxx.lanl.gov,18 initially an automated email server (and within a few months also an FTP server), powered by a set of csh scripts.19 It was originally intended for about 100 submissions per year from a small subfield of high-energy particle physics, but rapidly grew in users and scope, receiving 400 submissions in its first half year. The submissions were initially planned to be deleted after three months, by which time the pre-existing paper distribution system would catch up, but by popular demand nothing was ever deleted. (Renamed in late 1998 to arXiv.org, it has accumulated roughly 700,000 total submissions [mid Aug 2011], currently receives 75,000 new submissions per year, and serves roughly one million full text downloads to about 400,000 distinct users per week.) The system quickly attracted the attention of existing physics publishers, and in rapid succession I received congenial visits from the editorial directors of both the American Physical Society (APS) and Institute of Physics Publishing (IOPP) to my little 10’x10’ office. It also had an immediate impact on physicists in less developed countries, who reported feeling finally in the loop, both for timely receipt of research ideas and for equitable reading of their own contributions. (Twenty years later, I still receive messages reporting that the system provides to them more assistance than any international organization.)

In the fall of 1992, a colleague at CERN emailed me: ‘Q: do you know the worldwide-web program?’ I did not, but quickly installed WorldWideWeb.app, serendipitously written by Tim Berners-Lee for the same NeXT computer that I was using, and with whom I began to exchange emails. Later that fall, I used it to help beta-test the first US Web server, set up by the library at the Stanford Linear Accelerator Center for use by the high-energy physics community.

Not everyone appreciated just how rapidly things were progressing. In early 1994, I happened to serve on a committee advising the APS about putting Physical Review Letters online. I suggested that a Web interface along the lines of the xxx.lanl.gov prototype might be a good way for the APS to disseminate its documents. A response came back from another committee member: “Installing and learning to use a WorldWideWeb browser is a complicated and difficult task — we can’t possibly expect this of the average physicist.”

13The most significant of these was maintained by Joanne Cohn, then a postdoctoral associate at the IAS Princeton, who manually collected and redistributed preprints (originally in the subject area of matrix models of two dimensional surfaces) to what became a list of over a hundred interested researchers, largely younger postdocs and grad students. This manual methodology provided an important proof of concept for the broader automated and archival system that succeeded it, and her distribution list was among those used to seed the initial hep-th@xxx.lanl.gov userbase.

18The name xxx was derived from the heuristic I’d used in marking text in TeX files for later correction (i.e., awaiting a final search for all appearances of the string ‘xxx’, which wouldn’t otherwise appear, and for which I later learned the string ‘tk’ is employed by journalists, for similar reasons).

19The csh scripts were translated to Perl starting in 1994, when NSF funding permitted actual employees.

(the rest)

Pro-DRM stories

Tuesday, October 22nd, 2013

Microsoft Thinks DRM Can Solve the Privacy Problem:

Under the model imagined by Mundie, applications and services that wanted to make use of sensitive data, such as a person’s genome sequence or current location, would have to register with authorities. A central authority would distribute encryption keys to applications, allowing them to access protected data in the ways approved by the data’s owners.

The use of cryptographic wrappers would ensure that an application or service couldn’t use the data in any other way. But the system would need to be underpinned by new regulations, said Mundie: “You want to say that there are substantial legal penalties for anyone that defies the rules in the metadata. I would make it a felony to subvert those mechanisms.”

If I understand correctly, this idea really is calling for DRM. The only difference is the use case: instead of intending to restrict an individual user’s control over their computing device in order to prevent them from doing certain things with some “content” on/accessed via their device, Mundie wants applications (i.e., organizations) to be prevented from doing certain things with some “data” on/accessed via their computers.

Sounds great. Conceivably could even be well intentioned. But, just as “consumer” DRM abets monopoly and does not prevent copying, this data DRM would…do exactly the same thing.
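For concreteness, the mechanism Mundie describes can be sketched as a toy model: applications register with a central key authority, which releases a data item’s key only for uses the data’s owner approved. All the names here are hypothetical, and the XOR “sealing” is merely a placeholder for real cryptography.

```python
# Toy model of the quoted scheme: a central authority gates key release
# by application registration and owner-approved uses.
# Names are illustrative; XOR stands in for real encryption.

class Wrapper:
    """Data sealed together with metadata naming owner-approved uses."""
    def __init__(self, data, allowed_uses, key):
        self.allowed_uses = frozenset(allowed_uses)
        self.key = key
        self._sealed = bytes(b ^ key for b in data)  # toy "encryption"

    def unseal(self, key):
        return bytes(b ^ key for b in self._sealed)

class KeyAuthority:
    """Central authority: registers applications and hands out a wrapper's
    key only when the requested use was approved by the data owner."""
    def __init__(self):
        self._registered = set()

    def register(self, app_id):
        self._registered.add(app_id)

    def request_key(self, app_id, wrapper, use):
        if app_id not in self._registered:
            raise PermissionError("unregistered application")
        if use not in wrapper.allowed_uses:
            raise PermissionError("use not approved by data owner")
        return wrapper.key

authority = KeyAuthority()
authority.register("ancestry-app")
genome = Wrapper(b"ACGT...", allowed_uses={"ancestry"}, key=0x5A)

key = authority.request_key("ancestry-app", genome, use="ancestry")
print(genome.unseal(key))  # b'ACGT...'
```

The toy makes the critique above easy to see: nothing in the wrapper itself stops a key recipient from misusing the plaintext once unsealed; only the threatened legal penalties do.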

Meanwhile, law enforcement, politicians, and media see devices locked down by a vendor, rather than controlled by users, as the solution to device theft (rendering the device relatively unsalable, and data inaccessible).

I want, but don’t recall encountering, any anti-info-freedom (not that it would self-describe that way) speculative/science fiction/fantasy, dystopian, utopian, or otherwise. The above gives some hint about how to go about it: imagine a world in which DRM+criminal law works great, and tell stories about how various types of bad actors are thwarted by the combination. Or one where society falls apart because it hasn’t been implemented.

Another pro-IP story idea: the world faces some intractable problem that requires massive intellectual input, and cannot coordinate to solve it. Maybe a disease. Maybe in the form of an alien invasion that can only be defeated by creating an alien disease. Or everyone is enslaved because all is known, and everyone knows that no privacy means no freedom. But someone has the bright idea to re-introduce or strengthen IP mechanisms, which save the day.

One story I’d like to think wouldn’t work in even cardboard form is that nobody produces and promotes big budget cultural artifacts due to lack of IP or its enforcement, and as a result everyone is sad. The result is highly unlikely as people love whatever cultural works they’re surrounded by. But, maybe the idea could work as a discontinuity: suddenly there are no more premium video productions. People have grown up with such being the commanding heights of culture, and without this, they are sad. They have nothing to talk to friends about, and society breaks down. If this story were a film, people could appear smart by informing their friends that maybe the director really intended to question our dependence on premium video such as the film in question.

Flow ∨ incentive 2013 anthology winner

Thursday, August 29th, 2013
Anthology Future of Copyright 2.0 cover

The Future of Copyright 2.0 contest has a winner, published with 8 other top entries. Read the anthology PDF/EPUB/MOBI ebook, listen to an audiobook version created by me reading each entry aloud on sight (for the purposes of judging), or see individual entries linked below in my review.

A Penny for Your Thoughts by Talllama is the winner, unanimously selected by the jury. It’s a fun transposition of exactly today’s copyright and debates (including wild mischaracterization) into a future with mind uploading. Quotes:

“My mom and dad would get upset at me.” He sent her a copy of his anxiety.
“Well my dad says copyright is stupid,” Helen said, sending back an emotion that was pitying yet vaguely contemptuous. “He says anyone who won’t pirate is a dummy.”
Timothy scowled at her. “My dad says that piracy is stealing.”
“My dad and I have trillions of books and thoughts, so we know better than you,” Helen said.

“You see, Timothy,” his father continued, “If people didn’t have an incentive to think or dream, they wouldn’t. And then no one would have any new thoughts. Everyone would stop thinking because there wouldn’t be any money in it.”
“But you said people had thoughts in 1920 even though there was no copyright.”
“Yes, you’re right. What I mean is that there were no professional thinkers in those days.”
“It would be bad if people stopped thinking,” Timothy said.

Lucy’s Irrevocable, Colossal, Terrible Mistake by Chris Sakkas tells a story in which releasing stuff under a free license has amazing results. Unfortunately free licenses aren’t magic, and it isn’t clear to me what the story says about the future of copyright. Quote:

An alternative bookshop in Sussex, on the other side of the world to Lucy, created a video ad with her favourite song as its backing track. The ad ended with a thanks to Lucy for releasing her music under a free, libre and open licence and a hyperlink. Hundreds more people visited her site, the passive consumers of big business! They used the donate button on her site to spray her with filthy lucre.

Perfect Memory by Jacinto Dávila describes a world of 2089 mediated by perfect memory of all non-intimate events and voting for assignment of credit; what role does copyright play in such a future? Quote:

[Socio-mathematics] was also the source of an unprecedented and fundamental agreement. All the stakeholders of the world came, after many unfortunate and even bloody events, to negotiate a new framework for producing and sharing common knowledge. And the basis they found was that to preserve freedom, but also the health of the whole planet and its species, that knowledge had to be shared, easily and readily, among all the stakeholders.

That led to a rebuttal of so-called intellectual property and copyright laws and their replacement with a body of global law acknowledging our common heritage, codependent future and the fundamental right of knowledge everyone has.

Copyrights in Chopin’s future by Krzysztof Blachnicki (English translation by Wojciech Pędzich) has Chopin resurrected in 2015 through unspecified but expensive means, then exploited by and escaping from the current recording industry. A fun idea, but ultimately a stereotypical anti-recording-industry rant. Quote:

I hope that more people will have their own opinions instead of listening to the hissing of those snakes, sucking money out of artists to pay off their new automobiles. Wake up, folks, a good musician will earn his daily bread even if he decides to let his music go for free, for all to share. A poor man will be able to listen to real music, while a wealthy man will make the artist’s effort worthwhile. Isn’t it all about just that? Each may benefit, except the music companies which become redundant, so they turn to lies in order to keep themselves afloat.

What is an author? by refined quotes is a story in which all legal ideas are closely regulated and bland, “old art” outlawed so people consume new, legal stuff, the good stuff and real artists are underground, and with an additional twist that ideas take animal form. Quote:

You see? An artist is a little like an art producer. But he deals with the genuine ideas, as you see. He doesn’t buy them, like the law says he should. He just comes to places like this and spends his time with them. It’s a slow process. No one knows why precisely, but this crazy little ideas are in love with him, well, with all the artists.

The Ambiguous Future of Copyright by HOT TOCO is a snarky take on where copyright and computing are headed, presumably meaning to project ambiguous reception of Ubuntu/Canonical ten years into the future. Quote:

Friend2: “If I can extract info from this rant, I think Commonible, Ltd, is saying they’ve perfected trusted computing, fully protecting you from hacking and making ALL media available, fully compensating all value chains.”

Friend3 (quiet one): “I read about sth like this, Project Xanaxu. Real old stuff. The inventor thought the Web failed to transclude micropayments.”

500 Years of Copyright Law by Holovision embeds current copyright factoids in descriptions of future eras. I can’t tell what its “Copynorm Exchange Decentralization Entente (CEDE)” regime consists of, but maybe that is also a current copyright factoid: someone reading a pamphlet describing copyright and mentioning a few acronyms (e.g., TRIPS) would not gain much sense of the regime. Quote:

Attempts to put digital rights management into 3D printers were sooner or later unsuccessful against hardware hackers. There were open sourced 3D printers but many perceived them to be inferior to the commercially patented ones. When the commercial 3D printers were used to make other printers most companies left the marketplace. This left many still infringing the 3D printers with the excuse that the printers became “abandonware”.

Copyright Protest Song by Tom Konecki doesn’t seem to say anything about the future, but does capture various bits of complaint about the current regime. Quote:

Everybody wants only money and success
And none remembers the idea of open-access
To acquire knowledge and gather information
That is now the object of companies’ manipulation.

Copyright – Real Vision or fantastic vision? by Arkadiusz Janusz (English translation by Kuba Kwiatkowski) contains a proposal of the type “metadata and tracking will get everyone paid” explained in a parent-child lecture. Quote:

The file doesn’t contain a price, only points. In other words, the price is quoted in points. A point has a different monetary value for every country. Here, the minimum wage is about 1000 dollars. We divide the minimum wage by one thousand and receive the amount value of 1 point. If you download a movie, the server checks in which country you are, and converts the points into the appropriate price.

That’s why in our times, pirates are on the verge of extinction. Most frequently, they’re maniacs or followers of some strange ideologies.
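The pricing scheme in the quoted passage is simple arithmetic, and can be sketched as follows. The wage figures are hypothetical illustrations, apart from the story’s own 1000-dollar example:

```python
# Sketch of the story's points-based pricing: 1 point = local minimum wage / 1000.
# Wage figures are illustrative, not real data.

MIN_WAGE = {
    "US": 1000,  # the story's "minimum wage is about 1000 dollars" example
    "PL": 500,
}

def point_value(country: str) -> float:
    """Monetary value of one point in the given country."""
    return MIN_WAGE[country] / 1000

def price(points: int, country: str) -> float:
    """Convert a file's point price to a local price, as the story's
    server would after determining the downloader's country."""
    return points * point_value(country)

print(price(300, "US"))  # 300.0
print(price(300, "PL"))  # 150.0
```

The same file thus costs the same fraction of local income everywhere, which is the story’s whole point.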

You can also read my review of last year’s future of copyright contest anthology, which links to each selection. This year’s selections are notably less dystopian and take less of a position on what the future of copyright ought be.

I enjoyed judging this year’s contest, and hope it and any future iterations achieve much greater visibility. Current copyright debates seem to me to have an incredibly short-term focus, which can’t be for the good when the changes which have supposedly produced the current debate are only speeding up. Additionally, my one complaint about the contest other than its lack of fame is that “copyright” is a deeply suboptimal frame for thinking about its, and our, future. I will try to address this point directly soon, but some of it can be read from my contest entry of last year (other forms of info regulation with different policy goals being much more pertinent than quibbling over the appropriateness of the word “copyright”).

You may see an embedded player for the audiobook version read by me below. Some of the durations shown may be incorrect; the winner, A Penny for Your Thoughts, is actually slightly less than 15 minutes long. Sadly the player obscures the browser context menu and doesn’t provide a way to increase playback rate, so first, a default HTML5 player loaded with only the winner:

Question Software Freedom Day‽

Saturday, September 15th, 2012

If software freedom is important, it must be attacked, lest it die from the unremitting bludgeoning of obscurity and triviality. I don’t particularly mean trivial (though necessary) attacks on overblown cleverness, offensive advocates, terminological nitpicking, obscurantism, fragmentation, poor marketing, lack of success, lack of diversity, and more. Those are all welcome, but mostly (excepting the first, my own gratuitously obscure, nitpicking, and probably offensive partial rant against subversive heroic one-wayism) need corrective action such as Software Freedom Day and, particularly regarding the last, OpenHatch.

I mostly mean attacking the broad ethical, moral, political, and utilitarian assumptions, claims, and predictions of software freedom. This may mean starting with delineating such claims, which are very closely coupled, righteous expressions notwithstanding. So far, software freedom has been wholly ignored by ethicists, moral philosophers, political theorists and activists, economists and other social scientists. Software freedom people who happen to also be one of the aforementioned constitute a rounding error.

But you don’t have to be an academic, activist, software developer, or even a computer user to have some understanding of and begin to critique software freedom, any more than one needs to be an academic, activist, businessperson, or voter to have some understanding of and begin to critique the theory and practice of business, democracy, and other such institutional and other social arrangements.

Computation does, and will ever more so, underlie and sometimes dominate our arrangements. Should freedom be a part of such arrangements? Does “software freedom” as roughly promoted by the rounding error above bear any relation to the freedom (and other desirables; perhaps start with equality and security) you want, or wish to express alignment with?

If you want to read, a good place to start is the seminal Philosophy of the GNU Project essays, many ripe for beginning criticism (as are many classic texts; consider the handful of well known works of the handful of philosophers of popular repute; the failure of humanity to move on is deeply troubling).

If you want to listen and maybe watch, presentations this year from Jacob Appelbaum, Cory Doctorow (about, mp3), Eben Moglen (1, 2), and Karen Sandler (short, long).

The law of headlines ending in a question mark is self-refuting here in multiple ways. The interrobang ending signifies an excited fallibility, if the headline can possibly be interpreted charitably given the insufferable preaching that follows, this sentence included.

Try some free software that is new to you today. You ought to have LibreOffice installed, even if you rarely use it, in order to import and export formats that whatever else you may be using probably can’t. I finally got around to starting a MediaGoblin instance (not much to see yet).

If you’re into software freedom insiderism, listen to MediaGoblin lead developer Chris Webber on the most recent Free as in Freedom podcast. I did not roll my eyes, except at the tangential mention of my ranting on topics like the above in a previous episode.

Ride- and car-sharing and computers

Thursday, August 9th, 2012


Underemployed vehicles and land at Fruitvale BART parking lot, the 5th of 11 stations between me and Fremont.

Tuesday I attended Silicon Valley Automotive Open Source presentations on Car- and Ride-sharing. I heard of the group via its organizer, Alison Chaiken, who I noted in February gave the most important talk at LibrePlanet: Why Cars need Free Software.

The talks were non-technical, unlike, I gather, most previous SVAOS talks (this was the first event in Fremont, which is much more convenient for me than Santa Clara, where most previous talks have been held), but very interesting.

I did not realize how many car- and ride-sharing startups and other initiatives exist. Dozens (in Germany alone?) or hundreds of startups, and all manufacturers, rental companies, and other entities with fleets are at least thinking about or planning something. That seems good on its own, and will provide good experience for the further, more intensive/efficient use of vehicles to be enabled by robocars.

Carpooling and other forms of ride-sharing have gone up and down with fuel rationing and prices. Carsharing seems to go back to at least 1948, but with slow growth, only recently becoming a somewhat mainstream product and practice. Ride- and car-sharing ought be complements. Sharing a taxi, shared vans, and even mass transit could in some ways be seen as primitive examples of this complementarity.

Rationing is not in effect now, and real prices aren’t that high, so I imagine current activity must mostly be a result of computers and communications making coordination more efficient. This is highlighted by the reliance and hopes of startups and other initiatives on web and mobile applications and on in-car computers and communications for access, control, coordination, reputation, and tracking.

But none of this seems to be open source at the end-user service/product level, though certainly much or even most of it is built on open source components (the web as usual, auto internals moving that way). These seem like important arenas in which to argue against security-through-obscurity in vehicles and their communications systems, and to demand auditability and public benefit for public systems in various senses (one of the startups suggested marketing their platform to municipal governments; if reputation systems are to eventually mediate day-to-day activities, they need scrutiny).

Libre Planet 2012

Tuesday, April 10th, 2012


A couple weeks ago I attended the Free Software Foundation’s annual conference, Libre Planet, held at UMass Boston a bit south of downtown. I enjoyed the event considerably, but can only give brief impressions of some of the sessions I saw.

John Sullivan, Matt Lee, Josh Gay started with a welcome and talk about some recent FSF campaigns. I think Sullivan said they exceeded their 2011 membership goal, which is great. Join. (But if I keep to my refutation schedule, I’m due to tell you why you shouldn’t join in less than 5 years.)

Rubén Rodríguez spoke about Trisquel, a distribution that removes non-free software and recommendations from Ubuntu (lagging those releases by about 5 months) and makes other changes its developers consider user-friendly, such as running GNOME 3 in fallback mode and adjusting some privacy settings in its Web browser (an IceWeasel-like de-branded Firefox). I also saw a lightning talk from someone associated with ThinkPenguin, which sells computers pre-loaded with Trisquel.

Asheesh Laroia spoke about running events that attract and retain newcomers. You can read about OpenHatch (the organization he runs) events or see a more specific presentation he recently gave at PyCon with Jessica McKellar. The main point of humor in the talk concerned not telling potential developers to download a custom built VM to work with your software: it will take a long time, and often not work.

Joel Izlar’s talk, titled Digital Justice: How Technology and Free Software Can Build Communities and Help Close the Digital Divide, covered his work with Free IT Athens.

Alison Chaiken gave the most important talk of the conference, Why Cars need Free Software. I was impressed by how many manufacturers are using at least some free software in vehicles and distressed by the state of automotive security and proprietary vendors pitching security through obscurity. Like Appelbaum and Sandler, get Chaiken in front of as many people as possible.

Brett Smith gave an update on the FSF GPL compliance Lab, including mentioning MPL 2.0 and potential CC-BY-SA 4.0 compatibility with GPLv3 (both of which I’ve blogged about before), but the most interesting part of the talk concerned his participation in Trans-Pacific Partnership Stakeholder Forums; it sounded like software freedom concerns got a more welcome reception than expected.

ginger coons spoke about Libre Graphics Magazine, a graphic arts magazine produced entirely with free software. I subscribed.

Deb Nicholson gave a great, funny presentation on Community Organizing for Free Software Activists. If the topic weren’t free software, Nicholson could make a lot of money as a motivational speaker.

Evan Prodromou spoke on the Decentralized Social Web, using slides the same or very similar to his SXSW deck, which is well worth flipping through.

Eben Moglen’s talk was titled Free Software’s Future Amidst the Commercial Open Source Wars: How to Turn the Patent Disaster and Compliance Issues to Our Advantage, but I think I missed the how-to part. Moglen also talked for a while about IRS scrutiny of free software organization 501(c)(3) applications, vaguely hinting at a potential need to “re-evaluate how our infrastructure is organized” (paraphrase). I’ll have more to say about that, but in another post.

Chris Webber and I spoke about Creative Commons 4.0 licenses and free software/free culture cooperation. You can view our picture-only slides (odp; pdf; slideshare) but a recent interview with me and post about recent developments in MediaGoblin (Webber’s project) would be more informative and cover similar ground. We also pre-announced an exciting project that Webber will spam the world about tomorrow and sort of reciprocated for an award FSF granted Creative Commons three years ago — the GNU project won the Free Software Project for the Advancement of Free Culture Social Benefit Award 0, including the amount of 100BTC, which John Sullivan said would be used for the aforementioned exciting project.

Yukihiro ‘matz’ Matsumoto spoke on how Emacs changed his life, including introducing him to programming, free software, and influencing the design of Ruby.

Matthew Garrett spoke on Preserving user freedoms in the 21st century. Perhaps the most memorable observation he made concerned how much user modification of software occurs without adequate freedom (making the modifications painful), citing CyanogenMod.

I mostly missed the final presentations in order to catch up with people I wouldn’t have been able to otherwise, but note that Matsumoto won the annual Advancement of Free Software award, and GNU Health the Free Software Award for Projects of Social Benefit. Happy hacking!

Wincing at surveillance, the security state, medical devices, and free software

Friday, January 27th, 2012

Last week I saw a play version of Little Brother. I winced throughout, perhaps due to over-familiarity with the topics and locale, and there are just so many ways a story with its characteristics (heavy-handed politics that I agree with, written for adolescents, set in the near future) can embarrass me. Had there been any room for the nuance of apathy, a few bars of Saturday Night Holocaust would’ve been great to work into the play. But the acting and other stuff making up the play seemed well done, I’m glad that people are trying to make art about issues that I care about, and I’d recommend seeing the play (extended to Feb 25 in San Francisco) to anyone less sensitive.

If you don’t feel like seeing a play in San Francisco, I recommend Jacob Appelbaum’s talk on surveillance, the security state, and free software at linux.conf.au 2012. It contains everything important Little Brother does and more, and isn’t fiction.

I also just watched Karen Sandler’s LCA talk, which I can’t recommend highly enough. It is more expansive than a short talk she gave last year at OSCON based on her paper Killed by Code: Software Transparency in Implantable Medical Devices.

I frequently complain that free/libre/open software and nearby aren’t taken seriously as being important to a free and otherwise good society and that advocates have completely failed to demonstrate this importance. Well, much more is needed, but the above talks give me hope, and getting Appelbaum and Sandler in front of as many people as possible would be great progress.

Years of open hardware licenses

Tuesday, January 10th, 2012

Last in a list of the top 10 free/open source software legal developments in 2011 (emphasis added):

Open Hardware License. The open hardware movement received a boost when CERN published an Open Hardware License (“CERN OHL”). The CERN OHL is drafted as a documentation license which is careful to distinguish between documentation and software (which is not licensed under the CERN OHL) http://www.ohwr.org/documents/88. The license is “copyleft” and, thus, similar to GPLv2 because it requires that all modifications be made available under the terms of the CERN OHL. However, the license to patents, particularly important for hardware products, is ambiguous. This license is likely to be the first of a number of open hardware licenses, but, hopefully, the open hardware movement will keep the number low and avoid “license proliferation” which has been such a problem for open source software.

But the CERN OHL isn’t the first “open hardware license”. Or perhaps it is the nth first. Several free software inspired licenses intended specifically for design and documentation have been created over the last decade or so. I recall encountering one dating back to the mid-1990s, but can’t find a reference now. Discussion of open hardware licenses was hot at the turn of the millennium, though most open hardware projects from that time didn’t get far, and I can’t find a license that made it to “1.0”.

People have been wanting to do for hardware what the GNU General Public License has done for software and trying to define open hardware since that timeframe. They keep on wanting (2006) and trying (2007, 2011 comments).

Probably the first arguably “high quality” license drafted specifically for open hardware is the TAPR Open Hardware License (2007). The CERN OHL might be the second such. There has never been consensus on the best license to use for open hardware. Perhaps this is why CERN saw fit to create yet another (an incompatible copyleft at that, incompatible with the TAPR OHL, GPL, and BY-SA), but there still isn’t consensus in 2012.

Licenses primarily used for software (usually [L]GPL, occasionally BSD, MIT, or Apache) have also been used for open hardware since at least the late 1990s — and much more so than any license created specifically for open hardware. CC-BY-SA has been used by Arduino since at least 2008 and since 2009.

In 2009 the primary drafter of the TAPR OHL published a paper with a rationale for the license. By my reading of the paper, the case for a license specific to hardware seems pretty thin — hardware design and documentation files, and distribution of printed circuit boards seem a lot like program source and executables, and mostly subject to copyright. It also isn’t clear to me why the things TAPR OHL handles differently than most open source software licenses (disclaims strictly being a copyright license, instead wanting to serve as a clickwrap contract; attempts to describe requirements functionally, instead of legally, to avoid describing explicitly the legal regime underlying requirements; limited patent grant applies to “possessors” not just contributors) might not be interesting for software licenses, if they are interesting at all, nor why features generally rejected for open source software licenses shouldn’t also be rejected for open hardware (email notification to upstream licensors; a noncommercial-only option — thankfully deprecated late last year).

Richard Stallman’s 1999 note about free hardware seems more clear and compelling than the TAPR paper, but I wish I could read it again without knowing the author. Stallman wrote:

What this means is that anyone can legally draw the same circuit topology in a different-looking way, or write a different HDL definition which produces the same circuit. Thus, the strength of copyleft when applied to circuits is limited. However, copylefting HDL definitions and printed circuit layouts may do some good nonetheless.

In a thread from 2007 about yet another proposed open hardware license, three people who generally really know what they’re talking about each wondered why a hardware-specific license is needed: Brian Behlendorf, Chris DiBona, and Simon Phipps. The proposer withdrew and decided to use the MIT license (a popular non-copyleft license for software) for their project.

My bias, as with any project, would be to use a GPL-compatible license. But my bias may be inordinately strong, and I’m not starting a hardware project.

One could plausibly argue that there are still zero high-quality open-hardware-specific licenses: the TAPR OHL’s upstream notification requirement is arguably non-open, and the CERN OHL contains a similar requirement. Will history repeat?

Addendum: I just noticed the existence of an open hardware legal mailing list, probably a good venue to follow if you’re truly interested in these issues. The organizer is Bruce Perens, who is involved with TAPR and is convinced non-copyright mechanisms are absolutely necessary for open hardware. His attempt to bring rigor to the field and his decades of experience with free and open source software are to be much appreciated in any case.