The web is pretty great, much better than easily imagined alternatives. Three broad categories in which it could improve:
Universality. All humans should be able to access the web, and this should be taken to include being able to publish, collaborate, do business, and run software on the web, in any manner, in any language or other interface. Presently, billions aren’t on the net at all, activity outside of a handful of large services is very expensive (in money, expertise, or marketing), and machine translation and accessibility are very limited.
Security. All of the above, securely, without having to understand anything technical about security, and with lots of technical and cultural guards against technical and non-technical attacks of all kinds.
Resilience. All of the above, with minimal interruption and maximal recovery from disaster, from individual to planetary scale.
Three pet outcomes I wish for:
Collective wisdom. The web helps make better decisions, at all scales.
Commons dominance. Most top sites are free-as-in-freedom. Presently, only Wikipedia (#5) is.
25 August 1991 – Finnish student, Linus Torvalds, announced the start of Linux
23 August 1991 – World Wide Web released publicly
14 August 1991 – Launch of arXiv
Moody titled the slide with the above items “greatest week in history?” — arXiv is listed there as 19 August, which I think must be a transcription error. Still, it was perhaps the greatest month under some assessment which grants something like the knowledge commons supreme importance; perhaps future conventional wisdom. Those three are a nice mix of software, protocols, literature, data, and infrastructure.
The world’s tallest broadcast tower collapsed 8 August 1991 to make way for somewhat less centralized communications.
arXiv isn’t nearly as well known to the general public as Linux, which isn’t nearly as well known as the Web. In some ways arXiv is still ahead of its time. The future takes a long time to be distributed — Moody’s cover slide is titled “half a revolution”. Below I’ve excerpted a few particularly enjoyable paragraphs and footnotes from It was twenty years ago today… by arXiv founder Paul Ginsparg (who, Moody notes, knew of GNU via a brother). I’ve bolded a couple of phrases and added one link for additional entertainment value. The whole 9-page paper (PDF) is worth a quick read (I can’t help but notice and enjoy the complete absence of two words: “copyright” and “license”).
The exchange of completed manuscripts to personal contacts directly by email became more widespread, and ultimately led to distribution via larger email lists.13 The latter had the potential to correct a significant problem of unequal access in the existing paper-preprint distribution system. For purely practical reasons, authors at the time used to mail photocopies of their newly minted articles to only a small number of people. Those lower in the food chain relied on the beneficence of those on the A-list, and aspiring researchers at non-elite institutions were frequently out of the privileged loop entirely. This was a problematic situation, because, in principle, researchers prefer that their progress depends on working harder or on having some key insight, rather than on privileged access to essential materials.
By the spring of 1991, I had moved to the Los Alamos National Laboratory, and for the first time had my own computer on my desk, a 25 MHz NeXTstation with a 105 Mb hard drive and 16 Mb of RAM. I was thus fully cognizant of the available disk and CPU resources, both substantially larger than on a shared mainframe, where users were typically allocated as little as the equivalent of 0.5 Mb for personal use. At the Aspen Center for Physics, in Colorado, in late June 1991, a stray comment from a physicist, concerned about emailed articles overrunning his disk allocation while traveling, suggested to me the creation of a centralized automated repository and alerting system, which would send full texts only on demand. That solution would also democratize the exchange of information, leveling the aforementioned research playing field, both internally within institutions and globally for all with network access.
Thus was born xxx.lanl.gov,18 initially an automated email server (and within a few months also an FTP server), powered by a set of csh scripts.19 It was originally intended for about 100 submissions per year from a small subfield of high-energy particle physics, but rapidly grew in users and scope, receiving 400 submissions in its first half year. The submissions were initially planned to be deleted after three months, by which time the pre-existing paper distribution system would catch up, but by popular demand nothing was ever deleted. (Renamed in late 1998 to arXiv.org, it has accumulated roughly 700,000 total submissions [mid Aug 2011], currently receives 75,000 new submissions per year, and serves roughly one million full text downloads to about 400,000 distinct users per week.) The system quickly attracted the attention of existing physics publishers, and in rapid succession I received congenial visits from the editorial directors of both the American Physical Society (APS) and Institute of Physics Publishing (IOPP) to my little 10’x10’ office. It also had an immediate impact on physicists in less developed countries, who reported feeling finally in the loop, both for timely receipt of research ideas and for equitable reading of their own contributions. (Twenty years later, I still receive messages reporting that the system provides to them more assistance than any international organization.)
In the fall of 1992, a colleague at CERN emailed me: ‘Q: do you know the worldwide-web program?’ I did not, but quickly installed WorldWideWeb.app, serendipitously written by Tim Berners-Lee for the same NeXT computer that I was using, and with whom I began to exchange emails. Later that fall, I used it to help beta-test the first US Web server, set up by the library at the Stanford Linear Accelerator Center for use by the high-energy physics community.
Not everyone appreciated just how rapidly things were progressing. In early 1994, I happened to serve on a committee advising the APS about putting Physical Review Letters online. I suggested that a Web interface along the lines of the xxx.lanl.gov prototype might be a good way for the APS to disseminate its documents. A response came back from another committee member: “Installing and learning to use a WorldWideWeb browser is a complicated and difficult task — we can’t possibly expect this of the average physicist.”
13The most significant of these was maintained by Joanne Cohn, then a postdoctoral associate at the IAS Princeton, who manually collected and redistributed preprints (originally in the subject area of matrix models of two dimensional surfaces) to what became a list of over a hundred interested researchers, largely younger postdocs and grad students. This manual methodology provided an important proof of concept for the broader automated and archival system that succeeded it, and her distribution list was among those used to seed the initial hep-th@xxx.lanl.gov userbase.
18The name xxx was derived from the heuristic I’d used in marking text in TeX files for later correction (i.e., awaiting a final search for all appearances of the string ‘xxx’, which wouldn’t otherwise appear, and for which I later learned the string ‘tk’ is employed by journalists, for similar reasons).
19The csh scripts were translated to Perl starting in 1994, when NSF funding permitted actual employees.
Under the model imagined by Mundie, applications and services that wanted to make use of sensitive data, such as a person’s genome sequence or current location, would have to register with authorities. A central authority would distribute encryption keys to applications, allowing them to access protected data in the ways approved by the data’s owners.
The use of cryptographic wrappers would ensure that an application or service couldn’t use the data in any other way. But the system would need to be underpinned by new regulations, said Mundie: “You want to say that there are substantial legal penalties for anyone that defies the rules in the metadata. I would make it a felony to subvert those mechanisms.”
If I understand correctly, this idea really is calling for DRM. The only difference is the use case: instead of intending to restrict an individual user’s control over their computing device in order to prevent them from doing certain things with some “content” on, or accessed via, their device, Mundie wants applications (i.e., organizations) to be prevented from doing certain things with some “data” on, or accessed via, their computers.
Sounds great. Conceivably could even be well intentioned. But, just as “consumer” DRM abets monopoly and does not prevent copying, this data DRM would…do exactly the same thing.
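To make the mechanism concrete, here is a toy sketch (my illustration, not Mundie’s actual design) of the model described above: a central authority registers applications, hands out keys, and a policy check gates every use of the wrapped data. All names are invented, and the “encryption” is a placeholder XOR, not real cryptography; a real system would use authenticated encryption.

```python
# Toy model: central key authority + policy-enforcing data wrapper.
import secrets

class KeyAuthority:
    """Central authority: registers apps and distributes per-app keys."""
    def __init__(self):
        self._apps = {}

    def register(self, app_id, approved_uses):
        key = secrets.token_bytes(32)
        self._apps[app_id] = (key, set(approved_uses))
        return key

    def approved(self, app_id, use):
        return use in self._apps.get(app_id, (None, set()))[1]

def _xor(data, key):
    # Placeholder "encryption" for illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class WrappedData:
    """Cryptographic wrapper: refuses any use not in the owner's policy."""
    def __init__(self, plaintext, key, authority):
        self._ciphertext = _xor(plaintext, key)
        self._authority = authority

    def use(self, app_id, use, key):
        if not self._authority.approved(app_id, use):
            raise PermissionError(f"{use!r} not approved for {app_id!r}")
        return _xor(self._ciphertext, key)

authority = KeyAuthority()
key = authority.register("ancestry-app", approved_uses={"display"})
genome = WrappedData(b"GATTACA", key, authority)

print(genome.use("ancestry-app", "display", key))  # b'GATTACA'
try:
    genome.use("ancestry-app", "sell", key)        # not in the policy
except PermissionError as e:
    print("blocked:", e)
```

Of course, as with “consumer” DRM, nothing in the wrapper stops an approved application from copying the plaintext once it has it; hence the proposed felony backstop.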
Meanwhile, law enforcement, politicians, and media see devices locked down by a vendor, rather than controlled by users, as the solution to device theft (rendering the device relatively unsalable, and data inaccessible).
I want to read, but don’t recall, any anti-info-freedom (not that it would self-describe that way) speculative fiction — science fiction, fantasy, dystopian, utopian, or otherwise. The above gives some hint about how to go about it: imagine a world in which DRM+criminal law works great, and tell stories about how various types of bad actors are thwarted by the combination. Or, one where society falls apart because it hasn’t been implemented.
Another pro-IP story idea: the world faces some intractable problem that requires massive intellectual input, but cannot coordinate to solve it. Maybe a disease. Maybe in the form of an alien invasion that can only be defeated by creating an alien disease. Or everyone is enslaved because all is known, and everyone knows that no privacy means no freedom. But someone has the bright idea to re-introduce or strengthen IP mechanisms, which save the day.
One story I’d like to think wouldn’t work in even cardboard form is that nobody produces and promotes big budget cultural artifacts due to lack of IP or its enforcement, and as a result everyone is sad. The result is highly unlikely as people love whatever cultural works they’re surrounded by. But, maybe the idea could work as a discontinuity: suddenly there are no more premium video productions. People have grown up with such being the commanding heights of culture, and without this, they are sad. They have nothing to talk to friends about, and society breaks down. If this story were a film, people could appear smart by informing their friends that maybe the director really intended to question our dependence on premium video such as the film in question.
A Penny for Your Thoughts by Talllama is the winner, unanimously selected by the jury. It’s a fun transposition of exactly today’s copyright and its debates (including wild mischaracterization) into a future with mind uploading. Quotes:
“My mom and dad would get upset at me.” He sent her a copy of his anxiety.
“Well my dad says copyright is stupid,” Helen said, sending back an emotion that was pitying yet vaguely contemptuous. “He says anyone who won’t pirate is a dummy.”
Timothy scowled at her. “My dad says that piracy is stealing.”
“My dad and I have trillions of books and thoughts, so we know better than you,” Helen said.
“You see, Timothy,” his father continued, “If people didn’t have an incentive to think or dream, they wouldn’t. And then no one would have any new thoughts. Everyone would stop thinking because there wouldn’t be any money in it.”
“But you said people had thoughts in 1920 even though there was no copyright.”
“Yes, you’re right. What I mean is that there were no professional thinkers in those days.”
“It would be bad if people stopped thinking,” Timothy said.
Lucy’s Irrevocable, Colossal, Terrible Mistake by Chris Sakkas tells a story in which releasing stuff under a free license has amazing results. Unfortunately free licenses aren’t magic, and it isn’t clear to me what the story says about the future of copyright. Quote:
An alternative bookshop in Sussex, on the other side of the world to Lucy, created a video ad with her favourite song as its backing track. The ad ended with a thanks to Lucy for releasing her music under a free, libre and open licence and a hyperlink. Hundreds more people visited her site, the passive consumers of big business! They used the donate button on her site to spray her with filthy lucre.
Perfect Memory by Jacinto Dávila describes a world of 2089 mediated by perfect memory of all non-intimate events and by voting for the assignment of credit; what role does copyright play in such a future? Quote:
[Socio-mathematics] was also the source of an unprecedented and fundamental agreement. All the stakeholders of the world came, after many unfortunate and even bloody events, to negotiate a new framework for producing and sharing common knowledge. And the basis they found was that to preserve freedom, but also the health of the whole planet and its species, that knowledge had to be shared, easily and readily, among all the stakeholders.
That led to a rebuttal of so-called intellectual property and copyright laws and their replacement with a body of global law acknowledging our common heritage, codependent future and the fundamental right of knowledge everyone has.
Copyrights in Chopin’s future by Krzysztof Blachnicki (English translation by Wojciech Pędzich) has Chopin resurrected in 2015 through unspecified but expensive means, then exploited by and escaping from the current recording industry. A fun idea, but ultimately a stereotypical anti-recording-industry rant. Quote:
I hope that more people will have their own opinions instead of listening to the hissing of those snakes, sucking money out of artists to pay off their new automobiles. Wake up, folks, a good musician will earn his daily bread even if he decides to let his music go for free, for all to share. A poor man will be able to listen to real music, while a wealthy man will make the artist’s effort worthwhile. Isn’t it all about just that? Each may benefit, except the music companies which become redundant, so they turn to lies in order to keep themselves afloat.
What is an author? by refined quotes is a story in which all legal ideas are closely regulated and bland, “old art” is outlawed so that people consume new, legal stuff, the good stuff and real artists are underground, with the additional twist that ideas take animal form. Quote:
You see? An artist is a little like an art producer. But he deals with the genuine ideas, as you see. He doesn’t buy them, like the law says he should. He just comes to places like this and spends his time with them. It’s a slow process. No one knows why precisely, but this crazy little ideas are in love with him, well, with all the artists.
The Ambiguous Future of Copyright by HOT TOCO is a snarky take on where copyright and computing are headed, presumably meaning to project ambiguous reception of Ubuntu/Canonical ten years into the future. Quote:
Friend2: “If I can extract info from this rant, I think Commonible, Ltd, is saying they’ve perfected trusted computing, fully protecting you from hacking and making ALL media available, fully compensating all value chains.”
Friend3 (quiet one): “I read about sth like this, Project Xanaxu. Real old stuff. The inventor thought the Web failed to transclude micropayments.”
500 Years of Copyright Law by Holovision embeds current copyright factoids in descriptions of future eras. I can’t tell what its “Copynorm Exchange Decentralization Entente (CEDE)” regime consists of, but maybe that is also a current copyright factoid: someone reading a pamphlet describing copyright and mentioning a few acronyms (e.g., TRIPS) would not have much sense of the regime. Quote:
Attempts to put digital rights management into 3D printers were sooner or later unsuccessful against hardware hackers. There were open sourced 3D printers but many perceived them to be inferior to the commercially patented ones. When the commercial 3D printers were used to make other printers most companies left the marketplace. This left many still infringing the 3D printers with the excuse that the printers became “abandonware”.
Copyright Protest Song by Tom Konecki doesn’t seem to say anything about the future, but does capture various bits of complaint about the current regime. Quote:
Everybody wants only money and success
And none remembers the idea of open-access
To acquire knowledge and gather information
That is now the object of companies’ manipulation.
Copyright – Real Vision or fantastic vision? by Arkadiusz Janusz (English translation by Kuba Kwiatkowski) contains a proposal of the type “metadata and tracking will get everyone paid” explained in a parent-child lecture. Quote:
The file doesn’t contain a price, only points. In other words, the price is quoted in points. A point has a different monetary value for every country. Here, the minimum wage is about 1000 dollars. We divide the minimum wage by one thousand and receive the amount value of 1 point. If you download a movie, the server checks in which country you are, and converts the points into the appropriate price.
That’s why in our times, pirates are on the verge of extinction. Most frequently, they’re maniacs or followers of some strange ideologies.
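The pricing scheme in the quote reduces to one line of arithmetic: a point is worth one thousandth of the local minimum wage, so the same file costs different amounts in different countries. A minimal sketch (the wage figures and country codes are invented for illustration):

```python
# Hypothetical monthly minimum wages in dollars, per the story's scheme.
MINIMUM_WAGE = {"US": 1000.0, "PL": 500.0}

def price(points, country):
    """Convert a file's point count to a local price."""
    point_value = MINIMUM_WAGE[country] / 1000  # 1 point = wage / 1000
    return points * point_value

print(price(500, "US"))  # 500.0 where the minimum wage is 1000
print(price(500, "PL"))  # 250.0 where the minimum wage is half that
```

The geolocation and payment-tracking parts of the proposal are, naturally, where all the hard problems hide.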
You can also read my review of last year’s future of copyright contest anthology, which links to each selection. This year’s selections are notably less dystopian and take less of a position on what the future of copyright ought be.
I enjoyed judging this year’s contest, and hope it and any future iterations achieve much greater visibility. Current copyright debates seem to me to have an incredibly short-term focus, which can’t be for the good when the changes which have supposedly produced the current debate are only speeding up. My one complaint about the contest, other than its lack of fame, is that “copyright” is a deeply suboptimal frame for thinking about its, and our, future. I will try to address this point directly soon, but some of it can be read from my contest entry of last year (other forms of info regulation with different policy goals being much more pertinent than quibbling over the appropriateness of the word “copyright”).
If software freedom is important, it must be attacked, lest it die from the unremitting bludgeoning of obscurity and triviality. Though necessary, I don’t particularly mean trivial attacks on overblown cleverness, offensive advocates, terminological nitpicking, obscurantism, fragmentation, poor marketing, lack of success, lack of diversity, and more. Those are all welcome, but mostly (excepting the first, my own gratuitously obscure, nitpicking, and probably offensive partial rant against subversive heroic one-wayism) need corrective action such as Software Freedom Day and, particularly regarding the last, OpenHatch.
I mostly mean attacking the broad ethical, moral, political, and utilitarian assumptions, claims, and predictions of software freedom. This may mean starting with delineating such claims, which are very closely coupled, righteous expressions notwithstanding. So far, software freedom has been wholly ignored by ethicists, moral philosophers, political theorists and activists, economists and other social scientists. Software freedom people who happen to also be one of the aforementioned constitute a rounding error.
But you don’t have to be an academic, activist, software developer, or even a computer user to have some understanding of and begin to critique software freedom, any more than one needs to be an academic, activist, businessperson, or voter to have some understanding of and begin to critique the theory and practice of business, democracy, and other such institutional and other social arrangements.
Computation does, and will ever more so, underlie and sometimes dominate our arrangements. Should freedom be a part of such arrangements? Does “software freedom” as roughly promoted by the rounding error above bear any relation to the freedom (and other desirables; perhaps start with equality and security) you want, or wish to express alignment with?
If you want to read, a place to start is the seminal Philosophy of the GNU Project essays, many ripe for beginning criticism (as are many classic texts; consider the handful of well known works of the handful of philosophers of popular repute; the failure of humanity to move on is deeply troubling).
Law of headlines ending in a question mark is self-refuting in multiple ways. The interrobang ending signifies an excited fallibility, if the headline can possibly be interpreted charitably given the insufferable preaching that follows, this sentence included.
Try some free software that is new to you today. You ought to have LibreOffice installed, even if you rarely use it, in order to import and export formats that whatever else you may be using probably can’t handle. I finally got around to starting a MediaGoblin instance (not much to see yet).
The talks were non-technical, unlike (I gather) most previous SVAOS talks (this was the first event in Fremont, which is much more convenient for me than Santa Clara, where most previous talks have been held), but very interesting.
I did not realize how many car- and ride-sharing startups and other initiatives exist. Dozens (in Germany alone?) or hundreds of startups, and all manufacturers, rental companies, and other entities with fleets are at least thinking about planning something. That seems good on its own, and will provide good experience for the further, more intensive and efficient use of vehicles to be enabled by robocars.
Carpooling and other forms of ride-sharing have gone up and down with fuel rationing and prices. Carsharing seems to go back to at least 1948, but with slow growth, only recently becoming a somewhat mainstream product and practice. Ride- and car-sharing ought be complements. Sharing a taxi, shared vans, and even mass transit could in some ways be seen as primitive examples of this complementarity.
Rationing is not in effect now, and real prices aren’t that high, so I imagine current activity must mostly be a result of computers and communications making coordination more efficient. This is highlighted by the reliance and hope of startups and other initiatives on web and mobile applications and on in-car computers and communications for access, control, coordination, reputation, and tracking.
But none of this seems to be open source at the end-user service/product level. Certainly much or even most of it is built on open source components (web as usual, auto internals moving that way). These seem like important arenas to argue against security-through-obscurity in vehicles and their communications systems, and to demand auditability and public benefit for public systems in various senses (one of the startups suggested marketing their platform to municipal governments; if reputation systems are to eventually mediate day-to-day activities, they need scrutiny).
A couple weeks ago I attended the Free Software Foundation’s annual conference, Libre Planet, held at UMass Boston a bit south of downtown. I enjoyed the event considerably, but can only give brief impressions of some of the sessions I saw.
John Sullivan, Matt Lee, Josh Gay started with a welcome and talk about some recent FSF campaigns. I think Sullivan said they exceeded their 2011 membership goal, which is great. Join. (But if I keep to my refutation schedule, I’m due to tell you why you shouldn’t join in less than 5 years.)
Rubén Rodríguez spoke about Trisquel, a distribution that removes non-free software and recommendations from Ubuntu (lagging those releases by about 5 months) and makes other changes its developers consider user-friendly, such as running GNOME 3 in fallback mode and changing some privacy settings in Web (an IceWeasel-like de-branded Firefox). I also saw a lightning talk from someone associated with ThinkPenguin, which sells computers pre-loaded with Trisquel.
Asheesh Laroia spoke about running events that attract and retain newcomers. You can read about OpenHatch (the organization he runs) events or see a more specific presentation he recently gave at PyCon with Jessica McKellar. The main point of humor in the talk concerned not telling potential developers to download a custom built VM to work with your software: it will take a long time, and often not work.
Joel Izlar’s talk, titled Digital Justice: How Technology and Free Software Can Build Communities and Help Close the Digital Divide, was about his work with Free IT Athens.
Alison Chaiken gave the most important talk of the conference, Why Cars need Free Software. I was impressed by how many manufacturers are using at least some free software in vehicles and distressed by the state of automotive security and proprietary vendors pitching security through obscurity. Like Appelbaum and Sandler, get Chaiken in front of as many people as possible.
Brett Smith gave an update on the FSF GPL compliance Lab, including mentioning MPL 2.0 and potential CC-BY-SA 4.0 compatibility with GPLv3 (both of which I’ve blogged about before), but the most interesting part of the talk concerned his participation in Trans-Pacific Partnership Stakeholder Forums; it sounded like software freedom concerns got a more welcome reception than expected.
ginger coons spoke about Libre Graphics Magazine, a graphic arts magazine produced entirely with free software. I subscribed.
Deb Nicholson gave a great, funny presentation on Community Organizing for Free Software Activists. If the topic weren’t free software, Nicholson could make a lot of money as a motivational speaker.
Evan Prodromou spoke on the Decentralized Social Web, using slides the same or very similar to his SXSW deck, which is well worth flipping through.
Eben Moglen’s talk was titled Free Software’s Future Amidst the Commercial Open Source Wars: How to Turn the Patent Disaster and Compliance Issues to Our Advantage, but I think I missed the how-to part. Moglen also talked for a while about IRS scrutiny of free software organization 501(c)(3) applications, vaguely hinting at a potential need to “re-evaluate how our infrastructure is organized” (paraphrase). I’ll have more to say about that, but in another post.
Yukihiro ‘matz’ Matsumoto spoke on how Emacs changed his life, including introducing him to programming, free software, and influencing the design of Ruby.
Matthew Garrett spoke on Preserving user freedoms in the 21st century. Perhaps the most memorable observation he made concerned how much user modification of software occurs without adequate freedom (making the modifications painful), citing CyanogenMod.
I mostly missed the final presentations in order to catch up with people I wouldn’t have been able to otherwise, but note that Matsumoto won the annual Advancement of Free Software award, and GNU Health the Free Software Award for Projects of Social Benefit. Happy hacking!
Last week I saw a play version of Little Brother. I winced throughout, perhaps due to over-familiarity with the topics and locale, and there are just so many ways a story with its characteristics (heavy handed politics that I agree with, written for adolescents, set in near future) can embarrass me. Had there been any room for the nuance of apathy, a few bars of Saturday Night Holocaust would’ve been great to work into the play. But the acting and other stuff making up the play seemed well done, I’m glad that people are trying to make art about issues that I care about, and I’d recommend seeing the play (extended to Feb 25 in San Francisco) for anyone less sensitive.
If you don’t feel like seeing a play in San Francisco, I recommend Jacob Appelbaum’s talk on surveillance, the security state, and free software at linux.conf.au 2012. It contains everything important Little Brother does and more, and isn’t fiction:
I frequently complain that free/libre/open software and nearby aren’t taken seriously as being important to a free and otherwise good society and that advocates have completely failed to demonstrate this importance. Well, much more is needed, but the above talks give me hope, and getting Appelbaum and Sandler in front of as many people as possible would be great progress.
Open Hardware License. The open hardware movement received a boost when CERN published an Open Hardware License (“CERN OHL”). The CERN OHL is drafted as a documentation license which is careful to distinguish between documentation and software (which is not licensed under the CERN OHL) http://www.ohwr.org/documents/88. The license is “copyleft” and, thus, similar to GPLv2 because it requires that all modifications be made available under the terms of the CERN OHL. However, the license to patents, particularly important for hardware products, is ambiguous. This license is likely to be the first of a number of open hardware licenses, but, hopefully, the open hardware movement will keep the number low and avoid the “license proliferation” which has been such a problem for open source software.
But the CERN OHL isn’t the first “open hardware license”. Or perhaps it is the nth first. Several free software inspired licenses intended specifically for open hardware design and documentation have been created over the last decade or so. I recall encountering one dating back to the mid-1990s, but can’t find a reference now. Discussion of open hardware licenses was hot at the turn of the millennium, though most open hardware projects from that time didn’t get far, and I can’t find a license that made it to “1.0”.
Probably the first arguably “high quality” license drafted specifically for open hardware is the TAPR Open Hardware License (2007). The CERN OHL might be the second such. There has never been consensus on the best license to use for open hardware. Perhaps this is why CERN saw fit to create yet another (incompatible copyleft at that — incompatible with TAPR OHL, GPL, and BY-SA), but there still isn’t consensus in 2012.
Licenses primarily used for software (usually [L]GPL, occasionally BSD, MIT, or Apache) have also been used for open hardware since at least the late 1990s — and much more so than any license created specifically for open hardware. CC-BY-SA has been used by Arduino since at least 2008 and Qi since 2009.
In 2009 the primary drafter of the TAPR OHL published a paper with a rationale for the license. By my reading of the paper, the case for a license specific to hardware seems pretty thin — hardware design and documentation files, and distribution of printed circuit boards seem a lot like program source and executables, and mostly subject to copyright. It also isn’t clear to me why the things TAPR OHL handles differently than most open source software licenses (disclaims strictly being a copyright license, instead wanting to serve as a clickwrap contract; attempts to describe requirements functionally, instead of legally, to avoid describing explicitly the legal regime underlying requirements; limited patent grant applies to “possessors” not just contributors) might not be interesting for software licenses, if they are interesting at all, nor why features generally rejected for open source software licenses shouldn’t also be rejected for open hardware (email notification to upstream licensors; a noncommercial-only option — thankfully deprecated late last year).
Richard Stallman’s 1999 note about free hardware seems more clear and compelling than the TAPR paper, but I wish I could read it again without knowing the author. Stallman wrote:
What this means is that anyone can legally draw the same circuit topology in a different-looking way, or write a different HDL definition which produces the same circuit. Thus, the strength of copyleft when applied to circuits is limited. However, copylefting HDL definitions and printed circuit layouts may do some good nonetheless.
In a thread from 2007 about yet another proposed open hardware license, three people who generally really know what they’re talking about each wondered why a hardware-specific license is needed: Brian Behlendorf, Chris DiBona, and Simon Phipps. The proposer withdrew and decided to use the MIT license (a popular non-copyleft license for software) for their project.
My bias, as with any project, would be to use a GPL-compatible license. But my bias may be inordinately strong, and I’m not starting a hardware project.
One could plausibly argue that there are still zero quality open-hardware-specific licenses, as an upstream notification requirement is arguably non-open, and the CERN OHL, like the TAPR OHL, contains one. Will history repeat?
Addendum: I just noticed the existence of an open hardware legal mailing list, probably a good venue to follow if you’re truly interested in these issues. The organizer is Bruce Perens, who is involved with TAPR and is convinced non-copyright mechanisms are absolutely necessary for open hardware. His attempt to bring rigor to the field and his decades of experience with free and open source software are to be much appreciated in any case.
Since September 26 I’ve been exclusively using Firefox Nightly builds. I noticed an annoying bug a few days ago. It was gone the next day. It occurred to me that I hadn’t noticed any other bugs. For months prior, I had used Firefox Aurora (roughly alpha) and don’t recall any bugs.
Since October 15 I’ve been using Debian Testing on my main computer. No problems.
For years prior, I had been using Ubuntu, and upgrading shortly after they released an alpha of their next six-month release. Years ago, such upgrades would always break something. I upgraded an older computer to the just released Ubuntu 12.04 alpha. Nothing broke.
In recent memory, final releases of desktop software would often crash. Now, there are as many “issues” as ever, but they seem to be desired enhancements, not bugs. The only buggy application I can recall running on my own computer in the last year is PiTiVi, but that is just immature.
Firefox and Debian (and the many applications packaged with Debian) probably aren’t unique. I hope most people have the relatively bug-free existence that I do.
Has desktop software actually gotten more stable over the last 5-10 years? Has anyone quantified this? If there’s anything to it, what are the causes? Implementation of continuous integration testing? Application stagnation (nothing left to do but fix bugs — doubt it!)? A mysterious Flynn Effect for software? Or perhaps I’m unadventurous or delusional?