Archive for April, 2012

Future of Copyright

Monday, April 30th, 2012

“Copyright” (henceforth, copyrestriction) is merely a current manifestation of humanity’s malgovernance of information, of commons, of information commons (the combination being the most pertinent here). Copyrestriction was born of royal censorship and monopoly grants. It has acquired an immense retinue of administrators, advocates, bureaucrats, goons, publicists, scholars, and more. Its details have changed and especially proliferated. But its concept and impact are intact: grab whatever revenue and control you can, given your power, and call your grabbing a “right” and necessary for progress. As a policy, copyrestriction is far from unique in exhibiting these qualities. It is only particularly interesting because it, or more broadly, information governance, is getting more important as everything becomes information intensive, increasingly via computation suffusing everything. Before returning to the present and future, note that copyrestriction is also not temporally unique among information policies. Restriction of information for the purposes of control and revenue has probably existed since the dawn of agriculture, if not longer, e.g., cults and guilds.

Copyrestriction is not at all a right to copy a work, but a right to persecute others who distribute, perform, etc., a work. Although it is often said that a work is protected by copyrestriction, this is strictly not true. A work is protected through the existence of lots of copies and lots of curators. The same is true for information about a work, i.e., metadata, e.g., provenance. Copyrestriction is an attack on the safety of a work. Instead, copyrestriction protects the revenue and control of whoever holds copyrestriction on a work. In some cases, some elements of control remain with a work’s immediate author, even if they no longer hold copyrestriction: so-called moral rights.

Copyrestriction has become inexorably more restrictive. Technology has made it increasingly difficult for copyrestriction holders and their agents to actually restrict others’ copying and related activity. Neither trend has to give. Neither abolition nor a police state in service of copyrestriction is likely in the near future. Nor is the strength of copyrestriction the only dimension to consider.

Free and open source software has demonstrated the ethical and practical value of the opposite of copyrestriction, which is not its absence, but regulation mandating the sharing of copies, specifically in forms suitable for inspection and improvement. This regulation most famously occurs in the form of source-requiring copyleft, e.g., the GNU General Public License (GPL), which allows copyrestriction holders to use copyrestriction to force others to share works based on GPL’d works in their preferred form for modification, e.g., source code for software. However, this regulation occurs through other means as well, e.g., communities and projects refusing to curate and distribute works not available in source form, funders mandating source release, and consumers refusing to buy works not available in source form. Pro-sharing regulation (using the term “regulation” maximally broadly to include government, market, and others; some will disbelieve in the efficacy or ethics of one or more, but realistically a mix will occur) could become part of many policies. If it does not, society will be put at great risk by relying on security through obscurity, and lose many opportunities to scrutinize, learn about, and improve society’s digital infrastructure and the computing devices individuals rely on to live their lives, and to live, period.

Information sharing, and regulation promoting and protecting the same, also ought to play a large role in the future of science. Science, as well as required information disclosure in many contexts, long precedes free and open source software. Free and open source software has only put a finer point on pro-sharing regulation in relation to copyrestriction, since the most relevant works (mainly software) are directly subject to both. But the extent to which pro-sharing regulation becomes a prominent feature of information governance, and more narrowly, the extent to which people have software freedom, will depend mostly on the competitive success of projects that reveal or mandate revelation of source, the success of pro-sharing advocates in making the case that pro-sharing regulation is socially desirable, and their success in getting pro-sharing regulation enacted and enforced (again, whether in customer and funding agreements, government regulation, community constitutions, or other), much more so than on copyrestriction-based enforcement of the GPL and similar. But it is possible that the GPL is setting an important precedent for pro-sharing regulation, even though the pro-sharing outcome is conceptually orthogonal to copyrestriction.

Returning to copyrestriction itself, if neither abolition nor totalism are imminent, will humanity muddle through? How? What might be done to reduce the harm of copyrestriction? This requires a brief review of the forces that have resulted in the current muddle, and whether we should expect any to change significantly, or foresee any new forces that will significantly impact copyrestriction.

Technology (itself, not the industry as an interest group) is often assumed to be making copyrestriction enforcement harder and driving demands for harsher restrictions. In detail, that’s certainly true, but for centuries copyrestriction has been resilient to technical changes that make copying ever easier. Copying will continue to get easier. In particular, “all culture on a thumb drive” (for some very limited definition of “all”) approaches, or is here if you only care about a few hundred feature-length films, or are willing to use a portable hard drive and only care about a few thousand films (or much larger numbers of books and songs). But steadily more efficient copying isn’t going to destroy copyrestriction sector revenue. More efficient copying may be necessary just to maintain current levels of unauthorized sharing, given steady improvement in authorized availability of content industry controlled works, and little effort to make unauthorized sharing easy and worthwhile for most people (thanks largely to suppression of anyone who tries, and media management not being an easy problem). Also, most revenue collection from businesses and other organizations has not and probably will not become much more difficult due to easier copying.

National governments are the most powerful entities in this list, and the biggest wildcards. Although most of the time they act roughly as administrators or follow the cue of more powerful national governments, copyrestriction laws and enforcement are ultimately in their courts. As industries that could gain from copyrestriction grow in developing nations, those national governments could take on leadership of increasing restriction and enforcement, and with less concern for civil liberties, could have few barriers. At the same time, some developing nations could decide they’ve had enough of copyrestriction’s inequality promotion. Wealthy national governments could react to these developments in any number of ways. Trade wars seem very plausible, actual war prompted by a copyrestriction or related dispute not unimaginable. Nations have fought stupid wars over many perceived economic threats.

The traditional copyrestriction industry is tiny relative to the global economy, and even the U.S. economy, but its concentration and cachet make it a very powerful lobbyist. It will grab all of the revenue and control it possibly can, and it isn’t fading away. As alluded to above, it could become much more powerful in currently developing nations. Generational change within the content industry should lead to companies in that industry better serving customers in a digital environment, including conceivably attenuating persecution of fans. But it is hard to see any internal change resulting in support for positive legal changes.

Artists have always served as exhibit one for the content industry, and have mostly served as willing exhibitions. This has been highly effective, and every category genuflects to the need for artists to be paid, and generally assumes that copyrestriction is mandatory to achieve this. Artists could cause problems for copyrestriction-based businesses and other organizations by demanding better treatment under the current system, but that would only affect the details of copyrestriction. Artists could significantly help reform if more were convinced of the goodness of reform and usefulness of speaking up. Neither seems very likely.

Other businesses, web companies most recently, oppose copyrestriction directions that would negatively impact their businesses in the short term. Their goal is not fundamental reform, but continuing whatever their current business is, preferably with increasing profits. Just the same as content industries. A fundamental feature of muddling through will be tests of various industries and companies to carve out and protect exceptions. And exploit copyrestriction whenever it suits them.

Administrators, ranging from lawyers to WIPO, though they work constantly to improve or exploit copyrestriction, will not be the source of significant change.

Free and open source software and other constructed commons have already disrupted a number of categories, including server software and encyclopedias. This is highly significant for the future of copyrestriction, and more broadly, information governance, and a wildcard. Successful commons demonstrate feasibility and desirability of policy other than copyrestriction, help create a constituency for reducing copyrestriction and increasing pro-sharing policies, and diminish the constituency for copyrestriction by reducing the revenues and cultural centrality of restricted works and their controlling entities. How many additional sectors will opt-in freedom disrupt? How much and for how long will the cultural centrality of existing restricted works retard policy changes flowing from such disruptions?

Cultural change will affect the future of copyrestriction, but probably in detail only. As with technology change, copyrestriction has been incredibly resilient to tremendous cultural change over the last centuries.

Copyrestriction reformers (which includes people who would merely prevent additional restrictions, abolitionists, and those between and beyond, with a huge range of motivations and strategies among them) will certainly affect the future of copyrestriction. Will they only mitigate dystopian scenarios, or cause positive change? So far they have mostly failed, as the political economy of diffuse versus concentrated interests would predict. Whether reformers succeed going forward will depend on how central and compelling they can make their socio-political cause, and thus swell their numbers and change society’s narrative around information governance — a wildcard.

Scholars contribute powerfully to society’s narrative over the long term, and constitute a separate wildcard. Much scholarship has moved from a property- and rights-based frame to a public policy frame, but this shift as yet is very shallow, and will remain so until a property- and rights-basis assumption is cut out from under today’s public policy veneer, and social scientists rather than lawyers dominate the conversation. This has occurred before. Over a century ago economists were deeply engaged in similar policy debates (mostly regarding patents, mostly contra). Battles were lost, and tragically economists lost interest, leaving the last century’s policy to be dominated by grabbers exploiting a narrative of rights, property, and intuitive theory about incentives as cover, with little exploration and explanation of public welfare to pierce that cover.

Each of the above determinants of the future of copyrestriction largely hinges on changing (beginning with engaging, in many cases) people’s minds, with partial exceptions for disruptive constructed commons and largely exogenous technology and culture change (partial as how these develop will be affected by copyrestriction policy and debate to some extent). Even those who cannot be expected to effect more than details as a class are worth engaging — much social welfare will be determined by details, under the safe assumption that society will muddle through rather than make fundamental changes.

I don’t know how to change or engage anyone’s mind, but close with considerations for those who might want to try:

  • Make copyrestriction’s effect on wealth, income, and power inequality, across and within geographies, a central part of the debate.
  • Investigate assumptions of beneficent origins of copyrestriction.
  • Tolerate no infringement of intellectual freedom, nor that of any civil liberty, for the sake of copyrestriction.
  • Do not assume optimality means “balance” nor that copyrestriction maximalism and public domain maximalism are the poles.
  • Make pro-sharing, pro-transparency, pro-competition, and anti-monopoly policies, orthogonal to the above dimension, part of the debate.
  • Investigate and celebrate the long-term policy impact of constructed commons such as free and open source software.
  • Take into account market size, oversupply, network effects, non-pecuniary motivations, and the harmful effects of pecuniary motivations on creative work, when considering supply and quality of works.
  • Do not grant that copyrestriction-based revenues are or have ever been the primary means of supporting creative work.
  • Do not grant big budget movies as a failsafe argument for copyrestriction; wonderful films will be produced without it, and even if not, we will love whatever cultural forms exist and should be ashamed to accept any reduction of freedom for want of spectacle.
  • Words are interesting and important but trivial next to substance. Replace all occurrences of “copyrestriction” with “copyright” as you see fit. There is no illusion concerning our referent.

This work takes part in the Future of Copyright contest and is published under the CC BY-SA 3.0 license.


Intellectual Protectionism’s regressive double taxation of the real economy

Sunday, April 29th, 2012

How Apple Sidesteps Billions in Taxes:

Almost every major corporation tries to minimize its taxes, of course. For Apple, the savings are especially alluring because the company’s profits are so high. Wall Street analysts predict Apple could earn up to $45.6 billion in its current fiscal year — which would be a record for any American business.

For anyone slightly concerned about inequality, this record ought to raise another red flag concerning the effect of copyright and patent monopolies. (Similarly, review a list of the wealthiest individuals.)

Apple serves as a window on how technology giants have taken advantage of tax codes written for an industrial age and ill suited to today’s digital economy. Some profits at companies like Apple, Google, Amazon, Hewlett-Packard and Microsoft derive not from physical goods but from royalties on intellectual property, like the patents on software that makes devices work. Other times, the products themselves are digital, like downloaded songs. It is much easier for businesses with royalties and digital products to move profits to low-tax countries than it is, say, for grocery stores or automakers. A downloaded application, unlike a car, can be sold from anywhere.

The growing digital economy presents a conundrum for lawmakers overseeing corporate taxation: although technology is now one of the nation’s largest and most valued industries, many tech companies are among the least taxed, according to government and corporate data. Over the last two years, the 71 technology companies in the Standard & Poor’s 500-stock index — including Apple, Google, Yahoo and Dell — reported paying worldwide cash taxes at a rate that, on average, was a third less than other S.& P. companies’. (Cash taxes may include payments for multiple years.)

First tax: monopoly pricing. Second tax: burden shifted to entities less able to move profits. Remove monopolies for much good, then resume debate about all aspects of taxation per usual, as you wish.

Caveats:

  • Real economy usually refers to the non-financial sector. Suggestions welcome for a corresponding term for the non-IP sector.
  • I may be double counting: without copyright and patent, “real” economy share of profits would increase, tax burden concomitantly.
  • Not all profits that are easy to move result from copyright and patent; e.g., I suspect only a small proportion of Google’s profits result even indirectly from such.
  • There are more non-IP than IP-related entities on record wealth and profit lists, in particular natural resource entities. I don’t claim IP is the dominant source of inequality — but surely an increasing one — and more easily mitigated than natural resource entities, or for that matter, dictators and other state entities, which I wish were included on rich lists.

Ban* human drivers somewhere by 2020

Saturday, April 28th, 2012

Read Brad Templeton’s latest post on self-driving cars, which has a number of updates. They’re coming fast, but it remains undetermined how fast we will drastically reduce transportation deaths, give people back a huge amount of time, reduce stress, and greatly reduce space and other resources dedicated to transportation, and how secure the new systems will be. Of course there are many reasons to be skeptical — the transition will probably be much slower and more problematic than needed, but in a few decades will still seem a major triumph. But I don’t want the hidden trillions of dollars, hours, lives, carbon emissions, malfunctions, etc. that could be saved sooner to be wasted.

Regarding security, malfunctions, etc., we need to demand use of proven secure protocols and source open to inspection, i.e., not play security through obscurity. Regarding space, planning for urbanity remade (largely, recovered) through autonomous vehicles needs to be the top urban planning priority.

The benefits will be so great that we should also think about how to speed adoption — the only disheartening news in Templeton’s post concerns a survey in which only 20% of car buyers would pay an additional $3,000 for a fully (if I understand correctly) self-driving car. How little respondents value their own time and lives, let alone others’! It’s time to start agitating for road owners to ban human drivers. Most road owners are governments, but not all — consider as an issue of public policy or consumer demand as you wish.

Won’t banning human drivers disadvantage poor people who can’t afford a self-driving car? Possibly very briefly, but on net I expect self-driving cars to have an egalitarian effect — they’ll make owning a vehicle at all unnecessary (a rental can be summoned on demand), reduce housing costs (of which parking is a big part), and allow the recovery of areas walled off and drowned out by highways.

Let’s ban human drivers from at least some roads by 2020. I suggest starting with San Pablo Avenue in Oakland, Emeryville, and Berkeley — because I live close to it! Admittedly a downtown area or certain lanes of a highway might be an easier start.

*In theory it is usually preferable to increase prices rather than ban altogether. In this case, obvious mechanisms would include drastically increasing driver license fees and tolls for vehicles with human drivers. In practice, a ban may be more feasible.

BayHac

Wednesday, April 25th, 2012

I attended BayHac over the weekend. There were a bunch of interesting impromptu talks. Notes on all those I recall follow, with other observations at the end.

  • The first talk encouraged people to get up, and demonstrated some hand stretches. Although almost everyone knows sitting hunched up all day is harmful, almost everyone needs an occasional reminder. A mention at any conference is well worthwhile for the individuals and community in question.
  • Plush is a POSIX shell server (in Haskell) with a web UI (Javascript; communication between them with JSON, session initiated with an unguessable URL), which already provides some nice context and control over display not available in a usual table, e.g., the output of each command is collapsible, pieces of the current path are clickable, and there are tooltips for each command argument.
  • You currently have to register (no verification) to see anything, but GitStar is a GitHub clone built on Hails, a framework for hosting mutually untrusted web applications (e.g., the project wiki and source browser in the case of GitStar), at least with respect to access to each others’ data, which is controlled via “Labeled IO”, with labels specifying policy around data based on Information Flow Control, a subject I had not heard of. GitStar and Hails source is mirrored on GitHub. An initial research paper and a promise of more are at the bottom of its README.
  • Visi is a language implemented in Haskell that seems somewhere between a spreadsheet and a traditional programming language read-eval-print-loop (ad hoc, immediate recalculation, but no grid). Spreadsheet programming is something I know almost nothing about, and ought to.
  • Composable Pipes. For readers who care about such things, note author dissuaded from using GPL in linked thread.
  • Something about typesafe reuse of types extending Agda’s typesystem. I understood very little (my fault).
  • cabal branch will check out source for any Haskell package with source repository annotations — source of the specific version you’re using, if the annotation specifies source-repository this.
  • A talk about Lift, a Scala web framework, mostly concerning the benefits of passing around a DOM representation rather than treating templates as blobs of text. I’m impressed by Lift, and played a bit with it a couple years ago, but was in no place to spend time to develop any real application.
  • Implementations of Paxos and parallel builds.
  • Interacting with DBUS (e.g., GNOME and KDE applications) from Haskell.
  • Shelly, a library for shell scripting in Haskell. Side point made that scripting languages, including Ruby, find initial popularity through scripting by sysadmins, not developer frameworks — true to my experience.
  • Visualizing n-gram relationships with SVG output.
  • Translating simple art pieces in Forth to C.
  • Pingwell is creating apps to bring pricing and other information to consumers when they can act on it, e.g., in a grocery store. I’m pretty sure this scenario has been imagined thousands of times over the past few decades; good that it will come to exist soon. The talk was mostly about using a Haskell computer vision library.

Other observations:

  • Macbooks in majority, but lower proportion than usual — and many, perhaps a majority, of people with Macbooks seemed to be developing on Linux in a virtual machine.
  • 100% male attendees, which is a bit disturbing, but I detected zero brogrammer vibe.
  • The first day was hosted at Hacker Dojo, which I had heard of but never visited. I was surprised at how large and quiet it was. At least during the day, it seems dozens of people use it as a coworking space.
  • Web application development, Yesod in particular, is attracting more people to Haskell (I can’t find a reference, but recall that #haskell and/or /r/haskell watchers increased substantially on the day Yesod 1.0 was released). Newbie attendees (me included) learning Haskell and Yesod are further evidence.
  • Lots of anguish and anguished cries about dependency hell.

Thanks to BayHac organizer Mark Lentczner (also Plush developer and haskell-platform release manager; watch his intro to Haskell video) for putting together such a well run and friendly event. I felt some trepidation about attending, knowing that almost everyone would be both smarter and more experienced than me, but everyone was helpful and patient. I’m glad I went.

unset GREP_OPTIONS (alias grep instead) to get an “acceptable egrep”

Saturday, April 21st, 2012

Building some software from source, I recently encountered

checking for egrep... configure: error: no acceptable egrep could be found in /usr/local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/xpg4/bin

and couldn’t find a solution anywhere. In the shower I remembered setting GREP_OPTIONS in my environment. That seems to have been the problem. After unsetting GREP_OPTIONS and obtaining the same default behavior for myself with

alias grep='grep --color=auto --perl-regexp'

the error went away. I guess configure is finding and running /bin/grep, which is affected by the environment variable but bypasses any aliases, since aliases apply only within the shell that defines them.
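The distinction is easy to demonstrate: exported environment variables reach child processes (such as a configure script), while aliases exist only in the shell that defines them. A minimal sketch, using a plain `sh -c` child to stand in for configure:

```shell
#!/bin/sh
# Aliases are a shell feature: the child process never sees them.
alias grep='grep --color=auto --perl-regexp'

# A child shell resolves grep to the plain binary; the alias is invisible:
sh -c 'command -v grep'

# ...but an environment variable assigned on the command line is
# exported to the child:
GREP_OPTIONS='--perl-regexp' sh -c 'echo "$GREP_OPTIONS"'   # prints: --perl-regexp
```

This is why the alias approach gives you colored, Perl-regexp grep interactively without breaking configure’s egrep check.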

Libre Planet 2012

Tuesday, April 10th, 2012


A couple weeks ago I attended the Free Software Foundation’s annual conference, Libre Planet, held at UMass Boston a bit south of downtown. I enjoyed the event considerably, but can only give brief impressions of some of the sessions I saw.

John Sullivan, Matt Lee, and Josh Gay started with a welcome and a talk about some recent FSF campaigns. I think Sullivan said they exceeded their 2011 membership goal, which is great. Join. (But if I keep to my refutation schedule, I’m due to tell you why you shouldn’t join in less than 5 years.)

Rubén Rodríguez spoke about Trisquel, a distribution that removes non-free software and recommendations from Ubuntu (lagging those releases by about 5 months) and makes other changes its developers consider user-friendly, such as running GNOME 3 in fallback mode and some Web (an IceWeasel-like de-branded Firefox) privacy settings. I also saw a lightning talk from someone associated with ThinkPenguin, which sells computers pre-loaded with Trisquel.

Asheesh Laroia spoke about running events that attract and retain newcomers. You can read about OpenHatch (the organization he runs) events or see a more specific presentation he recently gave at PyCon with Jessica McKellar. The main point of humor in the talk concerned not telling potential developers to download a custom built VM to work with your software: it will take a long time, and often not work.

Joel Izlar’s talk, titled Digital Justice: How Technology and Free Software Can Build Communities and Help Close the Digital Divide, covered his work with Free IT Athens.

Alison Chaiken gave the most important talk of the conference, Why Cars need Free Software. I was impressed by how many manufacturers are using at least some free software in vehicles and distressed by the state of automotive security and proprietary vendors pitching security through obscurity. Like Appelbaum and Sandler, get Chaiken in front of as many people as possible.

Brett Smith gave an update on the FSF GPL compliance Lab, including mentioning MPL 2.0 and potential CC-BY-SA 4.0 compatibility with GPLv3 (both of which I’ve blogged about before), but the most interesting part of the talk concerned his participation in Trans-Pacific Partnership Stakeholder Forums; it sounded like software freedom concerns got a more welcome reception than expected.

ginger coons spoke about Libre Graphics Magazine, a graphic arts magazine produced entirely with free software. I subscribed.

Deb Nicholson gave a great, funny presentation on Community Organizing for Free Software Activists. If the topic weren’t free software, Nicholson could make a lot of money as a motivational speaker.

Evan Prodromou spoke on the Decentralized Social Web, using slides the same or very similar to his SXSW deck, which is well worth flipping through.

Eben Moglen’s talk was titled Free Software’s Future Amidst the Commercial Open Source Wars: How to Turn the Patent Disaster and Compliance Issues to Our Advantage, but I think I missed the how to part. Moglen also talked for a while about IRS scrutiny of free software organization 501(c)(3) applications, vaguely hinting at a potential need to “re-evaluate how our infrastructure is organized” (paraphrase). I’ll have more to say about that, but in another post.

Chris Webber and I spoke about Creative Commons 4.0 licenses and free software/free culture cooperation. You can view our picture-only slides (odp; pdf; slideshare) but a recent interview with me and post about recent developments in MediaGoblin (Webber’s project) would be more informative and cover similar ground. We also pre-announced an exciting project that Webber will spam the world about tomorrow and sort of reciprocated for an award FSF granted Creative Commons three years ago — the GNU project won the Free Software Project for the Advancement of Free Culture Social Benefit Award 0, including the amount of 100BTC, which John Sullivan said would be used for the aforementioned exciting project.

Yukihiro ‘matz’ Matsumoto spoke on how Emacs changed his life, including introducing him to programming, free software, and influencing the design of Ruby.

Matthew Garrett spoke on Preserving user freedoms in the 21st century. Perhaps the most memorable observation he made concerned how much user modification of software occurs without adequate freedom (making the modifications painful), citing CyanogenMod.

I mostly missed the final presentations in order to catch up with people I wouldn’t have been able to otherwise, but note that Matsumoto won the annual Advancement of Free Software award, and GNU Health the Free Software Award for Projects of Social Benefit. Happy hacking!

Announcing RichClowd: crowdfunding with a $tatus check

Sunday, April 1st, 2012


Oakland, California, USA — 2012 April 1

Today, RichClowd pre-announces the launch of RichClowd.com, an exclusive “crowdfunding” service for the wealthy. Mass crowdfunding sites like Kickstarter have demonstrated a business model, but are held back by the high transaction costs of small funds and non-audacious projects proposed by under-capitalized creators. RichClowd will be open exclusively to funders and creators with already substantial access to capital.

The wealthy can fund and create audacious projects without joining together, but mass crowdfunding points to creative, marketing, networking, and status benefits to joint funding. So far mass crowdfunding has improved the marketplace for small projects and trinkets. The wealthy constitute a different stratum of the marketplace — in the clouds, relatively — and RichClowd exists to improve the marketplace for monuments, public and personal, and other monumental projects.

“Through exclusivity RichClowd will enable projects with higher class, bigger vision, and ultimately longer-lasting contributions to society”, said RichClowd founder Mike Linksvayer, who continued: “Throughout human history great people have amassed and created the infrastructure, artifacts and knowledge that survives and is celebrated. As the Medicis were to the Renaissance, RichClowders will be to the next stage of global society.”

RichClowd will initially have a membership fee of $100,000, which may be applied to project funding pledges. To ensure well-capitalized projects, RichClowd will implement a system called Dominant Assurance Contracts, which align the interests of funders and creators via a refund above the pledged amount for unsuccessful projects. This system will require creators to deposit the potential additional refund amount prior to launching a RichClowd project.
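The payoff structure of a dominant assurance contract can be sketched in a few lines. A hypothetical illustration only: the function name, the 5% bonus rate, and the dollar figures are assumptions for exposition, not actual RichClowd terms.

```python
def settle(pledges, goal, bonus_rate=0.05):
    """Dominant assurance contract settlement (illustrative sketch).

    If total pledges reach the goal, the creator receives the funds.
    Otherwise each funder is refunded their pledge plus a bonus,
    paid out of the creator's pre-launch deposit.
    """
    total = sum(pledges)
    if total >= goal:
        return {"funded": True, "to_creator": total, "refunds": []}
    # Unsuccessful: refund pledge plus bonus, rounded to cents.
    refunds = [round(p * (1 + bonus_rate), 2) for p in pledges]
    return {"funded": False, "to_creator": 0, "refunds": refunds}

# Two $100,000 pledges toward a $1,000,000 monument fall short,
# so each funder is made whole plus 5%: $105,000 back.
print(settle([100_000, 100_000], 1_000_000))
```

The bonus is what aligns interests: even a funder who doubts the project succeeds will pledge, since failure pays better than abstaining, which in turn makes success more likely.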

For the intellectual products of RichClowd projects, use of a forthcoming RichClowd Club License (RCCL) will be encouraged, making outputs maximally useful to funders, while maintaining exclusivity. Egalitarian projects will have the option of using a free public license.

The technology powering RichClowd.com will be developed openly and available under an AGPL open source badgeware intellectual property license. “RichClowd believes in public works. In addition to the many that will be created via the RichClowd service, open development of the RichClowd.com technology is the company’s own direct contribution to the extraordinary public work that is the Internet”, said Linksvayer.

About RichClowd

RichClowd is a pre-launch exclusive crowdfunding service with a mission of increasing the efficiency of bringing together great wealth and great projects to make an amazing world. Based in Oakland, California, a city with a reputation for poverty and agitation, RichClowd additionally takes on the local civic duty of pointing out Oakland’s incredible wealth and wealthy residents: to begin with, look up at the hills.

Contact

Mike Linksvayer, Founder
biginfo@richclowd.com

2004 April Fools

Sunday, April 1st, 2012

Comment on a previous refutation post from Phil Barker:

I was going to ask when you would start refuting your refutations, but I see you’ve already started :D

If I haven’t stopped before then, I imagine that refuting the idea of refuting old ideas would be a good place to kill the project. I have noticed a slight increase in desire to refute whatever I’m communicating since beginning this series.

Another from Jon Phillips:

Omg, I need to do a post like this or probably better is to kill more bad projects. I have successfully killed many, a skill I learner well from you Mike. Its healthy!

Surely more local value may be obtained by killing bad projects, but consider sharing refutations of previous ideas and projects as akin to publishing negative results: a social responsibility rarely followed through on, such that I’m confident that at this juncture, even weak efforts are worthwhile.

Only two foolish posts from 2004 April:

The other $1 business model refers to $1/track music stores and says that $1 stores are a bigger business than the recorded music industry. Apart from gross use of the term business model (“pricing strategy” would’ve been much better), the point that the recorded music industry is smaller than yet another sector of the economy is hardly insightful. The recorded music industry is high status, near the commanding heights, while dollar stores are at best low status. Even if we were to generously include Wal*Mart in the class of dollar (that is, very low cost) stores, such that the class is an undeniably major part of the economy, all of cheap retail is merely where people go to purchase the products of the recorded music industry, and to listen to piped-in music, courtesy of the recorded music industry.

alias grep='glark' advocates using glark, an enhanced grep (a command line tool for matching and displaying text in files against a match pattern) written in Ruby. But there’s no reason to install yet another slow scripting language that you don’t want (unless you already depend on it, but even then you might not in the future). Instead:

export GREP_OPTIONS='--color=auto --perl-regexp'

Addendum 20120421: Use an alias in place of setting the environment variable:

alias grep='grep --color=auto --perl-regexp'