Perry Metzger’s Undiminished Capacity
Monday, July 26th, 2004
Perry Metzger recently started a blog with a misleading title. If the first few days are any indication, his blogging will be as prolific as his posting to Usenet and mailing lists in legendary times. I recommend every one of his posts, and wish I had that much capacity.
Bill Gates for Broken Windows
Sunday, July 11th, 2004
Slashdot is running a story today headlined Gates: Open Source Kills Jobs, riffing on a Gates speech given in Malaysia. Asia Computer Weekly has this quote from the speech:
If you don’t want to create jobs or intellectual property, then there is a tendency to develop open source. It is not something you do as a day job. If you want to give it away, you work on it at night.
Does Gates have a reasonable point? No. He’s retelling the parable of the broken windows (how apropos!), also known as the broken window fallacy.
In a nutshell, the fallacy says that breaking windows is good for the economy, as it creates the need for replacements, and thus “creates jobs.” This is of course nuts. At the end of the replacement process, we’re worse off by having consumed whatever resources it takes to produce a window, and we can’t use those resources for whatever we would’ve used them for had the window remained intact. Presumably spending resources on windows isn’t our first choice, so we’re also worse off by whatever the “utility” difference is between our first choice and windows.
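The accounting behind the fallacy is simple enough to write out. A toy sketch, with every number invented for illustration:

```python
# Toy opportunity-cost accounting for the broken window.  All numbers are
# invented; the point is that both scenarios spend the same $100 (so the
# same "jobs" are created), yet end with different amounts of wealth.

window_value = 100   # value of having an unbroken window
suit_value = 120     # value of the suit the shopkeeper would buy instead
cost = 100           # price of a replacement window, or of the suit

# Scenario A: the window breaks, so the $100 goes to the glazier.
wealth_broken = window_value - cost               # = 0: merely back to square one

# Scenario B: the window survives, so the $100 goes to the tailor.
wealth_intact = window_value + suit_value - cost  # = 120: window AND suit

print(wealth_broken, wealth_intact)  # 0 120
```

Someone gets paid either way; the difference is the suit that never exists when the window breaks.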
Bill Gates is essentially making the same fallacious argument — if we didn’t have open source software we’d be better off, because we’d have to pay Microsoft to develop equivalents, and they’d hire people. That’s no different from saying we’d be better off with broken windows, because someone would get work creating replacements. If Gates’ fallacious argument is true, let’s destroy open source, and why not all software written in the past ten years. That’ll create a lot of jobs for programmers, right? (Actually, no it won’t.) Windows 3.1 wasn’t that bad. Let’s do it for the jobs!
One reason people sometimes buy the broken window fallacy is that they confuse the purpose of economic activity, which is to fulfill needs, i.e., to create wealth, not to create work. Software is wealth, and open source software is wealth available to anyone, to use, build upon, and learn from. If open source does put some Microsofties out of work, fine, we’d be better off with them doing something else anyway.
The other $1 business model
Friday, April 30th, 2004
Facts about dollar stores #2: The dollar store sector accounts for $16 billion a year, larger than the recorded music industry.
Voluntary Collective Licensing
Wednesday, February 25th, 2004
The EFF has released a white paper outlining a proposed solution to the file sharing wars. It may strike one as compulsory licensing lite, but perhaps that is unfair, as everything in the proposal is voluntary. Still, the system would have to deal with versions of the problems with compulsory licensing (not an exhaustive list).
The one thing that really irritates me about this proposal (and it irritates me every time I hear it, which is often: example) is the mantra that “artists and copyright holders deserve to be fairly compensated.” Yeah, whatever. For some highly variable and contentious definition of “fair”.
Derek Slater provides links here.
Discussion at InfoAnarchy.
Googlebot Prime
Friday, February 6th, 2004
Google co-founder Larry Page gave a talk called “Stanford and Google and the World, Oh My!” today. A few tidbits I didn’t know:
Most of the company goes on a ski trip to Tahoe every year. People ask when that tradition will end, as Google is no longer a small company (Page emphasizes that Google is medium-sized, not large). Page pointed out that the relative cost of sending most employees on the company outing is the same whether you have seven employees and leave one to mind the servers (as was the case on the first trip) or some larger numbers in similar proportions. The company culture doesn’t have to change (the purpose of the talk, followed by an info-session I didn’t stay for, was to recruit Stanford students). Ok, that tidbit was relatively boring, but easy to drop into conversation with most any crowd.
In order to hire more engineers, Google is adding engineering campuses in Manhattan, Switzerland, and India. Why those locations? They’re places where people already working for Google want to move. (Page quipped that India is someplace people want to move back to, unlike most places. I doubt the latter. I recall reportage of the phenomenon of people returning to Hong Kong and Ireland. People will go anywhere opportunity is to be found. If it is “home”, so much the better.) Immigration policies also made a difference: apparently it is easier for a spouse of a sponsored worker to work legally in Switzerland than in most places, and many good people are being kicked out of the U.S., a phenomenon Page decried. I strongly concur. By the way, 1) if there’s a race to the bottom for knowledge workers, Google is crazy for opening new offices in two of the most expensive places in the world, 2) on the way home I heard two Indian women talking about their immigration bureaucracy travails, and 3) Apartheid sucks.
One of Page’s slides was a picture of HAL’s front plate. AI is the goal of every computer scientist, he says, excepting the scared ones. If Google can accurately answer any arbitrary query, you have AI. Google has many AI projects, some of them highly speculative (no details given). Eliezer Yudkowsky has often written that Google is the source of all truth and the like, but now he may be frightened, as I doubt Google engineers buy his friendly AI imperative.
Will Spiritual Robots Replace Humanity By 2100?
Monday, April 3rd, 2000
Note: I posted this to the extropians mailing list in April 2000, and to this blog six years later. Recordings of the symposium are online at TechNetCast.
…
Ken Clements wrote:
> Hofstadter admitted that he had stacked the panel by not asking anyone
> from the anti-technology movement (Bill made up that whole side).
Hofstadter didn’t invite anyone who believes that intelligence requires a biological brain, which is quite different from believing that technology is bad. Joy seems to believe some technology is bad, but he doesn’t seem to fall into the “intelligence requires biology” camp. (Offtopic aside: Searle sounds like a very reasonable classical liberal in a recent interview with Reason magazine. Just more proof, not that any was needed, that even reasonable people often take dumb arguments seriously.)
There were really two debates going on (though the atmosphere wasn’t contentious at all): rapid vs. slow development/evolution of human-level or greater machine “intelligence” (in quotes because what this means is nebulous and wasn’t discussed) (primarily Kurzweil and Moravec vs. Holland and Koza, respectively) and “we must relinquish dangerous technology now or face catastrophe” (Joy, with support from Koza, vs. Merkle and Moravec, with support from Kurzweil).
Kurzweil and Moravec’s initial talks were quite boring, though their contributions to the discussion and Q&A periods were the most insightful of the group. After droning about exponential and double exponential increases in computational speed, Kurzweil did sneak in one gem: he indicated that Moore’s law, or something like it, also applies to software, of course very much contrary to most people’s intuitions. I was very eager to hear a rationale for this claim. Unfortunately when Holland asked about it at one point, Kurzweil only mentioned better development tools.
Joy seemed quite proud (in a very serious way) that the media is paying attention to him and that he is well read (or at least can scour books for emotional quotes supporting his argument, or at least pay someone to do so). His argument basically boiled down to this: supervirulent pathogens will be easily engineered and/or produced in crazy and/or sick people’s basements, and if only a few of the millions of certifiably crazy, evil people in the world do this, we’re all doomed. We must not allow the democratization of KMDs (Knowledge of Mass Destruction?). Oh yeah, and remember how bad the plague was in Greek times or the middle ages? Why, they catapulted plague-infected bodies over city walls, and people died horrible deaths and doctors couldn’t help at all. Clearly we have not evolved to the point where people can be trusted with knowledge of biology sufficient to engineer pathogens. And oh yeah, there are a bunch of famous people and books that agree with Joy, and he can quote them all (I think Einstein was probably most quoted).
Joy’s solution is “relinquishment”, though he didn’t really give any details of what this would involve; he seems to think that arms control treaties and subsequent verification protocols point in the right direction. He also mentioned, once, strict corporate liability as a deterrent to corporations developing dangerous technologies. I got a tiny chuckle out of that, as strict liability is one of those libertarian catchall answers.
I believe Joy said that he thinks there is a 30-50% chance of human extinction (presumably with no posthuman successor), not including all the other horrible outcomes that are likely. I didn’t get the impression from the other panelists (I should have asked that question directly), not to mention from reading this mailing list, that anyone considers human extinction out of the question. I’d say that many of his concerns are valid, though his scaremonger/authoritarian approach seems contrived to create fame for himself.
If Joy was “wrong” and annoying, Merkle was “right” and extremely annoying. I felt that Merkle came across as a (highly intelligent) pompous ass with a really bad sense of humor. He didn’t even attempt to address Joy’s points, not counting wisecracks (“Would those nanomachines be using the broadcast architecture, or some other architecture?” Ok, you had to be there. I cringed.) I got the same impression of Merkle when I saw him on stage with Michio Kaku at a “Next 20 Years” event. My tentative evaluation: brilliant researcher, rotten public spokesperson.
I hadn’t heard of the broadcast architecture before (I don’t attempt to keep current with nanotech research, though hardly anyone in the audience raised their hands when Merkle asked if anyone had heard of it, and I suspect many of them were imagining some networking or distributed computing architecture, as I was when I considered half-raising my hand). The idea seems to be that nanobots would somehow be broadcast instructions, eliminating the need for them to act completely independently (an analogy with DNA was made — these broadcast architecture nanobots wouldn’t carry around a full complement of DNA) and making them much cheaper and more controllable. The last point was held forth as a promising means of preventing a runaway self-replicator catastrophe.
My intuition (and that’s all I have on this point) doesn’t find this one-sentence version of the broadcast architecture very compelling in terms of cost or danger. Embedding instructions in a nanobot seems really cheap, considering the capacity of nanotech storage. Would an embedded communications device be cheaper? Well, it may be in one sense at least: it would be much easier to program nanobots to do some very limited function and await instructions than it would be to program nanobots to do generalized tasks and to handle general contingencies. But then it would be even simpler (not to mention safer) to program nanobots to do one task, then “die” after doing that task a desired number of times. On controllability, it seems that if nanobots can be broadcast instructions, then they, having security bugs, can be broadcast bad instructions.
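The “do the task a desired number of times, then die” alternative is self-limiting in a way that’s easy to see: if each bot carries a countdown and every copy inherits a decremented count, the population is hard-bounded with or without any broadcast channel. A toy model (mine, not anything presented at the symposium):

```python
# Toy model of a replicator with a built-in replication budget.  Each
# generation, every live bot copies itself once, and the countdown carried
# by every copy drops by one, so after `budget` doublings everything halts.
# This is my illustration, not a design discussed at the symposium.

def final_population(budget, start=1):
    """Population once the shared replication budget is exhausted."""
    bots = start
    for _ in range(budget):
        bots *= 2          # one round of self-copying
    return bots            # bounded at start * 2**budget, forever

print(final_population(10))  # 1024 bots, then no further replication
```

An uncounted replicator, by contrast, has no such bound; the loop simply never terminates.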
John Holland’s comments were all very brief and generally well spoken. He was highly skeptical of Kurzweil and Moravec’s predictions. Holland said that we have a very slight understanding of intelligence, and without much better theory we won’t get very far. He drew an analogy between machine intelligence and fusion power — he believes that we haven’t gotten very far in five decades with the latter because we don’t have sufficiently good theory, despite spending billions trying to make it work, and despite fusion power potentially being a really good thing.
Throughout the afternoon there were several comments that alluded to the need for better theory, or at least different approaches, in order to make breakthroughs. Or, as Jeff Davis’ Ray Charles signature quote says, “Everything’s hard till you know how to do it.” Kurzweil and Moravec were asked whether, if 100 years in the future we knew how to create machine intelligence, we couldn’t run such an intelligence on today’s computers (this followed someone mentioning a tinkertoy computer (but it doesn’t run Linux!)). Both seemed to indicate that today’s computers simply don’t have the storage or horsepower needed. I can understand storage, but given an intelligent program and glacially slow hardware, why can’t it just be really slow?
Another comment in this vein from the audience mentioned that someone (at Sandia?) had created a robot that could walk with only twelve transistors, involving an analog feedback system, whereas it has been extremely hard to get many-MIPS digital-brained computers to walk. Moravec seemed to say that because analog requires some bulk technology, digital nanocomputers would probably be more cost effective even if they must be really complex. Well, yeah, but we don’t have nanocomputers yet. There’s lots of cool stuff remaining to be done with old technology, and I bet it will sometimes be much more cost effective from a development perspective.
Kevin Kelly’s answer to the symposium’s title “Will spiritual robots replace humanity by 2100” was “NO WAY”. His argument, to the extent I caught it (I kind of zoned out for a while due to extreme thirst), was that machine intelligence will fill lots of specialized niches, some of them niches previously filled by humans, but no machine will completely replace humans. He used a calculator as a primitive example — it’s much better than any human at arithmetic, but not good for much else. I’m not making the point as eloquently as he did. Perhaps it was the graph with lots of little dots on it, all representing little niches for intelligent entities. At best, he seemed to say, intelligent machines will free humans from having to work.
I also remember Kelly being the first to mention that communicating with intelligent machines of our creation could be a very spiritual thing, much like communicating with “ET” would be. Kurzweil made a similar point several times.
Frank Drake came off as a mildly boring, mildly crackpot case. We’ll judge the aliens’ intelligence by the size of their radio telescopes, har, har, har.
John Koza said that in numerous attempts to have a genetic program learn to model some tiny aspect of human intelligence or perception, perhaps equivalent to one second of brain activity (I know this doesn’t really make sense; I’m fuzzy on the details and don’t recall any of the specific cases), he found he required 10^15 operations (requiring months on standard PCs). So, a “brain second” is 10^15 operations, and this huge number obviously poses a huge barrier to machine intelligence. Or something like that. I’ll have to watch the webcast when it is available; it seemed like an interesting point.
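Koza’s “months on standard PCs” is at least roughly consistent with the 10^15 figure; a back-of-envelope check (the ops-per-second rate is my assumption for a circa-2000 desktop, not a number from the talk):

```python
# Sanity check on Koza's 10^15-operations "brain second" taking months.
# The ops-per-second rate is my guess for a circa-2000 desktop PC.

ops_needed = 10**15
ops_per_sec = 10**8          # a few hundred MHz, doing useful work

seconds = ops_needed / ops_per_sec
days = seconds / 86400
print(round(days))  # 116 days, i.e. "months on standard PCs"
```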
Even while listening, I was confused concerning Koza’s argument vis-a-vis the hardness of machine intelligence. It seems (as Kurzweil later pointed out concerning his speech recognition software) that once a genetic program “learns” a desired behavior, it can be copied infinitely, so the operations required to get to a certain level of functioning are mostly irrelevant.
There was lots of good stuff in the discussion and Q&A sessions, but it’s mostly a blur to me. I’ll mention three things I remember:
Kurzweil said that he was using genetic programming to simulate stock traders (presumably using historical data?). Successful trader programs get to recombine with other successful trader programs. He didn’t mention whether they were making real trades and if so, how successfully. I’m sure lots of people are doing similar research, given the potential payoffs.
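Kurzweil gave no details, but the evolve-and-recombine loop he described might look something like the sketch below. Nothing here is his actual system: the threshold-rule representation, the synthetic price series, and the fitness function are all my inventions for illustration.

```python
import random

# Generic sketch of "simulate traders, let the successful ones recombine."
# Strategy = (buy_at, sell_at): buy below buy_at, sell above sell_at.
random.seed(0)
prices = [random.gauss(100, 5) for _ in range(250)]  # stand-in for historical data

def fitness(strategy):
    """Final portfolio value after trading the price series, starting with $1000."""
    buy_at, sell_at = strategy
    cash, shares = 1000.0, 0
    for p in prices:
        if shares == 0 and p < buy_at:
            shares = int(cash // p)
            cash -= shares * p
        elif shares > 0 and p > sell_at:
            cash += shares * p
            shares = 0
    return cash + shares * prices[-1]

def recombine(a, b):
    # crossover of the two parents' parameters, plus a small mutation
    return [random.choice(pair) + random.gauss(0, 1) for pair in zip(a, b)]

population = [[random.uniform(90, 100), random.uniform(100, 110)]
              for _ in range(30)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                 # the "successful traders"
    population = survivors + [recombine(random.choice(survivors),
                                        random.choice(survivors))
                              for _ in range(20)]

best = max(population, key=fitness)
```

Against real historical data the obvious hazard is overfitting: the winners are the strategies best adapted to that particular price series, not necessarily to the future.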
A few people mentioned consciousness being a pattern that presumably could be mapped to any substrate. An example, given by either Kurzweil or Moravec, was that of a pattern in a river — the water molecules constantly change, but the pattern may remain for long periods of time. Moravec went even further, saying that perhaps consciousness is an interpretation of a pattern, so if you know what you’re looking for, you could perhaps find conscious patterns, say in rocks, to pick a cliche. Sure, this is run-of-the-mill daydreaming for extropians, but somehow it’s pleasant to hear it in public.
In response to an audience question about spirituality, Joy said that he had read a book (of course!) by E.O. Wilson in which Wilson had hinted at explaining all beliefs, including spiritual beliefs, in physical terms. Joy said, roughly paraphrased, “the game’s changed, they [religious people] just haven’t been told yet.” See, he has some sense! Yeah, he wrote vi too.
After the event let out, I wandered around a bit and lay down under the pleasant sun in the deserted engineering quad. The cirrus clouds above were beautiful and the temperature perfect. The experience was giddy. I rededicated myself to experiencing the wonder of life, even as a mere human, and eagerly look forward to attaining ever giddier heights, perhaps with some technological assistance in the future.
Later I wandered around Palo Alto while waiting for the next Caltrain. I hadn’t been there in a few years. On a Saturday night, it’s like fairyland. Healthy and obviously wealthy people literally spilling out of every immaculate restaurant. Someone went out of their way to pick up a pen I dropped in the bustle. Even the sole homeless man seemed to be doing pretty well. Reminded me of Santa Barbara, except that Stanford is where the ocean would be, and the workers aren’t mostly Mexican. Amazing what extraordinary wealth can do. Don’t imagine too many happy faces there today (NASDAQ selloff).