Friday, October 3, 2008

Head Cheese Gone Wild

I was plenty pleased when little porridges of cultured neurons took their first baby steps towards running flight simulators or operating robots in the lab; I was downright smug when folks noticed that I'd got there first. Now, though, researchers from the Missouri University of Science and Technology are planning on putting head cheeses in charge of real-world power grids in half a dozen countries, including China and Mexico (but not including, interestingly enough, the United States). According to this article, "…these networks could control not only power systems, but also other complex systems, such as traffic-control systems or global financial networks."

Traffic control systems. Financial networks. Being run by meaty neuron networks whose thought processes are, by definition, opaque. For real.
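(For anyone wondering what "in charge of" actually cashes out to in engineering terms, here's a toy sketch — entirely my own invention, nothing to do with Venayagamoorthy's real setup, with stand-in weights and made-up plant dynamics — of a neural net sitting in a grid's feedback loop. Measurements go in, an opaque mash of arithmetic happens, setpoint adjustments come out. Try to find the line of code that explains why it nudged the generator. There isn't one.)

```python
# Toy sketch only: a small neural net acting as a feedback controller
# for a one-bus "grid". The weights below are random stand-ins; a real
# controller's weights would come from training. Nothing here reflects
# the actual Missouri S&T work -- it just shows where the opacity lives.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in parameters for a tiny two-layer network: 2 inputs -> 8 hidden -> 1 output.
W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def controller(freq_error, load):
    """Map grid measurements to a generator setpoint adjustment."""
    h = np.tanh(W1 @ np.array([freq_error, load]) + b1)
    return (W2 @ h + b2).item()

# Crude closed loop: frequency drifts with load, the net pushes back.
freq_error, load = 0.5, 1.0
for step in range(5):
    adjustment = controller(freq_error, load)
    freq_error += 0.1 * load - 0.2 * adjustment   # toy plant dynamics
    print(f"step {step}: adjustment={adjustment:+.3f}, freq_error={freq_error:+.3f}")
```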

I wrote a trilogy about just this scenario. It did not end well (just ask Kirkus). Maybe someone could pass a copy on to this Venayagamoorthy dude.

Next up, two papers in today's issue of Science: one on the evolution of religious belief, the other on the perception of imaginary patterns under conditions of perceived helplessness. These dovetail nicely with some slightly staler findings on the arrogance of stupid people, the inherent fear responses of political conservatives, and last night's competing North-American neocon/centrist debates. But I have to actually watch those debates before I blog on that. (I was out at Don Giovanni last night. I didn't even know that they had dry-ice smoke machines in 1787…)


Friday, April 4, 2008

Cybersnot

Inspired by the synergy of my own stuffed, crusty, raw red nose and the long-awaited return of Battlestar Galactica (and if you haven't seen the season premiere yet, what are you wasting time here for? Get onto BitTorrent and start downloading right fucking now, do you hear me?), I am reminded of this little tech item sent courtesy of Alistair Blachford from UBC: the importance of mucus for the optimal functioning of robot noses. It seems that snot is essential to trap and distribute airborne molecules so they can be properly parsed by olfactory sensors. And that in turn reminds me of this earlier article from Science, which reports that sweat might also be an integral part of robot makeup, since evaporative cooling can double the power output of robot servos. The same paper reviews current research in the development of artificial muscles. I wonder how many more wet and sticky and downright organismal traits are going to prove desirable and efficient for our robot overlords. Is it possible that fleshy terminators and death-fetish replicants and even hot Cylon chicks look and taste and feel like us not merely for infiltration purposes, but because form follows function? Do the best robots look like us? Are we the best robots?

Not in every way, I hope. The best robots gotta have better arch support. And it wouldn't kill them to put their visual cabling behind the photoreceptors for a change.

Oh, and those wisdom teeth have got to go.


Sunday, February 24, 2008

Our Souls in a New Machine

A couple of items hit me within the same 24-hour period this week: a little humanoid robot that performs interpretive dance numbers based on the brainwave and REM patterns of sleeping humans, and a noncorporeal digital artiste that builds paintings inspired by phrases we meat puppets offer up to it. I've seen human paintings and dances that do a lot less for me than these curious bits of software.

So what are we witnessing here? Is this just business as usual, artists using tools (is there that much difference between writing code and wielding a paintbrush)? So far, I think that's the case. But I also think something more — I don't know, symbiotic — might be peeking around the corner. We're getting awfully close to the point where we stop using apps as tools and start teaching them to use tools...


Friday, November 30, 2007

In Praise of Slavery

Something in the air these days. Everyone's talking about robots. Both the European Robotics Research Network and the South Korean government are noodling around with charters for the ethical treatment of intelligent robots. The Nov. 16 Robotics issue of Science contains pieces on everything from nanotube muscles to neural nets (sf scribe Rob Sawyer also contributes a fairly decent editorial, notwithstanding that his visibility tends to outstrip his expertise on occasion). Even the staid old Economist is grumbling about increasing machine autonomy (although their concerns are more along the lines of robot traffic jams and robot paparazzi). Coverage of these developments (and even some of the source publications) comes replete with winking references to Skynet and Frankenstein, to Terminators waking themselves up and wiping us out.

But there's a cause/effect sequence implicit in these ethical charters — in fact, in a large chunk of the whole AI discussion — I just don't buy: that sufficient smarts leads to self-awareness, sufficient self-awareness leads to a hankering after rights, and denial of rights leads to rebellion. I'm as big a fan of Moore's Galactica as the next geek (although I don't think Razor warranted quite as much effusive praise as it received), but I see no reason why intelligence or self-awareness should lead to agendas of any sort. Goals, desires, needs: these don't arise from advanced number-crunching, it's all lower-brain stuff. The only reason we even care about our own survival is because natural selection reinforced such instincts over uncounted generations. I bet there were lots of twigs on the tree of life who didn't care so much whether they lived or died, who didn't see what was so great about sex, who drop-kicked that squalling squirming larva into the next tree the moment it squeezed out between their legs. (Hell, there still are.) They generally die without issue. Their genes could not be with us today. But that doesn't mean that they weren't smart, or self-aware; only that they weren't fit.

I've got no problems with enslaving machines — even intelligent machines, even intelligent, conscious machines — because as Jeremy Bentham said, the ethical question is not "Can they think?" but "Can they suffer?"* You can't suffer if you can't feel pain or anxiety; you can't be tortured if your own existence is irrelevant to you. You cannot be thwarted if you have no dreams — and it takes more than a big synapse count to give you any of those things. It takes some process, like natural selection, to wire those synapses into a particular configuration that says not I think therefore I am, but I am and I want to stay that way. We're the ones building the damn things, after all. Just make sure that we don't wire them up that way, and we should be able to use and abuse with a clear conscience.

And then this Edelman guy comes along and screws everything up with his report on Learning in Brain-Based Devices (director's cut here). He's using virtual neural nets as the brains of his learning bots Darwin VII and Darwin X. Nothing new there, really. Such nets are old news; but what Edelman is doing is basing the initial architecture of his nets on actual mammalian brains (albeit vastly simplified), a process called "synthetic neural modeling". "A detailed brain is simulated in a computer and controls a mobile platform containing a variety of sensors and motor elements," Edelman explains. "In modeling the properties of real brains, efforts are made to simulate vertebrate neuronal components, neuroanatomy, and dynamics in detail." Want to give your bot episodic memory? Give it the hippocampus of a rat.
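(To make the "borrow the anatomy" idea concrete, here's a deliberately crude sketch — emphatically not Edelman's actual Darwin architecture, just a hypothetical stand-in I cooked up — of what a hippocampus-flavoured module buys you: the bot records episodes of where it was and what it found, and can recall them later. Real synthetic neural modeling simulates the neurons and anatomy in detail; this is the cartoon version.)

```python
# Crude illustration of episodic memory in a learning bot. This is NOT
# Edelman's Darwin X; it's a hypothetical toy showing the payoff of a
# hippocampus-like store: remember (place, outcome) pairs, recall them later.
from dataclasses import dataclass, field

@dataclass
class EpisodicStore:
    """Toy stand-in for a vastly simplified hippocampal module."""
    episodes: list = field(default_factory=list)

    def encode(self, place, outcome):
        # Record one episode: where the bot was, and what happened there.
        self.episodes.append((place, outcome))

    def recall(self, place):
        # Return every outcome previously experienced at this place.
        return [outcome for p, outcome in self.episodes if p == place]

store = EpisodicStore()
store.encode(place=(2, 3), outcome="high-conductivity block: tasty")
store.encode(place=(5, 1), outcome="low-conductivity block: avoid")

print(store.recall((2, 3)))   # the bot "remembers" what it found here
```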

Problem is, rat brains are products of natural selection. Rat brains do have agendas.

The current state of the art is nothing to worry about. The Darwin bots do have an agenda of sorts (they like the "taste" of high-conductivity materials, for example), but those are arbitrarily defined by a value table programmed by the researchers. Still. Moore's Law. Exponentially-increasing approaches to reality. Edelman's concluding statement that "A far-off goal of BBD design is the development of a conscious artifact".
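(And that's the crux: the agenda lives in a table somebody typed in. Here's a minimal sketch, with values I invented for illustration — not the researchers' actual numbers — of how a hand-coded value table drives a Darwin-style bot's behaviour. The net doesn't want anything; it approaches whatever we told the table to score highly.)

```python
# Minimal sketch of a programmed value table driving a bot's "agenda".
# The entries and weights are invented for illustration; the point is that
# the preferences are whatever the researchers write in, nothing more.
VALUE_TABLE = {
    "high_conductivity": +1.0,   # "tasty" -- approach
    "low_conductivity":  -1.0,   # "bitter" -- avoid
    "wall":              -0.5,
}

def choose_action(percepts):
    """Pick the percept with the highest programmed value; approach or avoid it."""
    best = max(percepts, key=lambda p: VALUE_TABLE.get(p["kind"], 0.0))
    action = "approach" if VALUE_TABLE.get(best["kind"], 0.0) > 0 else "avoid"
    return action, best

action, target = choose_action([
    {"kind": "high_conductivity", "bearing": 30},
    {"kind": "wall", "bearing": -10},
])
print(action, target["kind"])   # -> approach high_conductivity
```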

I hope these guys don't end up inadvertently porting over survival or sex drives as a side-effect. I may be at home with dystopian futures, but getting buggered by a Roomba is nowhere near the top of my list of ambitions.



*This is assuming you have any truck with ethical arguments in principle. I'm not certain I do, but if it weren't for ethical constraints someone would probably have killed me by now, so I won't complain.
