Something in the air these days. Everyone's talking about robots. Both the European Robotics Research Network and the South Korean government are noodling around with charters for the ethical treatment of intelligent robots. The Nov. 16 Robotics issue of Science contains pieces on everything from nanotube muscles to neural nets (sf scribe Rob Sawyer also contributes a fairly decent editorial, notwithstanding that his visibility tends to outstrip his expertise on occasion). Even the staid old Economist is grumbling about increasing machine autonomy (although their concerns are more along the lines of robot traffic jams and robot paparazzi).
Coverage of these developments (and even some of the source publications) comes replete with winking references to Skynet and Frankenstein, to Terminators waking themselves up and wiping us out. But there's a cause/effect sequence implicit in these ethical charters — in fact, in a large chunk of the whole AI discussion — that I just don't buy: that sufficient smarts leads to self-awareness, sufficient self-awareness leads to a hankering after rights, and denial of rights leads to rebellion.
I'm as big a fan of Moore's Galactica as the next geek (although I don't think Razor warranted quite as much effusive praise as it received), but I see no reason why intelligence or self-awareness should lead to agendas of any sort. Goals, desires, needs: these don't arise from advanced number-crunching; it's all lower-brain stuff. The only reason we even care about our own survival is that natural selection reinforced such instincts over uncounted generations. I bet there were lots of twigs on the tree of life who didn't care so much whether they lived or died, who didn't see what was so great about sex, who drop-kicked that squalling, squirming larva into the next tree the moment it squeezed out between their legs. (Hell, there still are.) They generally die without issue. Their genes could not be with us today. But that doesn't mean that they weren't smart, or self-aware; only that they weren't fit.
I've got no problems with enslaving machines — even intelligent machines, even intelligent, conscious machines — because as Jeremy Bentham said, the ethical question is not "Can they think?" but "Can they suffer?"* You can't suffer if you can't feel pain or anxiety; you can't be tortured if your own existence is irrelevant to you. You cannot be thwarted if you have no dreams — and it takes more than a big synapse count to give you any of those things. It takes some process, like natural selection, to wire those synapses into a particular configuration that says not I think therefore I am, but I am and I want to stay that way. We're the ones building the damn things, after all. Just make sure that we don't wire them up that way, and we should be able to use and abuse with a clear conscience.
And then this Edelman guy comes along and screws everything up with his report on Learning in Brain-Based Devices (director's cut here). He's using virtual neural nets as the brains of his learning bots Darwin VII and Darwin X. Nothing new there, really. Such nets are old news; but what Edelman is doing is basing the initial architecture of his nets on actual mammalian brains (albeit vastly simplified), a process called "synthetic neural modeling". "A detailed brain is simulated in a computer and controls a mobile platform containing a variety of sensors and motor elements," Edelman explains. "In modeling the properties of real brains, efforts are made to simulate vertebrate neuronal components, neuroanatomy, and dynamics in detail." Want to give your bot episodic memory? Give it the hippocampus of a rat.
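A cartoon version, just to fix the idea of what that loop looks like — and I do mean cartoon: every region name, size, and connection below is something I made up for illustration, not anything lifted from Edelman's actual Darwin models.

```python
import numpy as np

# Toy sketch of a "brain-based device" control loop: a simulated network of
# named "regions" (stand-ins for neuroanatomical areas) maps sensor readings
# to motor commands. All names, sizes, and wiring here are invented.

rng = np.random.default_rng(0)

# Hypothetical region sizes: a "visual" area, a "taste" area, a crude
# hippocampus-flavoured memory area, and a motor area.
REGIONS = {"visual": 32, "taste": 8, "memory": 16, "motor": 4}

# Fixed random projections between regions stand in for the neuroanatomy;
# a real model would specify this connectivity from actual brain data.
W_vis_mem = rng.normal(scale=0.1, size=(REGIONS["memory"], REGIONS["visual"]))
W_taste_mem = rng.normal(scale=0.1, size=(REGIONS["memory"], REGIONS["taste"]))
W_mem_motor = rng.normal(scale=0.1, size=(REGIONS["motor"], REGIONS["memory"]))

def step(visual_input, taste_input, memory_state):
    """One control-loop tick: sensors -> simulated regions -> motor command."""
    drive = W_vis_mem @ visual_input + W_taste_mem @ taste_input
    # A leaky recurrent update gives the memory region some persistence --
    # a cartoon of episodic memory, not a hippocampus model.
    memory_state = np.tanh(0.9 * memory_state + drive)
    motor_command = np.tanh(W_mem_motor @ memory_state)
    return motor_command, memory_state

# Drive the loop with random "camera" and "conductivity sensor" readings.
memory = np.zeros(REGIONS["memory"])
for t in range(5):
    camera = rng.random(REGIONS["visual"])
    conductivity = rng.random(REGIONS["taste"])
    motor, memory = step(camera, conductivity, memory)
    print(t, np.round(motor, 3))
```

The point of the real thing is that the wiring diagram comes from vertebrate neuroanatomy rather than from a random number generator; the loop itself is the same sense-think-move cycle.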
Problem is, rat brains are products of natural selection. Rat brains do have agendas.
The current state of the art is nothing to worry about. The Darwin bots do have an agenda of sorts (they like the "taste" of high-conductivity materials, for example), but those preferences are arbitrarily defined by a value table programmed by the researchers. Still. Moore's Law. Exponentially increasing approaches to reality. Edelman's concluding statement that "A far-off goal of BBD design is the development of a conscious artifact".
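For the record, that kind of value table is about as sinister as a config file. Something like this toy sketch, say (the entries are invented for illustration, not Edelman's actual values):

```python
# Researcher-defined value table: the bot's "agenda" is whatever the table
# says it is. Nothing here is learned or evolved; change the numbers and
# the "preferences" change with them.
VALUE_TABLE = {
    "high_conductivity": +1.0,   # "tastes good"
    "low_conductivity":  -1.0,   # "tastes bad"
    "striped_blob":      +0.5,   # arbitrary visual preference
}

def value_of(percept: str) -> float:
    """Reward signal used to bias learning; defaults to indifference."""
    return VALUE_TABLE.get(percept, 0.0)

print(value_of("high_conductivity"))  # 1.0
print(value_of("wall"))               # 0.0: no drive unless we put one there
```

The agenda lives entirely in those hard-coded numbers; nothing selected for them, and nothing in the bot fights to preserve them.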
I hope these guys don't end up inadvertently porting over survival or sex drives as a side-effect. I may be at home with dystopian futures, but getting buggered by a Roomba is nowhere near the top of my list of ambitions.
*This is assuming you have any truck with ethical arguments in principle. I'm not certain I do, but if it weren't for ethical constraints someone would probably have killed me by now, so I won't complain.
Labels: AI/robotics