Friday, September 19, 2008

Avast! Here Be a Blindsightinator for Ye!

Aye me hearties, be ye rememberin' that time in Blindsight when Rorschach, she be putting the sun in scurvy Szpindel's eyes?

"Argh, I be seein' naught," Szpindel be sayin', his timbers a'shiver.

"It be the EM fields," James be barking. "That be how they signal. The briney deep, she be fulla words, she be—"

"I be seeing naught," Szpindel be saying. "I be blind as the skipper with his patch on the wrong eye!"

"Yar," Bates be lassooing the capstain. "That be a pretty mess— blast those scurvy rads…"

And then when they be hiding below decks, Szpindel be putting words to it…

"Ya reached for it, ya scurvy dog. You near be catchin' it. That not be blind chance."

"Argh, not blind chance. Blindsight. Amanda? Where be ye, wench?"


"Aye. Nothing be wrong with ye receptors," he be saying. "Eye be working right enough, brain not be seein' the signal. Brain stem, he be mutineer. Arrgh."

Now those buggering cabin-boys from Denmark, they be laying claim to me booty. They be putting out "Action-blindsight in two-legged landlubbers that be having compasses on their skulls, Arggh", and they be staking their claim last winter in the Proceedings of the National Academy of Sciences.

They be asking me to be hanging their guts from the crowsnest, they e'er be blackening my horizon.


Monday, April 14, 2008

Living in the Past.

Most of you here have read Blindsight. Some of you have made it almost to the end. A few have even got as far as the references (I know this, because some of you have asked me questions about them). And so you might remember that old study Libet did back in the eighties, in which it was shown that the body begins to act on a decision a full half-second before the conscious self is aware of having made the decision. A lot of Blindsight's punchline hung on this discovery— because obviously, whatever calls an action into being must precede it. Cause and effect. Hence, the johnny-come-lately sense of conscious volition is bogus. We are not in control. I mean, really: a whole half a second.

Half a second? Chun Siong Soon and his buddies piss on Libet's half a second. Nature Neuroscience just released a study that puts Libet's puny electrodes to shame; turns out the brain is making its decisions up to ten full seconds (typically around seven) before the conscious self "decides" to act.

Ten whole seconds. That's longer than the attention span of a sitting president.

It all comes down to stats. Soon et al took real-time fMRI recordings of subjects before, during, and after a conscious "decision" was made; then they went back and looked for patterns of brain activity prior to that "decision" that correlated with the action that ultimately occurred. What they found was a replicable pattern of brain activity that not only preceded the decision by several seconds, but which also correlated with the specific "decision" made (click a button with the right or the left hand). (Interestingly, these results differ from Libet's insofar as subjects reported awareness of their "decision" prior to the activation of the motor nerves, not afterwards. Whereas Libet's results suggested that action precedes conscious "decision"-making by a very brief interval, Soon et al's suggest that actual decision-making precedes conscious "decision"-making by a much longer one. Bottom line is the same in each case, though: what we perceive as "our" choice has already been made before we're even aware of the options.)

This isn't exactly mind reading. Soon and his buds didn't find a circuit that explicitly controls button-pressing behavior or anything. All they found was certain gross patterns of activity which correlated with future behavior. But we could not read that information if the information wasn't there; in a very real sense, your brain must know what it's going to do long before you do.
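The decoding logic is easy to see in toy form. Here's a minimal sketch (synthetic "voxel" data and a nearest-centroid decoder; nothing below is Soon et al.'s actual fMRI pipeline or their classifier) of how a pattern that merely correlates with the upcoming choice can be read out above chance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for pre-decision fMRI patterns: 50 "voxels" per trial.
# Left- and right-hand trials get slightly different mean activity, buried in noise.
n_trials, n_voxels = 200, 50
labels = rng.integers(0, 2, n_trials)            # 0 = left, 1 = right
signal = np.where(labels[:, None] == 0, -0.5, 0.5) * rng.random(n_voxels)
patterns = signal + rng.normal(0, 2.0, (n_trials, n_voxels))

# Nearest-centroid "decoder": learn the mean pattern for each hand...
train, test = slice(0, 150), slice(150, None)
centroids = [patterns[train][labels[train] == k].mean(axis=0) for k in (0, 1)]

# ...then classify held-out trials by whichever centroid is closer.
dists = np.array([[np.linalg.norm(p - c) for c in centroids] for p in patterns[test]])
accuracy = (dists.argmin(axis=1) == labels[test]).mean()
print(f"decoding accuracy on held-out trials: {accuracy:.0%}")
```

The point mirrors the paragraph above: the decoder has no idea what any voxel does; it just exploits the fact that the information is already there.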

Obviously this can't be the whole story. If the lag between processing and perception were always that long, we would feel no sense of personal agency at all. It's one thing to think that you told your muscles to leap from the path of an approaching bus when the time discrepancy is a measly 400 millisecs; but not even organisms with our superlative denial skills could pretend that we were in control if our bodies had leapt clear ten seconds before it even occurred to us to move. So I would think this is more proof-of-principle than day-in-the-life. Still. As IO9 points out, given these results, how long before we can do without that stupid conscious part of us entirely?

Wired's online coverage is a bit more defensive. They bend over backwards to leave open some possibility of free will, invoking the hoary old "maybe free will acts as a veto that lets us stop the unconscious decision." But that's bogus, that's recursive: if consciousness only occurs in the wake of subconscious processing (and how could it be otherwise? How can we think anything before the thinking neurons have fired?), then the conscious veto will have the same kind of nonconscious precursors as the original intent. And since that information would be available sooner at the nonconscious level, it once again makes more sense to leave the pointy-haired boss out of the loop entirely.

But I'm going to take a step back and say that everyone here is missing the point. Neither this study nor Libet's really addressed the question of free will at all. Neither study asked whether the decision-making process was free; they merely explored where it was located. And in both cases, the answer is: in the brain. But the brain is not you: the brain is merely where you live. And you, oh conscious one, don't make those decisions any more than a kidney fluke filters blood.

(Oh, and I've figured out who the Final Cylon is. For real this time. Romo Lambkin's cat.)


Friday, April 4, 2008


Inspired by the synergy of my own stuffed, crusty, raw red nose and the long-awaited return of Battlestar Galactica (and if you haven't seen the season premiere yet, what are you wasting time here for? Get onto BitTorrent and start downloading right fucking now, do you hear me?), I am reminded of this little tech item sent courtesy of Alistair Blachford from UBC: the importance of mucus for the optimal functioning of robot noses. It seems that snot is essential to trap and distribute airborne molecules so they can be properly parsed by olfactory sensors. And that in turn reminds me of this earlier article from Science, which reports that sweat might also be an integral part of robot makeup, since evaporative cooling can double the power output of robot servos. The same paper reviews current research in the development of artificial muscles. I wonder how many more wet and sticky and downright organismal traits are going to prove desirable and efficient for our robot overlords. Is it possible that fleshy terminators and death-fetish replicants and even hot Cylon chicks look and taste and feel like us not merely for infiltration purposes, but because form follows function? Do the best robots look like us? Are we the best robots?

Not in every way, I hope. The best robots gotta have better arch support. And it wouldn't kill them to put their visual cabling behind the photoreceptors for a change.

Oh, and those wisdom teeth have got to go.


Thursday, March 27, 2008

Your Brain is Leaking

This punch-happy little dude has been all over the net for the past week or so: easily the world's coolest crustacean even before then, insofar as how many lifeforms of any stripe can bash their furious little claws through the water so fast (accelerating at over 10,000G!) that the resulting cavitation bubbles heat up to several thousand degrees K? If their ferocious little chelipeds don't take you out, the shockwave alone will shatter you (well, if you're a piece of mantis-shrimp prey, at least).
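The numbers in that sentence imply a ludicrous strike. A back-of-the-envelope check (the 10,000 g figure is from the post; the ~23 m/s peak appendage speed is an assumption pulled from published strike measurements, not from the post):

```python
# Back-of-the-envelope numbers for the mantis shrimp strike described above.
g = 9.81                     # m/s^2
accel = 10_000 * g           # strike acceleration in m/s^2
peak_speed = 23.0            # m/s (assumed peak dactyl speed)

time_to_peak = peak_speed / accel          # s, under constant acceleration
distance = peak_speed**2 / (2 * accel)     # m, from v^2 = 2*a*d

print(f"acceleration: {accel:.0f} m/s^2")
print(f"time to reach {peak_speed} m/s: {time_to_peak * 1000:.2f} ms")
print(f"distance covered getting there: {distance * 1000:.1f} mm")
```

Under those assumptions the appendage hits full speed in a fraction of a millisecond, over a few millimetres of travel; fast enough, as the post says, to boil the water behind it.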

The reason for their recent fame, though, is this paper in Current Biology, reporting that — alone of all the known species on the planet — these guys can see circularly polarised light. And that's just the latest trick of many. These guys see ultraviolet. They see infrared. They can distinguish ten times as many visible-light colors as we can (still only 100,000 — which you'd think would at least shut up those Saganesque idiots from Future Shop who keep blathering about the millions and millions of colors their monitors can supposedly reproduce). Each individual eye has independent trinocular vision. Mantis shrimp eyes are way more sophisticated than any arthropod eye has any right to be.
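The arithmetic behind that Future Shop jab is worth a line or two: a 24-bit display can address nearly 17 million RGB triplets, but addressable values are not discriminable colors. Taking the post's own numbers (shrimp distinguish ~100,000, ten times what we do):

```python
# Addressable monitor colors vs. behaviorally discriminable ones,
# using the figures quoted in the post above.
addressable = 2 ** 24                  # 8 bits per channel, 3 channels
shrimp_discriminable = 100_000         # per the post
human_discriminable = shrimp_discriminable // 10   # the post: shrimp see ~10x what we do

print(f"24-bit RGB triplets:          {addressable:,}")   # 16,777,216
print(f"shrimp-discriminable colors:  {shrimp_discriminable:,}")
print(f"triplets per discriminable human color: {addressable // human_discriminable:,}")
```

By that count, a monitor serves up over a thousand distinct bit-patterns for every color a human could actually tell apart.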

But what really caught my attention was a line in this Wired article (thanks to Enoch Cheng for the pointer):
"One idea is that the more complicated your sensory structure is, the simpler your brain can be... If you can deal with analysis at the receptor level, you don't have to deal with that in the brain itself."
Which is almost as cool as it is wrong. Cool because it evokes the image of alien creatures with simple or nonexistent brains which nonetheless act intelligently (yes, I'm thinking scramblers), and because these little crustaceans aren't even unique in that regard. Octopi are no slouches in the smarts department either — they're problem solvers and notorious grudge-holders — and yet half of their nervous systems are given over to manual dexterity. Octopi have individual control over each sucker of each tentacle. They can pass a pebble, sucker-to-sucker, from arm-tip to arm-tip. Yet their brains, while large by invertebrate standards, are still pretty small. How much octopus intelligence is embedded in the arms?

So yes, a cool thought. But wrong, I think: because what is all that processing circuitry in the mantis shrimp's eyes if not part of the brain itself? Our own retinas are nothing more than bits of brain that leaked across the back of the eyeball— and if the pattern-matching that takes place in our visual cortices happens further downstream in another species, well, it's still all part of the same computer, right? The only difference is that the modules are bundled differently.

But then this artsy friend points out the obvious analogy with motherboards and buses, and how integrating two components improves efficiency because you've reduced the signal transit time. Which makes me think about the "functional clusters" supposedly so intrinsic to our own conscious experience, and the possibility that the isolation of various brain modules might be in some way responsible for the hyperperformance of savants1.

So pull the modules apart, the cables between stretching like taffy — how much distance before you're not dealing with one brain any more, but two? Those old split-brain experiments, the alien-hand stuff — that was the extreme, that was total disconnection. But are we talking about a gradient or a step function here? How much latency does it take to turn me into we, and is there anything mushy in between?
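Some rough numbers on how fast that latency grows as the taffy stretches (the conduction velocities below are textbook ballpark figures, not from the post):

```python
# How quickly does inter-module latency grow as you pull the modules apart?
velocities = {
    "unmyelinated axon": 1.0,     # m/s
    "myelinated axon": 60.0,      # m/s
    "copper interconnect": 2e8,   # m/s, a sizable fraction of c
}
distances_m = [0.1, 1.0, 10.0]    # within one skull; across a desk; down the hall

for name, v in velocities.items():
    latencies = ", ".join(f"{d / v * 1e3:.3f} ms" for d in distances_m)
    print(f"{name:20s}: {latencies}")
```

At myelinated-axon speeds a metre of separation already costs roughly 17 ms, the same order as the tens-of-milliseconds windows usually invoked for neural integration; which is one concrete sense in which sheer distance could start turning me into we.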

Are stomatopod eyes conscious, in some sense? Is my stomach?

1 I would have put a link to the relevant article here, but the incompetent code over at The Economist's website keeps refusing to open up its online back-issue pdfs until I sign in, even though I already have. Three times now. Anyway, the reference is: Anonymous, 2004. Autism: making the connection. The Economist, 372(8387): 66.


Thursday, March 6, 2008

Is this theory of yours accepted by any respectable authorities?

The long-awaited new Neuropsychologia's finally on the stands, and it's a theme issue on — wait for it — consciousness! Lots of articles on blindsight, interhemispheric signaling, anosognosia, all that cool stuff. And nestled in the heart of this month's episode is a paper by David Rosenthal entitled "Consciousness and its function".

Guess what. He doesn't think it has any.

From the abstract:
"...a number of suggestions are current about how the consciousness of those states may be useful ... I examine these and related proposals in the light of various empirical findings and theoretical considerations and conclude that the consciousness of cognitive and desiderative states is unlikely to be useful in these or related ways. This undermines a reliance on evolutionary selection pressures in explaining why such states so often occur consciously in humans."
Rosenthal's conclusion? Consciousness is just a side-effect, with no real adaptive value. And no, he didn't cite Blindsight. But we all know I went there first.

Somewhere else I went, back in 1991, has been making a few online waves over the past week or two: this brief Science article by Christner et al, suggesting that microbes play a major and hitherto-unsuspected role in shaping the world's weather. As Jeremy Ruhland pointed out a few days back, this is a wee bit reminiscent of a story I wrote in the early nineties — a post-environmental-apocalypse number in which vast colonies of cloud-dwelling weathermongering microbes had conspired to kick our asses. For a few years now I've been showing this slide whenever I want to make the point that sometimes you can hit the bullseye even when you have no fucking clue what you're talking about...

... because really, "Nimbus" was a spontaneous, unresearched brain fart based entirely on an old girlfriend's observation that "Ooh, look at those clouds... they almost look alive!" But CNN is not exactly the most prestigious source of scientific intel on the planet, and besides, Moffet was just starting to look back in 2002; he hadn't actually found anything. That was then; this is now. You can't get more prestigious than Science (well, unless you're Nature), and now we're looking at actual data.

Of course, this is nowhere near the cozy conjunction of Watts and Rosenthal. Christner et al. didn't even look at clouds per se, only at the precipitation that had dropped out of them. And it's not like they discovered any new and alien microbes; mostly they came up with plant pathogens. (Also, my microbe-infested clouds had a kind of slow intelligence to them — and if we ever get any evidence supporting that conceit I'll eat my cats.) But what they did show was that microbes affect the weather— and at the very least, that leaves the door open for all sorts of evil, powerful, yet-to-be-discovered bugs lurking overhead.

I like that thought.


Tuesday, October 9, 2007

The View From The Left

This is an ancient review article — about ten years old, judging by the references — but it contains an intriguing insight from split-brain research that I hadn't encountered before: The right hemisphere remembers stuff with a minimum of elaboration, pretty much as it happens. The left hemisphere makes shit up. Mr. Right just parses things relatively agenda-free, while the left hemisphere tries to force things into context.

The left hemisphere, according to Gazzaniga, looks for patterns. Ol' Lefty's on a quest for meaning.

I learned back in undergrad days that our brains see patterns even where none exist; we're pattern-matching machines, is what we are. But I hadn't realized that such functions were lateralized. This hemispheric specialization strikes me as a little reminiscent of "gene duplication": that process by which genetic replication goes occasionally off the rails and serves up two (or more) copies of a gene where only one had existed before. Which is very useful, because evolution can now play around with one of those copies to its heart's content, and as long as the other retains its original function you don't have to worry about screwing up a vital piece of a working system. (This is something the creationists hope you never learn, since it single-handedly blows their whole the-mousetrap-can't-work-unless-all-the-parts-evolve-simultaneously argument right out of the water.) Analogously, I see one hemisphere experimenting with different functions — imagination, the search for meaning— while the other retains the basic just-the-facts-ma'am approach that traditionally served the organism so well.
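The duplication argument runs nicely as a cartoon simulation. The assumptions are loud: sequences are just random strings, "purifying selection" simply holds the working copy fixed, and the spare copy accumulates changes neutrally. None of this models real population genetics; it only illustrates the logic.

```python
import random

random.seed(1)

# Toy version of the gene-duplication argument above: after duplication,
# copy A still has to do the original job (selection rejects its mutations,
# so we hold it fixed), while copy B is free to wander.
BASES = "ACGT"
original = "".join(random.choice(BASES) for _ in range(60))

def mutate(seq):
    # change one site to a different base
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(BASES.replace(seq[i], "")) + seq[i + 1:]

copy_b = original
for _ in range(500):
    if random.random() < 0.2:    # a mutation hits copy B and, with copy A
        copy_b = mutate(copy_b)  # still covering the job, it sticks

diverged = sum(a != b for a, b in zip(original, copy_b))
print(f"copy A (under selection) still matches the original; "
      f"copy B differs at {diverged} of 60 sites")
```

Same punchline as the paragraph above: the spare copy explores sequence space for free, because nothing vital breaks while it does.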

Anyway, for whatever reason, we've got a pragmatist hemisphere, and a philosopher hemisphere. Lefty, who imposes patterns even on noise, unsurprisingly turns out to be the source of most false memories. But pattern-matching, the integration of scattered data into cohesive working models of The Way Things Are — that's almost another word for science, isn't it? And a search for deeper meanings, for the reasons behind the way things are — well, that's not exactly formal religion (it doesn't involve parasitic social constructs designed to exploit believers), but it is, perhaps, the religious impulse that formal religion evolved to exploit. Which is getting uncomfortably close to saying that neurologically, the scientific and religious impulses are different facets of the same thing.

Yes, all those mush-mouthed self-proclaimed would-be reconcilers have been saying that shit for decades. I still bet you never thought you'd read it here.

But bear with. A compulsion to find meaning and order. When there is a pattern to be found, and enough usable data to parse it, the adaptive significance is obvious: you end up using the stars to predict when the Nile is going to flood its banks. If there is no data, or no pattern, you find it anyway, only it's bogus: thunder comes from Zeus, and Noah surfed a tidal bore that carved out the Grand Canyon in an afternoon. Lefty talks in metaphors sometimes, so even when it gets something right it's not the best at communicating those insights— but that's okay, because Mr. Right is just across the hall, unsullied, unspecialized, picking up the slack.

Only what if, now, we're acquiring data that Mr. Right can't handle? The Human brain is not designed to parse the spaces between galaxies or between quarks. The scales we evolved to handle extend up or down a few orders of magnitude, losing relevance at each iteration. Are things below the Planck length really, empirically more absurd than those at everyday classical scales, or is it just that brains shaped to function at one scale aren't very good at parsing the other?

Maybe this is where Lefty really comes into his own. Like the thermoregulating feather that got press-ganged, fully-formed, into flight duty, perhaps the bogus-pattern-matching, compulsive purpose-seeking, religious wetware of the brain is most suited for finding patterns it once had to invent, back before there were enough data available to justify such cosmological pretzel logic. Perhaps the next stage is to rewire Mr. Right in Lefty's image, turn the whole brain into a lateral-parsing parallel-processor. Perhaps the next stage of scientific enquiry can only be conveyed by speaking in tongues, practiced by colonies of monks whose metaphors must be parsed by the nonconscious modules of Siri Keeton and his synthesist siblinghood. Maybe the future is a fusion of the religious and the empirical.

Of course, the obvious rejoinder is: if all this late-breaking twenty-first-century data is enough to let the religious impulse do something useful for a change, why is it that religious fundamentalists are still such colossal boneheads? Why, if delusion has segued into profound insight, do half the Murricans out there still believe that the universe is six thousand years old? Why do two thirds of them believe in angels?

And the obvious answer is that, appearances notwithstanding, these people are not living in the twenty-first century at all, but the fourteenth. They walk among us locked into a cultural diving bell reeled out along the centuries, hermetically sealed, impervious to any facts or insights more recent than the spheroid Earth (or even older, in the case of at least one ignorant cow on The View). I can only wonder what would happen if somehow that brittle armor were to shatter, if all this real data were to wash over them and somehow penetrate the circuitry that informs their spastic gyrations and religious gibbering. Would they serve up a Theory of Everything? Would the rest of us recognize it if they did?

Probably no, and probably not. It's just idle speculation, smoke blown out my mind's ass. Still. Might be a story in it somewhere: the day when religion subsumed science, and It Was Good.

At least no one could accuse me of getting into a rut.


Monday, September 3, 2007

Wolbachia cronenbergium

My, the folks over at the Venter Institute have been busy lately. First they changed one microbe species into another by physically replacing its entire genome. They did this in their quest to create a synthetic organism, basically a chassis with the absolute minimum number of genes necessary for life, which could then be loaded up with other customized genes designed to act for the betterment of humanity and the environment (read: the good of Venter stockholders). Now they've discovered that Nature herself has done them one better, by incorporating the complete genome of a parasitic bacterium called Wolbachia into the code of fruit flies: two complete genotypes for the price of one (original article here; much more accessible press release over here).

Some of you may remember ßehemoth, from the rifters books: it was basically mitochondrion's nasty cousin, and like mitochondria it brought its own genome into the host cell. This is a big step further: Wolbachia's code isn't just hanging out in the cell, it's been incorporated into the nuclear DNA of the host itself. The host is not infected with Wolbachia; there are no bacteria cruising the cytoplasm. Rather, the complete recipe for building the bug has been spliced into the host's code— and since the odds of such a big chunk of data (over a megabase) getting thus incorporated without playing any functional role are pretty small, chances are that this embedded genotype is doing something for the host organism. This is assimilation: the dicks of Borg drones everywhere should be shriveling with collective performance anxiety.

Two major implications come immediately to mind. The first is that conventionally-derived genotypes sequenced to date might be all washed up, since bacterial DNA is routinely scrubbed from such results as "contamination"; but if this new phenomenon is widespread (and Wolbachia is one of the world's most abundant parasites of invertebrates), a lot of the bathwater we've been throwing out might actually be the baby. And the second implication, well —

Anyone remember David Cronenberg's remake of "The Fly"...?

(Illo credit, as far as I can tell, goes to the University of Rochester.)


Monday, August 27, 2007

WoW! Pandemic!

Today's post comes on the heels of a) me answering backlogged questions from XFire's gaming community, and b) grumbles from the peanut gallery about the recent lack of shiny techy science-speak on the 'crawl. It just so happens that today's subject combines elements of both, and holy shit is it cool: a paper in Lancet describing the epidemiology of an unintended plague that raged through the World of Warcraft back in 2005 (and thanks to Raymond Nielson for the heads-up). The figures presented in this paper — which, I emphasize, appears in one of the world's most prestigious medical journals — include a screen shot of corpses in WoW's urban areas.

The plague itself was a glitch: a disease whose original range was supposed to be limited only to areas where high-level players could venture, and which was — again, to high-level players — merely a nuisance. The problem was, the plague cut down low-level players like kibble in a cat-food dish, and as Crichton once observed, Life Will Find A Way.

The bug hitchhiked out of its original home turf in the blood of high-level characters teleporting back to their hearthstones (analogous, the authors point out, to airline travel in a real-world outbreak). Players' pets got infected, and spread the disease. NPCs, built strong for reasons of game play, acted as infectious reservoirs, not dying themselves but passing the germ on to anyone they came into contact with.

Whole villages were wiped out.

Lofgren and Fefferman point out that this completely unintentional "Corrupt Blood" outbreak was in many ways more realistic than dedicated supercomputer simulations designed to model real epidemics, simply because a real person stood behind each PC in the population. While real-world models have to use statistical functions to caricature human behavior, WoW's outbreak incorporated actual human behaviour (for example, a number of healers spontaneously acted as "first responders", rushing into infected areas to try and help the sick — and in the process spread the bug to other areas when they moved on). It's true that the ability of WoW characters to resurrect introduces a certain level of unrealism into the picture; but it's also true that players generally get so invested in their characters that they don't throw even those renewable lives away unnecessarily. More to the point, the new paradigm doesn't have to be perfect to be a vast improvement over the current state of the art.
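The dynamics Lofgren and Fefferman describe fit in a few dozen lines of agent-based model. Here's a minimal sketch (every parameter is invented; none of it comes from the Lancet paper, and I've left out healers and resurrection): high-level characters shrug off infection and act as persistent carriers, low-level ones die, and hearthstone teleports seed new zones.

```python
import random

random.seed(42)

# Minimal agent-based sketch of Corrupt-Blood-style dynamics.
# All parameters are invented for illustration.
N_ZONES, N_PLAYERS, STEPS = 5, 300, 60

class Player:
    def __init__(self):
        self.zone = random.randrange(N_ZONES)
        self.high_level = random.random() < 0.3   # high-level: survives infection
        self.state = "S"                          # S(usceptible), I(nfected), D(ead)

players = [Player() for _ in range(N_PLAYERS)]
players[0].state = "I"         # patient zero
players[0].high_level = True   # a high-level raider: a persistent carrier,
                               # playing the same role as the NPC reservoirs

for _ in range(STEPS):
    for p in players:
        if p.state != "I":
            continue
        for q in players:      # infect susceptibles sharing the zone
            if q.state == "S" and q.zone == p.zone and random.random() < 0.02:
                q.state = "I"
        if not p.high_level and random.random() < 0.1:
            p.state = "D"      # low-level characters get cut down
        if random.random() < 0.05:
            p.zone = random.randrange(N_ZONES)   # hearthstone teleport seeds a new zone

dead = sum(p.state == "D" for p in players)
print(f"{dead} of {N_PLAYERS} characters dead after {STEPS} ticks")
```

Even this toy version reproduces the qualitative features above: carriers who travel turn a local glitch into a pandemic, and the body count falls entirely on the low-level population.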

L&F suggest that what happened once as a mistake could happen again by design — that MMORPGs could be a valuable tool for real epidemiological studies, by incorporating plausible plagues with known parameters as part of the in-game experience. Players are already used to sickness, disease, and death; that's what makes the game so much fun. Do this right, and you could do population-level doomsday studies repeatedly, under controlled conditions, incorporating levels of behavioural realism far beyond what any purely statistical model could manage. Even Mengele didn't have this kind of sample size.

I can see a lot of research being done this way, and not just epidemiological. There are martial and economic possibilities, too. I can see Homeland Security getting involved. I can see national policies increasingly based on insights gleaned from fantasy simulations — and I can see such policies being played from the inside, by mages and blood elves who might have their own agendas to pursue...

Damn. The story almost writes itself.


Saturday, June 23, 2007

Nature Nurtures.

The Nature interview went pretty well, after a start-up technical glitch or two. I had a blast. The ideas were thick upon the ground. (I especially liked Ken MacLeod's premise of military robots developing self-awareness on the battlefield due to programming that gave them increasingly-complex theories-of-mind as a means of anticipating enemy behaviour.) I got in references to fellatio, child pornography, and Paris Hilton's enema (a subject which Joan Slonczewski explicitly stated she was not going to run with, or even mention by name.) Oh, and I also talked about, you know, some biology-in-science-fiction stuff. I don't know how much of it will survive the edit, but we'll find out in early July.

But the real cherry on the sundae? I'm not sure how definite this is, but it sounded as though my cat Banana — aka Potato, aka Spudnik — is going to appear in Nature.

My cat. Nature.

I have never been so proud.


Thursday, May 10, 2007

The Uplift Protein

Neuropsin, that is. A prefrontal-cortex protein involved in learning and memory. There's this one variant that's peculiar to us Humans, 45 amino acids longer than the standard model handed out to other primates, and a team of Chinese researchers have just nailed the gene that codes for it. And the really cool part? Utterly ignoring all those some-things-man-was-not-meant-to-know types, they spliced the causal mutation into chimpanzee DNA, which then started to synthesise the type-II variant. No word yet on how far they let that stretch of code iterate. No word on how many months away we are from building chimps with human-scale intelligence.

The actual paper isn't out yet. But I'm really hoping my U of T library access is still active when Human Mutation prints the details.


Saturday, May 5, 2007


You may have seen this already. It's been out for a few days now. And at first glance it's nothing special: technology controlled by brainwaves through an EEG electrode interface, which is so far behind the cutting edge that you'll be finding it in games before the end of the year. But check out this quote as to why, exactly, the military would even want to develop brain-activated binoculars:

The idea is that EEG can spot "neural signatures" for target detection before the conscious mind becomes aware of a potential threat or target ... In other words, like Spiderman's "spider sense," a soldier could be alerted to danger that his or her brain had sensed, but not yet had time to process.

So. Another end run around the prefrontal cortex in the name of speed and efficiency. I'm telling you, nobody likes the pointy-haired boss these days...


Thursday, May 3, 2007

The anti-Moore's Law

Anyone who's read my fiction has probably figured out my perspective on life-support/environmental issues. I tend not to talk about such stuff here, not because I don't find it relevant or important, but because it's not new or cutting edge; the non-self-aggrandizing parts of this 'crawl serve as a kind of scratch pad for things I find challenging or thought-provoking in some way, and it's been a while since the science on habitat destruction, species loss, and climate change has done anything but reinforce grim conclusions decades old.

Today, though, I make an exception because of two items in juxtaposition: first, it turns out that the most pessimistic climate-change models were in fact way too naively cheerful, and that the Arctic icecap is melting three times faster than even Cassandra foresaw. And secondly, our ability to monitor such changes is declining thanks to decreasing investment in orbital earth-monitoring programs— to the point where satellites are actually becoming "less capable" over time. The technology is devolving.

And this is a little bit on the new side. Like all the other Children of Brunner, I always knew the place was turning to shit— but I'd at least hoped that technology would let us watch it happen in hi-def.

I keep saying it, but no one believes me: I'm an optimist...


Wednesday, May 2, 2007

Consciousness, Learning, and Neurochips

I'm starting this new post both to take the weight off the old one (which is growing quite the tail— maybe I should look into setting up a discussion forum or something), and also to introduce a new piece of relevant research. Razorsmile said

Conscious trains the subconscious until it is no longer needed.

And then Brett elaborated with

that could be how conscious thought is adaptive. It doesn't do anything even remotely well, but it can do anything. It is the bridge between something you've never done before and something that you do on skill.

Which I'll say: sure, that's certainly how it seems subjectively. But I have three flies to stick in that ointment:

1. Given the existence of consciousness to start with, what else could it feel like? Supposing it wasn't actually learning anything at all, but merely observing another part of the brain doing the heavy lifting, or just reading an executive summary of said heavy lifting? It's exactly analogous to the "illusion of conscious will" that Wegner keeps talking about in his book: we think "I'm moving my arm", and we see the arm move, and so we conclude that it was our intent that drove the action. Except it wasn't: the action started half a second before we "decided" to move. Learning a new skill is pretty much the same thing as moving your arm in this context; if there's a conscious homunculus watching the process go down, it's gonna take credit for that process — just like razorsmile and brett just did — even if it's only an observer.

2. Given that there's no easy way to distinguish between true "conscious learning" and mere "conscious pointy-haired-boss taking credit for everyone else's work", you have to ask, why do we assume consciousness is essential for learning? Well, because you can't learn without being con--

Oh, wait. We have neural nets and software apps that learn from experience all the time. Game-playing computers learn from their mistakes. Analytical software studies research problems, designs experiments to address them, and carries out its own protocols. We are surrounded by cases of intellects much simpler than ours, capable of learning without (as far as we know) being conscious.
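If you want to see just how little machinery "learning from experience" actually requires, here's a toy sketch: a bare-bones perceptron (the standard textbook update rule, nothing specific to any system mentioned above) picking up the logical AND function purely from examples. No homunculus anywhere in the loop.

```python
# A minimal learner with nobody home: a single perceptron trained
# on input/output examples of logical AND. Textbook update rule:
# nudge the weights whenever the prediction misses the target.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Thirty-odd lines of arithmetic, and it "learns" in exactly the behavioral sense we credit ourselves with. Whether anything like experience goes along for the ride is, of course, the whole question.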

3. Finally, I'd like to draw your attention to this paper that came out last fall in Nature. I link to the pdf for completists and techheads, but be warned— it's techy writing at its most opaque. Here are the essential points: they stuck neurochips into the brains of monkeys that would monitor a neuron here and send a tiny charge to this other neuron over there when the first one fired. After a while, that second neuron started firing the way the first one did, without further intervention from the chip. Basically, the chip forces the brain to literally rewire its own connections to spec, resulting in changes to the way the monkeys move their limbs (the wrist, in this case).
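The logic of the trick is simple enough to fake in a few lines. Below is a toy simulation of the idea — emphatically my own cartoon, not the actual experimental protocol or its parameters: a "chip" stimulates neuron B whenever neuron A fires, coincident firing strengthens the A→B connection (plain Hebbian "fire together, wire together"), and by the time the chip switches off, B follows A on its own.

```python
import random

def run_neurochip_sim(steps=2000, seed=1):
    """Toy model of activity-contingent stimulation. The 'chip' drives
    neuron B every time neuron A fires; coincidences strengthen the
    A->B weight until B follows A without the chip."""
    rng = random.Random(seed)
    w = 0.0            # A->B synaptic weight, starts disconnected
    threshold = 0.5    # B fires when its drive exceeds this
    a_count, b_follows = 0, 0
    for step in range(steps):
        a_fires = rng.random() < 0.2        # A fires spontaneously
        chip_on = step < steps // 2         # chip active for first half only
        drive = w * a_fires + (1.0 if (chip_on and a_fires) else 0.0)
        b_fires = drive > threshold
        if a_fires and b_fires:             # Hebbian: coincidence -> stronger link
            w = min(1.0, w + 0.05)
        if not chip_on and a_fires:         # tally B's behavior after chip-off
            a_count += 1
            b_follows += b_fires
    return w, (a_count > 0 and b_follows == a_count)

w_final, b_tracks_a = run_neurochip_sim()
```

With the chip off for the whole second half, B still fires in lockstep with A — the rewiring has been imposed from outside, exactly the point that makes the real result so unsettling.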

They're selling it as a first step in rehabilitating people with spinal injuries, impaired motor control, that kind of thing. But there are implications that go far further. Why stop at using impulses in one part of the brain to reshape wiring in another? Why not bring your own set of input impulses to the party, impose your new patterns from an outside source? And why stop at motor control? A neuron is a neuron, after all. Why not use this trick to tweak the wiring responsible for knowledge, skills, declarative memory? I'm looking a little further down this road, and I'm seeing implantable expertise (like the "microsofts" in William Gibson's early novels). I'm looking a little further, and seeing implantable political opinions.

But for now, I've just got a question. People whose limbs can be made to move using transcranial magnetic stimulation sometimes report a feeling of conscious volition: they chose to move their hand, they insist, even though it's incontrovertible that a machine is making them jump. Other people (victims of alien hand syndrome, for example) watch their own two hands get into girly slap-fights with each other and swear they've been possessed by some outside force-- certainly they aren't making their hands act that way. So let's say we've got this monkey, and we're rewiring his associative cortex with new information:

Does he feel as if he's learning in realtime? Can he feel crystalline lattices of information assembling in his head (to slightly misquote Gibson)? Or is the process completely unconscious, the new knowledge just there the next time it's needed?

I bet we'd know a lot more about this whole consciousness thing, if we knew the answer to that.

Labels: ,

Friday, April 27, 2007

Blindsight (the malady, not the book): better than the other kind?

Now here's a fascinating study: turns out that victims of blindsight can see better than so-called "healthy" individuals. At least, one fellow with a patchy version of the condition was able to detect subtler visual cues in his blind field than in his sighted one. (Here's the original paper; here's a summary.) This suggests that certain "primitive" traits in our neurological evolution didn't so much disappear as get ground beneath the boots of more recent circuitry, and that — once released from those Johnny-come-lately overlays — they come off the leash. And primitive or not, they're better than what came after.

Or in other words, once again, the reptile brain could really shine if the pointy-haired homunculus would just get the hell out of the way.

I wrote a story back in the nineties with a similar punchline — that the hindbrain was still alive in its own right, still potentially autonomous, and that only after the neocortex had died was it able to wake up, look around, and scream in those last brief moments before it too expired. But now I'm thinking I didn't go far enough — because after all, who's to say the reptile brain has to die when the upper brain does? I mean sure, we've got the Terry Schiavos and the other fleshy rutabagas of the world, clusters of organs and bed sores on life support. But we've also got the schizophrenics, who hear voices and won't meet our eyes and whose frontal lobes are smaller than most would consider normal. And most frighteningly of all, we've got these other folks, people with heads full of fluid, mid- and hindbrains intact, cerebra reduced to paper-thin layers of neurons lining the insides of empty skulls — wandering through life as engineers and schoolteachers, utterly unaware of anything at all out of the ordinary until that fateful day when some unrelated complaint sends them into an MRI machine and their white-faced doctors say, Er, well, the good news is it can't be a brain tumor because...

There's a range, in other words. You don't need anywhere near a complete brain to function in modern society (in fact, there are many obvious cases in which having a complete brain seems to be an actual disadvantage). And in a basic survival sense, the ability to write, appreciate the music of Jethro Tull, and do other "civilised" things isn't really that important anyway.

So now I'm thinking, tewwowist virus: something engineered to take out higher brain functions while leaving the primitive stuff intact. Something that eats away at your cognitive faculties and lets your inner reptile off the leash, something that strips your topheavy mind down to its essentials, something that speeds your reflexes and cranks your vision even as it takes the light from your eyes.

I'm thinking zombies. Not the shuffling Romero undead or the sentient philosopher's metaphor, not even the drug-addled brain-damaged pseudoresurrectees of the real-world Caribbean. I'm thinking something faster and more rigorous and more heartbreaking, far more dangerous and far tougher to kill, and I'm thinking hey, if I can do it for vampires...

I'm also thinking of writing another book.

Labels: ,

Wednesday, April 25, 2007

"It's 20 light years away. We can go there."

Now that's the kind of attitude I like to see coming from a legitimate authority-- to wit, Dimitar Sasselov of the Harvard-Smithsonian Center for Astrophysics, quoted in today's NY Times. He was talking about Gliese 581c, a potentially earth-type planet orbiting a dim red dwarf in the constellation of Libra. 1.5 times Earth's radius; 5 times the mass. Mean temperature somewhere between 0 and 40°C, solidly in the Goldilocks Zone for liquid water. A type of planet thought by Sasselov to be not only congenial to life, but more congenial than Earth.
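Those two numbers alone buy you some back-of-envelope physics. Taking the quoted figures at face value (they're early estimates, so treat the results accordingly), surface gravity scales as mass over radius squared, and bulk density as mass over radius cubed:

```python
# Back-of-envelope from the figures quoted above: 5 Earth masses,
# 1.5 Earth radii. Both are early estimates, so these are rough.
EARTH_G = 9.81          # m/s^2, Earth's surface gravity
mass_ratio = 5.0        # Gliese 581c mass in Earth masses
radius_ratio = 1.5      # Gliese 581c radius in Earth radii

# g = G*M/r^2, so relative to Earth it scales as M / r^2
surface_gravity = EARTH_G * mass_ratio / radius_ratio**2   # ~21.8 m/s^2

# density scales as M / r^3
relative_density = mass_ratio / radius_ratio**3            # ~1.48x Earth
```

So you'd weigh a bit over twice what you do here, on a world half again as dense as ours — consistent with a big ball of rock and iron rather than a gas puffball. Congenial is relative.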

Of course, you probably know this already. It's on boingboing, after all, and Yahoo, and Nature, and a thousand other websites. (Science, my usual go-to source for this kind of thing, is still asleep at the wheel as of this posting.) What you probably don't know, however, is that there's a pretty specific real-world connection between Gliese 581c and Blindsight.

You see, we don't really know all that much about 581c yet. We got a mass, and we got a distance-from-primary, and we got an orbital period (11 days), and we got all of that by watching Gliese 581 wobbling slightly as its planets tugged gravitationally on its sleeve. We don't even know if 581c has an atmosphere, and if so, whether it's closer to ours or Venus's.

But there are plans to find out, and they involve the use of a suitcase-sized Canadian satellite called MOST (also known as "The Humble", by virtue of its teensy dinner-plate of a mirror). Despite its small physical size, MOST is well-suited for picking up the atmospheric signatures of extrasolar planets, and it'll be turning its glassy eye towards Libra in the near future. The Principal Investigator behind the MOST is a guy name of Jaymie Matthews, who acted as my unpaid astrophysics consultant (well, paid in pizza and beer, I guess) for Blindsight.

And now, after helping me chase aliens through my own brainstem, he's gonna be looking for real ones at Gliese 581. How cool is that?

Labels: , , ,

Monday, April 23, 2007

Another Step Towards the Maelstrom

Those of you who read Maelstrom might remember what that book was named for: the frenetic chainsaw fast-forward jungle that the Internet had evolved into, infested by the virtual predators and parasites that evolved after we gave genes to spambots and let them breed at 50 generations/sec. (Those of you who didn't read Maelstrom can still give it a shot, if you're up for the challenge.) Here's another benchmark on the way to that future: net bots competing for host machines to zombify, repairing the security holes that they themselves exploited so that competitors can't get in the same way. Imagine a beast that actually installs necessary Windows patches onto your machine-- but only after it's already built a nest behind your firewall. It's vaguely reminiscent of those male insects with genitals that look like pedestals of dental instruments: once they inseminate the female, they secrete a kind of crazy glue and spatula it over her genital pore to keep competitors from messing with their sperm. Or the even cooler (albeit possibly apocryphal) case of reproductive homosexual rape in hanging flies; the really successful males don't even bother to inseminate females directly, they bugger other males. Their sperm then migrate to the gonads of their victim, and when said victim finally makes it with a female, he inseminates her with the sperm of the male who raped him. (More than one clergyman has told me that you can learn a lot about the mind of God by studying His creations. I wonder what they'd make of these guys.)

Of course, this is still special creation, not evolution. The bots are intelligently designed; nobody's given them genes yet (or perhaps the coders themselves are a kind of "extended genotype", albeit a Lamarckian one. Life always hits you upside the head with this recursive chicken/egg stuff whenever you look too closely.) (Hey-- maybe there's a story in that...)

Still, it's another step in the right direction. It's part of the arms race. Only a matter of time before someone figures out that a random number generator and a tilt bit here and there can unleash these things to evolve on their own, without always having to get respawned from the shop.
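And honestly, the recipe is not complicated. Here's a toy sketch of the idea — my own cartoon, with "exploit effectiveness" stood in for by something trivially countable: genomes are bit strings, the respawn step occasionally flips a random "tilt bit", and selection keeps whatever spreads best. That's the whole engine.

```python
import random

def evolve(pop_size=20, genome_len=16, generations=50, seed=0):
    """Toy bot evolution: each genome is a list of bits; 'fitness' is
    just the number of 1-bits (a stand-in for exploit effectiveness).
    Copy-with-occasional-tilt-bit plus selection is all it takes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    initial_best = max(sum(g) for g in pop)
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)
        survivors = pop[: pop_size // 2]     # selection: best half persist
        children = []
        for parent in survivors:
            child = parent[:]
            if rng.random() < 0.5:           # occasional random "tilt bit"
                i = rng.randrange(genome_len)
                child[i] ^= 1                # flip it
            children.append(child)
        pop = survivors + children           # respawn from the survivors
    return initial_best, max(sum(g) for g in pop)

first_best, final_best = evolve()
```

Because the survivors carry over unchanged each generation, the best genome can only ratchet upward. No designer in the loop after the first spawn — which is exactly the part nobody seems in a hurry to try.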

Personally, I think they're taking way too long. I can hardly wait to see what happens.

(Thanks to Raymond Neilson and Alistair Blachford for the link.)

Labels: , ,