Wednesday, January 7, 2009

Iterating Towards Bethlehem

Most of you probably know about Turing machines: hypothetical gizmos built of paper punch-tape, read-write heads, and imagination, which can — step by laborious step — emulate the operation of any computer. And some of you may be old enough to remember the Sinclair ZX-80— a sad little personal computer so primitive that it couldn't even run its video display and its keyboard at the same time (typing would cause the screen to go dark). Peer into the darkness between these artifacts, stir in a little DNA, and what do you get?

This hairy little spider right here. A pinpoint brain with less than a million neurons, somehow capable of mammalian-level problem-solving. And just maybe, a whole new approach to cognition.

This is an old story, and a popsci one, although I've only discovered it now (with thanks to Sheila Miguez) in a 2006 issue of New Scientist. I haven't been able to find any subsequent reports of this work in the primary lit. So take it with a grain of salt; as far as I know, the peer-reviewers haven't got their talons into it yet. But holy shit, if this pans out…

Here's the thumbnail sketch: we have here a spider who eats other spiders, who changes her foraging strategy on the fly, who resorts to trial and error techniques to lure prey into range. She will brave a full frontal assault against prey carrying an egg sac, but sneak up upon an unencumbered target of the same species. Many insects and arachnids are known for fairly complex behaviors (bumblebees are the proletarian's archetype; Sphex wasps are the cool grad-school example), but those behaviors are hardwired and inflexible. Portia here is not so rote: Portia improvises.

But it's not just this flexible behavioral repertoire that's so amazing. It's not the fact that somehow, this dumb little spider with its crude compound optics has visual acuity to rival a cat's (even though a cat's got orders of magnitude more neurons in one retina than our spider has in her whole damn head). It's not even the fact that this little beast can figure out a maze which entails recognizing prey, then figuring out an approach path along which that prey is not visible (i.e., the spider can't just keep her eyes on the ball: she has to develop and remember a search image), then follow her best-laid plans by memory including recognizing when she's made a wrong turn and retracing her steps, all the while out of sight of her target. No, the really amazing thing is how she does all this with a measly 600,000 neurons— how she pulls off cognitive feats that would challenge a mammal with seventy million or more.

She does it like a Turing Machine, one laborious step at a time. She does it like a Sinclair ZX-80: running one part of the system then another, because she doesn't have the circuitry to run both at once. She does it all sequentially, by timesharing.

She'll sit there for two fucking hours, just watching. It takes that long to process the image, you see: whereas a cat or a mouse would assimilate the whole hi-res vista in an instant, Portia's poor underpowered graphics driver can only hold a fraction of the scene at any given time. So she scans, back and forth, back and forth, like some kind of hairy multilimbed Cylon centurion, scanning each little segment of the game board in turn. Then, when she synthesizes the relevant aspects of each (God knows how many variables she's juggling, how many pencil sketches get scribbled onto the scratch pad because the jpeg won't fit), she figures out a plan, and puts it into motion: climbing down the branch, falling out of sight of the target, ignoring other branches that would only seem to provide a more direct route to payoff, homing in on that one critical fork in the road that leads back up to satiation. Portia won't be deterred by the fact that she only has a few percent of a real brain: she emulates the brain she needs, a few percent at a time.
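For the code-minded: what Portia's doing is, in cartoon form, plain old time-multiplexing. You emulate a big visual buffer with a tiny one by scanning the scene a patch at a time and keeping only a running summary. Here's a toy sketch of the idea (the scene, the patch size, and the one-number "feature" are all my inventions, nothing to do with actual spider vision):

```python
# Toy time-multiplexed scene analysis: a "brain" whose working buffer
# holds only one small patch at a time, scanning a larger scene
# sequentially and keeping a running summary -- many steps, tiny memory.

def scan_scene(scene, patch_w):
    """Process the scene one patch at a time; never hold more than
    patch_w cells in the 'buffer' at once."""
    summary = {"brightest_patch": None,
               "brightest_mean": float("-inf"),
               "patches_scanned": 0}
    for start in range(0, len(scene), patch_w):
        patch = scene[start:start + patch_w]   # the entire working buffer
        mean = sum(patch) / len(patch)         # crude per-patch "feature"
        if mean > summary["brightest_mean"]:
            summary["brightest_mean"] = mean
            summary["brightest_patch"] = start // patch_w
        summary["patches_scanned"] += 1
    return summary

scene = [1, 2, 1, 9, 8, 9, 2, 1, 2]   # pretend retina: "prey" sits in patch 1
print(scan_scene(scene, 3))
```

Same answer a fully parallel brain would get; it just arrives one laborious patch at a time, through a buffer three cells wide.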

I wonder what the limits are to Portia's painstaking intellect. Suppose we protected her from predators1, and hooked her up to a teensy spider-sized glucose drip so she wouldn't starve. It takes her a couple of hours to capture a snapshot; how long will it take the fuzzy-legged little beauty to compose a sonnet?

Are we looking at a whole new kind of piecemeal, modular intellect here? And why the hell didn't I think of it first?

Update 9/1/08: Tarsitano & Jackson published these results in Animal Behaviour. Thanks to Kniffler for the heads-up.


1 And isn't that a whole other interesting problem, how this little beast can sit contemplating her pedipalps for hours on end in a world filled with spider-eating predators? Do certain antipredator reflexes stay active no matter what, or does she just count on immobility and local cover to hide her ass while she's preoccupied with long-term planning? I'd love to see the cost-benefit of this tradeoff.

Portia photo: by Akio Tanikawa, scammed from Wikipedia under a CC licence.
Maze illo: scammed from New Scientist, under a nine-tenths-of-the-law licence.


Friday, October 3, 2008

Head Cheese Gone Wild

I was plenty pleased when little porridges of cultured neurons took their first baby steps towards running flight simulators or operating robots in the lab; I was downright smug when folks noticed that I'd got there first. Now, though, researchers from the Missouri University of Science and Technology are planning on putting head cheeses in charge of real-world power grids in half a dozen countries, including China and Mexico (but not including, interestingly enough, the United States). According to this article, "…these networks could control not only power systems, but also other complex systems, such as traffic-control systems or global financial networks."

Traffic control systems. Financial networks. Being run by meaty neuron networks whose thought processes are, by definition, opaque. For real.

I wrote a trilogy about just this scenario. It did not end well (just ask Kirkus). Maybe someone could pass a copy on to this Venayagamoorthy dude.

Next up, two papers in today's issue of Science: one on the evolution of religious belief, the other on the perception of imaginary patterns under conditions of perceived helplessness. These dovetail nicely with some slightly staler findings on the arrogance of stupid people, the inherent fear responses of political conservatives, and last night's competing North-American neocon/centrist debates. But I have to actually watch those debates before I blog on that. (I was out at Don Giovanni last night. I didn't even know that they had dry-ice smoke machines in 1787…)


Sunday, September 28, 2008


Yeah, I know. Merciful extended silence again.

Not that there's nothing to talk about. There's a paper just out in Consciousness & Cognition which purports to prove that logical thinking requires consciousness (which would seem to contradict other findings, but I haven't read the paper yet so who knows). I've been ruminating on the inherent and hardwired dumbness of electorates throughout this continent, and various recent neurological findings — not to mention archival analysis of "Hardy Boys" novels — that might cast some light on why this would be. My name seems to be getting cited as an exemplar of Gloom in an online squabble about "The New Dismal" in science fiction. And at long long last, I sent my first tale of the intrepid and grumpy starfarer Sunday Ahzmundin off to Gardner Dozois, who received it with somewhat greater enthusiasm than I was expecting, so that's good. (Thanks again to Ray for pointing out the inconsistencies in the penultimate draft of that story, and to all those others out there who threw rocks at him. You can stop now.)

But for various reasons — not the least being the necessity to prepare for a course that will probably end up being cancelled anyway, but which I have to gear up for regardless because we're only one registrant away from critical mass and the damn thing starts on Wednesday if it starts at all — I haven't had time to set all that stuff to screen yet. So in the meantime I'll simply point out that the broken Fizerpharm Vampire Domestication slideshow has at last been fixed, and is running again over here*.

*It is not yet running over on the Backlist page, though; that's a different Flash file, which I'll get around to fixing in turn eventually


Friday, September 19, 2008

Avast! Here Be a Blindsightinator for Ye!

Aye me hearties, be ye rememberin' that time in Blindsight when Rorschach, she be putting the sun in scurvy Szpindel's eyes?

"Argh, I be seein' naught," Szpindel be sayin', his timbers a'shiver.

"It be the EM fields," James be barking. "That be how they signal. The briney deep, she be fulla words, she be—"

"I be seeing naught," Szpindel be saying. "I be blind as the skipper with his patch on the wrong eye!"

"Yar," Bates be lassoing the capstan. "That be a pretty mess— blast those scurvy rads…"

And then when they be hiding below decks, Szpindel be putting words to it…

"Ya reached for it, ya scurvy dog. You near be catchin' it. That not be blind chance."

"Argh, not blind chance. Blindsight. Amanda? Where be ye, wench?"


"Aye. Nothing be wrong with ye receptors," he be saying. "Eye be working right enough, brain not be seein' the signal. Brain stem, he be mutineer. Arrgh."

Now those buggering cabin-boys from Denmark, they be laying claim to me booty. They be putting out "Action-blindsight in two-legged landlubbers that be having compasses on their skulls, Arggh", and they be staking their claim last winter in the Proceedings of the National Academy of Sciences.

They be asking me to be hanging their guts from the crowsnest, they e'er be blackening my horizon.


Monday, September 15, 2008

Pedophilia in a Pill

You may remember the case a few years back of the Floridian hypersexual pedophile whose depravity hailed from a brain tumor; the dude (rightly) got off, since he wasn't culpable for the wiring in his head. You may even remember me taking the next step (scroll down to June 30th on the right-hand side), and remarking that the tumor didn't really make a difference— nobody is responsible for the way their heads are wired, and the legal system had taken the first step (again, rightly) towards acknowledging that the very concept of culpability, while convenient, is neurologically unsound.

Exhibit B*: Phillip Carmichael, a former Oxfordshire headmaster and pedophile, exonerated after a court decided that his extensive collection of child porn had been amassed while under the influence of prescription drugs. Once again we see evidence that we are mechanical. The very phrase "control yourself" is dualist at its heart, a logical impossibility. It conjures up images of a driver fighting to stop a careening car with bad brakes. But the fact is, there is no driver. There is only the car— we are the car— and when the brake lines have been cut, careening is just what cars do. Medical professionals prescribed a bunch of pills to this man, and those pills literally turned him into someone else.

You might think that this would make people feel a bit more kindly towards natural-born kiddy-diddlers. After all, if it's a chemical that turns you into a pervert, you're not really culpable, are you? You're taking the same drugs Carmichael was; the only difference is that they're not being produced by the factory Pharm down the road, they're being produced in your own head. If anything, natural-born pedophiles have even less choice in the matter than did our Exhibit B; at least Carmichael could have chosen more competent medical counsel.

I would be willing to bet, though, that most people would not think more kindly of pedophiles after performing this thought experiment, and in fact most people would vilify and shout down anyone who dared to make excuses for these monsters. Anything to do with kids is, by definition, a motherhood issue; and motherhood issues by definition turn us into irrational idiots.

But our legal systems generally define culpability in terms of whether offenders know that their acts are against the law, and by that standard I guess some kind of punishment is called for. Still. Let's at least be consistent about it, shall we? We know that a human system called Phillip Carmichael deliberately broke the law; it just wasn't the same Phillip Carmichael who ended up in court after the drugs were withdrawn. That Carmichael had been rebooted back into a benign, Linux sort of personality. The evil child-molesting Microsoft OS had been wiped. So if you want to be consistent about this, put Carmichael back on drugs until the guilty iteration reappears. Then put him in jail.

At least you'd know you have the right guy.

*Thanks to Nas Hedron for the link


Friday, August 22, 2008

A Plague of Angels (or, Rorschach in your living room!)

Well, this is interesting. Intel has leapfrogged MIT on the whole magnetic-resonance schtick. They can wirelessly light a 60-watt bulb from almost a meter away, wasting only 25% of the broadcast energy in transit. This is a good thing, because "…the human body is not affected by magnetic fields," Josh Smith from Intel reassures us. "It is affected by electric fields. So what we are doing is transmitting energy using the magnetic field not the electric field." And I have to admit, it's heartening that the whole zapped-by-the-arc problem that electrocuted so many early-adopters seems to be a thing of the past.

I just have two teensy, niggling questions.

First up, in a world in which Peak Oil also seems to be a thing of the past — and in which the inextricably-linked issues of energy security and climate change grow increasingly troubling to anyone who isn't a) Michael Crichton and/or b) convinced that the Rapture will spirit them away and save their asses before the bill comes due — do we really want to be celebrating a technology that wastes a quarter of its kick before it even reaches its destination? Yes, the technology will improve over time; yes, efficiency will increase. But we're still talking about an omnidirectional broadcast here; even if the bulk of the signal strength passes in one direction, there's still going to be at least some wasted energy going out along the whole 360.

More to the point, though, is Smith's confident assertion that "the human body is not affected by magnetic fields". Maybe he's talking about a different model of human body. Maybe the model he's talking about comes with a Faraday cage built into the skull, and is not susceptible to the induction of religious rapture1, selective blindness2, or the impaired speech and memory effects3,4 that transcranial magnetic stimulation can provoke in our obsolete ol' baseline brains.

Or maybe, once Intel gets its way and this "worldchanging" technology saturates our living space with directed magnetic fields, we'll all just start seeing things, bumping into chairs, vomiting from inexplicable bouts of spontaneous nausea, and freaking out at the sight of angels and aliens5 swarming through our living rooms.

Granted, so far you have to sit down in a lab and wear a magnetic hair-net to experience the effects I've described. But I wonder how many appliance-feeding magnetic-resonance transmitters we'll be able to load into our apartments before hallucinogenic hotspots start spontaneously appearing in our living rooms. At which point our local utility will reclassify these side-effects from "bug" to "feature", and add a small additional charge for "multisensory entertainment" onto our monthly power bill.

I'm actually kind of looking forward to it. It's bound to be cheaper than cable.

(Photo credit: Australian PC Authority)

1. Ramachandran, V.S., and Blakeslee, S. 1998. Phantoms in the Brain: Probing the Mysteries of the Human Mind. William Morrow, New York.
2. Kamitani, Y., and Shimojo, S. 1999. Manifestation of scotomas created by transcranial magnetic stimulation of human visual cortex. Nature Neuroscience 2: 767-771.
3. Hallett, M. 2000. Transcranial magnetic stimulation and the human brain. Nature 406: 147-150.
4. Goldberg, C. 2003. Zap! Scientist bombards brains with super-magnets to edifying effect. Boston Globe 14/1/2003, pE1.
5. Persinger, M.A. 2001. The Neuropsychiatry of Paranormal Experiences. J. Neuropsychiatry & Clinical Neuroscience 13: 515-524.


Monday, July 28, 2008

Got Another One!

Nature published "Hillcrest v. Velikovsky" last week — and the very next day, this cog-sci dude named Mike Meadon posted an erudite and outraged blog entry on the insanity of the kind of world we live in, that such things could actually happen. Evidently he didn't realize that the work was fiction (until the famous PZ Myers gently pointed it out). And to give the man his due, his subsequent post was all mea culpaey, and he left the original posting intact as an object lesson on the virtues of skepticism about skepticism.

This is not the first time I've managed to get smart people to believe dumb things (although this may be the first time I've done so without meaning to). I used to do it all the time. Back in the day, a friend and I used some judicious if low-tech special effects to convince a visiting Brazilian scientist that the Deer Island house we were staying in was haunted. When all the blinds in her room shot up simultaneously at three a.m., I swear she never touched a single step on her way downstairs and out the door. She not only refused to step back inside the house, she high-tailed it right off the island. Did the rest of her field work out of Grand Manan. (In hindsight, we actually felt kind of bad about that.)

But perhaps my proudest moment was during my doctorate, when I convinced a couple of fellow grad students (in arts, granted, but still) that whenever I went into the field I had to strip naked and glue yellow sponges all over my body, because harbour seals couldn't see yellow wavelengths. (Why not just wear yellow clothes? you ask. Why, because it would have to be yellow rain gear — given the wet field environment — and rain gear is slick, i.e. reflective, i.e. the seals would still be able to see the glare if not the actual colour.) My victims were astonished, and profoundly impressed by my dedication to the cause — "There has to be a better way", they insisted — but when I begged them to name one ("because seriously, those fuckers hurt when you rip them off"), they came up blank. Nice Matisse t-shirts, though.

Of course, the word gets around. These days, all I have to do is open my mouth and pretty much anyone who knows me will accuse me of trying to bullshit them. Still. I'm frequently astonished at how easy it is to Punk the People. I'm finally getting around to reading Nassim Nicholas Taleb's The Black Swan, which takes way too long to get to the point but which makes a similar point: we as a species often believe the most absurd things as long as there's some kind of narrative attached. We are pattern-matchers, because patterns allow us to distill the environment into a series of simple rules. So we see patterns whether they exist or not, and stories that tie causes to any given phenomenon (I glue yellow sponges onto my naked body because harbour seals can't see yellow) are a lot more believable than those which simply report the same phenomenon in isolation (I glue yellow sponges onto my naked body). We are engines in search of narrative. Evidently this goes a long way towards explaining the inanity of most CNN headlines.

Not sure I buy it completely, though. If the telling of stories were really so central to the human condition, you'd think those of us who did it for a living would at least get a decent dollar out of it.


Monday, April 14, 2008

Living in the Past.

Most of you here have read Blindsight. Some of you have made it almost to the end. A few have even got as far as the references (I know this, because some of you have asked me questions about them). And so you might remember that old study Libet did back in the eighties, in which it was shown that the body begins to act on a decision a full half-second before the conscious self is aware of having made the decision. A lot of Blindsight's punchline hung on this discovery— because obviously, whatever calls an action into being must precede it. Cause and effect. Hence, the johnny-come-lately sense of conscious volition is bogus. We are not in control. I mean, really: a whole half a second.

Half a second? Chun Siong Soon and his buddies piss on Libet's half a second. Nature Neuroscience just released a study that puts Libet's puny electrodes to shame; turns out the brain is making its decisions up to ten full seconds (typically around seven) before the conscious self "decides" to act.

Ten whole seconds. That's longer than the attention span of a sitting president.

It all comes down to stats. Soon et al. took real-time fMRI recordings of subjects before, during, and after a conscious "decision" was made; then they went back and looked for patterns of brain activity prior to that "decision" that correlated with the action that ultimately occurred. What they found was a replicable pattern of brain activity that not only preceded the decision by several seconds, but which also correlated with the specific "decision" made (click a button with the right or the left hand). (Interestingly, these results differ from Libet's insofar as subjects reported awareness of their "decision" prior to the activation of the motor nerves, not afterwards. Whereas Libet's results suggested that action precedes conscious "decision"-making by a very brief interval, Soon et al.'s suggest that actual decision-making precedes conscious "decision"-making by a much longer one. Bottom line is the same in each case, though: what we perceive as "our" choice has already been made before we're even aware of the options.)
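The analysis logic is easy to caricature in code: record activity over time, then ask whether anything in the pre-"decision" window predicts the eventual choice better than a coin flip. A deliberately crude simulation of that logic (every number here is invented; this is the shape of the argument, not the actual fMRI pipeline):

```python
import random

random.seed(1)

def make_trial():
    """Simulate one trial: a weak preparatory bias appears ~7 s before
    the subject reports 'deciding' on left (0) or right (1)."""
    choice = random.randint(0, 1)
    bias = 0.3 if choice == 1 else -0.3
    # Ten 1-second samples of fake 'activity' leading up to the report;
    # the bias only contaminates the last seven of them.
    activity = [bias + random.gauss(0, 1) if t >= 3 else random.gauss(0, 1)
                for t in range(10)]
    return activity, choice

def predict(activity):
    """Predict the choice from the pre-report window alone."""
    return 1 if sum(activity[3:]) > 0 else 0

trials = [make_trial() for _ in range(2000)]
accuracy = sum(predict(a) == c for a, c in trials) / len(trials)
print(round(accuracy, 2))  # reliably above the 0.5 coin-flip baseline
```

The point being: nobody needs to find the button-pressing circuit. A weak, smeared-out bias in the pre-decision window is enough to beat chance, and beating chance is all it takes to show the information was there first.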

This isn't exactly mind reading. Soon and his buds didn't find a circuit that explicitly controls button-pressing behavior or anything. All they found was certain gross patterns of activity which correlated with future behavior. But we could not read that information if the information wasn't there; in a very real sense, your brain must know what it's going to do long before you do.

Obviously this can't be the whole story. If the lag between processing and perception were always that long, we would feel no sense of personal agency at all. It's one thing to think that you told your muscles to leap from the path of an approaching bus when the time discrepancy is a measly 400 millisecs; but not even organisms with our superlative denial skills could pretend that we were in control if our bodies had leapt clear ten seconds before it even occurred to us to move. So I would think this is more proof-of-principle than day-in-the-life. Still. As IO9 points out, given these results, how long before we can do without that stupid conscious part of us entirely?

Wired's online coverage is a bit more defensive. They bend over backwards to leave open some possibility of free will, invoking the hoary old "maybe free will acts as a veto that lets us stop the unconscious decision." But that's bogus, that's recursive: if consciousness only occurs in the wake of subconscious processing (and how could it be otherwise? How can we think anything before the thinking neurons have fired?), then the conscious veto will have the same kind of nonconscious precursors as the original intent. And since that information would be available sooner at the nonconscious level, it once again makes more sense to leave the pointy-haired boss out of the loop entirely.

But I'm going to take a step back and say that everyone here is missing the point. Neither this study nor Libet's really addressed the question of free will at all. Neither study asked whether the decision-making process was free; they merely explored where it was located. And in both cases, the answer is: in the brain. But the brain is not you: the brain is merely where you live. And you, oh conscious one, don't make those decisions any more than a kidney fluke filters blood.

(Oh, and I've figured out who the Final Cylon is. For real this time. Romo Lampkin's cat.)


Thursday, March 27, 2008

Your Brain is Leaking

This punch-happy little dude has been all over the net for the past week or so: easily the world's coolest crustacean even before then, insofar as how many lifeforms of any stripe can bash their furious little claws through the water so fast (accelerating at over 10,000G!) that the resulting cavitation bubbles heat up to several thousand degrees K? If their ferocious little chelipeds don't take you out, the shockwave alone will shatter you (well, if you're a piece of mantis-shrimp prey, at least).

The reason for their recent fame, though, is this paper in Current Biology, reporting that — alone of all the known species on the planet — these guys can see circularly polarised light. And that's just the latest trick of many. These guys see ultraviolet. They see infrared. They can distinguish ten times as many visible-light colors as we can (still only 100,000 — which you'd think would at least shut up those Saganesque idiots from Future Shop who keep blathering about the millions and millions of colors their monitors can supposedly reproduce). Each individual eye has independent trinocular vision. Mantis shrimp eyes are way more sophisticated than any arthropod eye has any right to be.

But what really caught my attention was a line in this Wired article (thanks to Enoch Cheng for the pointer):
"One idea is that the more complicated your sensory structure is, the simpler your brain can be... If you can deal with analysis at the receptor level, you don't have to deal with that in the brain itself."
Which is almost as cool as it is wrong. Cool because it evokes the image of alien creatures with simple or nonexistent brains which nonetheless act intelligently (yes, I'm thinking scramblers), and because these little crustaceans aren't even unique in that regard. Octopi are no slouches in the smarts department either — they're problem solvers and notorious grudge-holders — and yet half of their nervous systems are given over to manual dexterity. Octopi have individual control over each sucker of each tentacle. They can pass a pebble, sucker-to-sucker, from arm-tip to arm-tip. Yet their brains, while large by invertebrate standards, are still pretty small. How much octopus intelligence is embedded in the arms?

So yes, a cool thought. But wrong, I think: because what is all that processing circuitry in the mantis shrimp's eyes if not part of the brain itself? Our own retinas are nothing more than bits of brain that leaked across the back of the eyeball— and if the pattern-matching that takes place in our visual cortices happens further downstream in another species, well, it's still all part of the same computer, right? The only difference is that the modules are bundled differently.

But then this artsy friend points out the obvious analogy with motherboards and buses, and how integrating two components improves efficiency because you've reduced the signal transit time. Which makes me think about the "functional clusters" supposedly so intrinsic to our own conscious experience, and the possibility that the isolation of various brain modules might be in some way responsible for the hyperperformance of savants1.

So pull the modules apart, the cables between stretching like taffy — how much distance before you're not dealing with one brain any more, but two? Those old split-brain experiments, the alien-hand stuff — that was the extreme, that was total disconnection. But are we talking about a gradient or a step function here? How much latency does it take to turn me into we, and is there anything mushy in between?

Are stomatopod eyes conscious, in some sense? Is my stomach?

1 I would have put a link to the relevant article here, but the incompetent code over at The Economist's website keeps refusing to open up its online back-issue pdfs until I sign in, even though I already have. Three times now. Anyway, the reference is: Anonymous, 2004. Autism: making the connection. The Economist, 372(8387): 66.


Friday, March 14, 2008

In Praise of MPD

This month's New Scientist carries an opinion piece by Rita Carter, author of the imminent Multiplicity: The New Science of Personality. She's not the first to argue that multiple personalities may be adaptive (the whole backbone of the eighties' MPD fad was that they served to protect the primary persona from the stress of extreme abuse), nor is she the first to point out that MPD is just one end of a scale that goes all the way down to jes' plain folks adopting different faces for different social contexts (what Carter calls "normal multiplicity"). She does, however, suggest that "normal multiplicity could prove useful in helping people function in an increasingly complex world"; which raises the possibility that what we now think of as "pathological" multiplicity might prove useful in a hypercomplex world.

Cue the Gang of Four.

This is one of the themes introduced in Blindsight that I'm going to town on with Dumbspeech (okay, okay: State of Grace): that humanity is, in effect, splitting into a whole suite of specialized cognitive subspecies as a means of dealing with information overload. (You can see the rudiments of this in the high proportion of Aspies hanging out in Silicon Valley, perhaps.) But I've never encountered this Carter person before. Judging by her brief essay, I can't tell whether she's actually on to something or whether she's just putting neurogloss lipstick on the trivially obvious fact that it makes sense to behave differently in different situations (rather like making the Atkins Diet sound all high-tech and futuristic by describing it as "hacking the body").

Anyone here read her books? Are they any good?


Sunday, March 9, 2008

Mind Reading Technology...

...has been a staple of every low-budget piece of celluloid skiffy going back at least to that early-sixties Gerry Anderson puppet show Stingray (which no one with any dignity will admit to having watched, although I clearly remember the episode with the mind-reading chair). The Prisoner also featured an episode in which No. 6's dreams could be probed, and the various incarnations of Star Trek must have had a half-dozen such episodes among them although they all seem to run together after a while (the episode I'm thinking of had aliens with bumpy foreheads; does that help at all?).

Now here comes Kendrick Kay and his buddies in Nature with "Identifying natural images from human brain activity", and if they haven't actually vindicated all those cheesy narrative gimmicks, they've made a damn good first pass at it. They used fMRI scans to infer which one of 120 possible novel images a subject was looking at. "Novel" is important: the system trained up front on a set of nearly 2,000 images to localize the receptive fields, but none of those were used in the actual mind-reading test. So we're not talking about simply recognizing a simple replay of a previously-recorded pattern here. Also, the images were natural— landscapes and still-lifes and snuff porn, none of this simplified star/circle/wavey-lines bullshit.

The system looked into the minds of its subjects, and figured out what they were looking at with accuracies ranging from 32% to 92%. While the lower end of that range may not look especially impressive, remember that random chance would yield an accuracy of 0.8%. These guys are on to something.
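That 0.8% figure is just one-in-120, and it's worth running the numbers on how far even the "weak" 32% result sits above it. A quick back-of-the-envelope (the 100-trial count below is my own hypothetical; the paper's actual test-set size will differ):

```python
from math import comb

n_images = 120
chance = 1 / n_images            # probability of a correct guess per trial
print(f"chance accuracy: {chance:.3%}")   # ~0.833%, the 0.8% quoted above

def p_at_least(k, n, p):
    """Binomial tail: probability of k or more successes in n
    independent guesses, each succeeding with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Suppose (hypothetically) 100 test trials: how likely is a blind
# guesser to score 32% or better by luck alone?
p = p_at_least(32, 100, chance)
print(f"P(guesser scores >= 32/100): {p:.1e}")  # vanishingly small
```

Even the worst subject in the study is so far out on the tail of the guessing distribution that "fluke" isn't on the table.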

Of course, they're not there yet. The machine only had 120 pictures to choose from; tagging a card from a known deck is a lot easier than identifying an image you've never seen before. But Kay et al. are already at work on that; they conclude "it may soon be possible to reconstruct a picture of a person’s visual experience from measurements of brain activity alone." And in a recent interview Kay went further, suggesting that a few decades down the road, we'll have machines that can read dreams.

He was good enough to mention that we might want to look into certain privacy issues before that happens...


Thursday, March 6, 2008

Is this theory of yours accepted by any respectable authorities?

The long-awaited new Neuropsychologia's finally on the stands, and it's a theme issue on — wait for it — consciousness! Lots of articles on blindsight, interhemispheric signaling, anosognosia, all that cool stuff. And nestled in the heart of this month's episode is a paper by David Rosenthal entitled "Consciousness and its function".

Guess what. He doesn't think it has any.

From the abstract:
"...a number of suggestions are current about how the consciousness of those states may be useful ... I examine these and related proposals in the light of various empirical findings and theoretical considerations and conclude that the consciousness of cognitive and desiderative states is unlikely to be useful in these or related ways. This undermines a reliance on evolutionary selection pressures in explaining why such states so often occur consciously in humans."
Rosenthal's conclusion? Consciousness is just a side-effect, with no real adaptive value. And no, he didn't cite Blindsight. But we all know I went there first.

Somewhere else I went, back in 1991, has been making a few online waves over the past week or two: this brief Science article by Christner et al, suggesting that microbes play a major and hitherto-unsuspected role in shaping the world's weather. As Jeremy Ruhland pointed out a few days back, this is a wee bit reminiscent of a story I wrote in the early nineties — a post-environmental-apocalypse number in which vast colonies of cloud-dwelling weathermongering microbes had conspired to kick our asses. For a few years now I've been showing this slide whenever I want to make the point that sometimes you can hit the bullseye even when you have no fucking clue what you're talking about...

... because really, "Nimbus" was a spontaneous, unresearched brain fart based entirely on an old girlfriend's observation that "Ooh, look at those clouds... they almost look alive!" But CNN is not exactly the most prestigious source of scientific intel on the planet, and besides, Moffet was just starting to look back in 2002; he hadn't actually found anything. That was then; this is now. You can't get more prestigious than Science (well, unless you're Nature), and now we're looking at actual data.

Of course, this is nowhere near the cozy conjunction of Watts and Rosenthal. Christner et al. didn't even look at clouds per se, only at the precipitation that had dropped out of them. And it's not like they discovered any new and alien microbes; mostly they came up with plant pathogens. (Also, my microbe-infested clouds had a kind of slow intelligence to them — and if we ever get any evidence supporting that conceit I'll eat my cats.) But what they did show was that microbes affect the weather— and at the very least, that leaves the door open for all sorts of evil, powerful, yet-to-be-discovered bugs lurking overhead.

I like that thought.


Wednesday, November 21, 2007

The End of Art

This whole stem-cell breakthrough is certainly worth keeping track of, but not here because you know about it already; it's all over other sites far more popular than mine. Ditto the hilarious perspective on WoW which serves as the subject of today's visual aid, starring characters which many of us must know (albeit in roles with more contemporary fashion sense). No, today I'm going to direct your attention to neuroaesthetics, and the following question:

Have you ever seen an ugly fractal?

I haven't. I wouldn't hang every fractal I've ever seen in my living room (even during my Roger Dean phase) — but it wasn't the essential form that turned me off those iterations, it was the color scheme. And such schemes aren't intrinsic to the math; they're arbitrary, a programmer's decision to render this isocline in red and that in blue and not the other way around.

I would argue that fractals, as mathematical entities, are, well, appealing. Aesthetically. All of them. It's something I've batted around with friends and colleagues at least since the mid-eighties, and speaking as a former biologist it has a certain hand-wavey appeal because you can see how an appreciation of fractal geometry might evolve. After all, nature is fractal; and the more fractal a natural environment might be, the greater the diversity of opportunity. An endlessly bifurcating forest; a complex jumble of rocky geometry; a salt plain. Which environments contain more niches, more places to hide, more foraging opportunities, more trophic pathways and redundant backup circuits? Doesn't it make sense that natural selection would reward us for hanging out in complex, high-opportunity environments? Couldn't that explain aesthetics, in the same way that natural selection gave us* rhythm and the orgasm**? Couldn't that explain art?

Maybe. Maybe not. Because firstly (as I'm sure some of you have already chimed in), complex environments also contain more places for predators and competitors to hide and jump out at you. There are costs as well as benefits, and the latter better outweigh the former if fractophilia is going to take hold in the population at large. Also, who says all art is fractal? Sure, landscapes and still lifes. Maybe even those weird cubist and impressionist thingies. But faces aren't fractal; what about portraiture?

The obvious answer is that the recognition and appreciation of faces has got obvious fitness value too, and aesthetics is a big tent; nothing says "art" can't appeal to the fusiform gyrus as well as whatever Mandelbrot Modules we might prove to have. But now along comes this intriguing little paper (update 22/11 — sorry, forgot to add the link yesterday) in Network, which suggests that even though faces themselves are not fractal, artistic renditions of faces are; that artists tend to increase the aesthetic appeal of their portraits by introducing into their work scale-invariant properties that don't exist in the original. Even when dealing with "representational" works, evidently, true art consists of fractalizing the nonfractal.

What we're talking about, folks, may be the end of art as we know it. Go a little further down this road and every mathematician with a graphics tablet will be able to create a visual work that is empirically, demonstrably, beautiful. Personal taste will reduce to measurable variations in aesthetic sensibilities resulting from different lifetime experiences; you will be able to commission a work tweaked to appeal to that precise sensibility. Art will have become a designer drug.

Way back in the early seventies, a story from a guy called Burt Filer appeared in Harlan Ellison's Again, Dangerous Visions. It is called "Eye of the Beholder", and it begins thusly:

THE NEW YORK TIMES, Section 2, Sunday June 3rd by Audrey Keyes. Peter Lukas' long-awaited show opened at the Guggenheim today, and may have shaken confidence in the oldest tenet of art itself: that beauty is in the eye of the beholder. Reactions to his work were uncannily uniform, as if the subjective element had been removed...

Filer wrote his story before anyone even knew what a fractal was. (His guess was that aesthetics could be quantified using derivatives, a miscall that detracts absolutely nothing from the story.) "Beholder" wasn't his first published work; in fact, as far as I can tell, it may have been his last. (That would be fitting indeed.) I don't know if the man's even still alive.

But if you're out there, Burt: dude you called it.

*Well, some of us.
** Ditto.


Tuesday, October 9, 2007

The View From The Left

This is an ancient review article — about ten years old, judging by the references — but it contains an intriguing insight from split-brain research that I hadn't encountered before: The right hemisphere remembers stuff with a minimum of elaboration, pretty much as it happens. The left hemisphere makes shit up. Mr. Right just parses things relatively agenda-free, while the left hemisphere tries to force things into context.

The left hemisphere, according to Gazzaniga, looks for patterns. Ol' Lefty's on a quest for meaning.

I learned back in undergrad days that our brains see patterns even where none exist; we're pattern-matching machines, is what we are. But I hadn't realized that such functions were lateralized. This hemispheric specialization strikes me as a little reminiscent of "gene duplication": that process by which genetic replication goes occasionally off the rails and serves up two (or more) copies of a gene where only one had existed before. Which is very useful, because evolution can now play around with one of those copies to its heart's content, and as long as the other retains its original function you don't have to worry about screwing up a vital piece of a working system. (This is something the creationists hope you never learn, since it single-handedly blows their whole the-mousetrap-can't-work-unless-all-the-parts-evolve-simultaneously argument right out of the water.) Analogously, I see one hemisphere experimenting with different functions — imagination, the search for meaning— while the other retains the basic just-the-facts-ma'am approach that traditionally served the organism so well.

Anyway, for whatever reason, we've got a pragmatist hemisphere, and a philosopher hemisphere. Lefty, who imposes patterns even on noise, unsurprisingly turns out to be the source of most false memories. But pattern-matching, the integration of scattered data into cohesive working models of The Way Things Are — that's almost another word for science, isn't it? And a search for deeper meanings, for the reasons behind the way things are — well, that's not exactly formal religion (it doesn't involve parasitic social constructs designed to exploit believers), but it is, perhaps, the religious impulse that formal religion evolved to exploit. Which is getting uncomfortably close to saying that neurologically, the scientific and religious impulses are different facets of the same thing.

Yes, all those mush mouthed self-proclaimed would-be reconcilers have been saying that shit for decades. I still bet you never thought you'd read it here.

But bear with. A compulsion to find meaning and order. When there is a pattern to be found, and enough usable data to parse it, the adaptive significance is obvious: you end up using the stars to predict when the Nile is going to flood its banks. If there is no data, or no pattern, you find it anyway, only it's bogus: thunder comes from Zeus, and Noah surfed a tidal bore that carved out the Grand Canyon in an afternoon. Lefty talks in metaphors sometimes, so even when it gets something right it's not the best at communicating those insights— but that's okay, because Mr. Right is just across the hall, unsullied, unspecialized, picking up the slack.

Only what if, now, we're acquiring data that Mr. Right can't handle? The human brain is not designed to parse the spaces between galaxies or between quarks. The scales we evolved to handle extend up or down a few orders of magnitude, losing relevance at each iteration. Are things below the Planck length really, empirically more absurd than those at everyday classical scales, or is it just that brains shaped to function at one scale aren't very good at parsing the other?

Maybe this is where Lefty really comes into his own. Like the thermoregulating feather that got press-ganged, fully-formed, into flight duty, perhaps the bogus-pattern-matching, compulsive purpose-seeking, religious wetware of the brain is most suited for finding patterns it once had to invent, back before there were enough data available to justify such cosmological pretzel logic. Perhaps the next stage is to rewire Mr. Right in Lefty's image, turn the whole brain into a lateral-parsing parallel-processor. Perhaps the next stage of scientific enquiry can only be conveyed by speaking in tongues, practiced by colonies of monks whose metaphors must be parsed by the nonconscious modules of Siri Keeton and his synthesist siblinghood. Maybe the future is a fusion of the religious and the empirical.

Of course, the obvious rejoinder is: if all this late-breaking twenty-first-century data is enough to let the religious impulse do something useful for a change, why is it that religious fundamentalists are still such colossal boneheads? Why, if delusion has segued into profound insight, do half the Murricans out there still believe that the universe is six thousand years old? Why do two thirds of them believe in angels?

And the obvious answer is that, appearances notwithstanding, these people are not living in the twenty-first century at all, but the fourteenth. They walk among us locked into a cultural diving bell reeled out along the centuries, hermetically sealed, impervious to any facts or insights more recent than the spheroid Earth (or even older, in the case of at least one ignorant cow on The View). I can only wonder what would happen if somehow that brittle armor were to shatter, if all this real data were to wash over them and somehow penetrate the circuitry that informs their spastic gyrations and religious gibbering. Would they serve up a Theory of Everything? Would the rest of us recognize it if they did?

Probably no, and probably not. It's just idle speculation, smoke blown out my mind's ass. Still. Might be a story in it somewhere: the day when religion subsumed science, and It Was Good.

At least no one could accuse me of getting into a rut.


Thursday, September 13, 2007

The Skiffies...

Being the selection of a recent science item, hitherto unreported on this 'crawl, most near and dear to my heart.

Oddly, most of the items I've noticed recently seem reminiscent of my second book Maelstrom — from this tell-us-something-we-don't-know piece in the NY Times about the increasing fragility of complex technological systems to Naomi Klein's new book "The Shock Doctrine". Squinting at the news I can almost see the Complex Systems Instability-Response Authority gestating in the bowels of Halliburton; reading Klein's take on "disaster capitalism" I'm reminded of Marq Qammen's rant to Lenie Clarke about Adaptive Shatter: "...When damage control started accounting for more of the GGP than the production of new goods." Starfish may have been a more immersive novel; Blindsight may have had chewier ideas. But Maelstrom, I think, is way out front in terms of decent extrapolation.

Or there's this too-good-to-pass-up story out of Nature Neuroscience by way of the LA Times, in which a study combining button-pushing with the letters "M" and "W" showed that liberals are better at parsing novel input than conservatives, who have a greater tendency to fall into inflexible knee-jerk behaviors. (This would tend to explain, for example, how the inability to change one's mind in the face of new input can be regarded as a strength — "strong leadership" — while the ability to accommodate new information is regarded as "flip-flopping".) (Surprisingly, these findings have not been embraced by those who describe themselves as right-wing.)

But today's Skiffy has to go to this story in the Guardian, simply because it reflects so many facets of my own life (such as it is): marine mammals (in particular harbour porpoises, upon which I did my M.Sc.) are being infected by the mind-affecting parasite Toxoplasma gondii (whose genes were a vital part of "Guilt Trip" from the rifters novels, and which has been cited in this very crawl — May 6 2005), contracted from household cats (whose connection to mine own life you should all be aware of by now).

Marine Mammals. Rifters. Cats.

No other contender comes close.


Thursday, September 6, 2007

Do-It-Yourself Zombiehood

New to me, old to the lit: a paper in Trends in Cognitive Sciences, which came out last November (just a month after Blindsight was released): "Attention and consciousness: two distinct brain processes".

Let me cherry-pick a few choice excerpts: "The close relationship between attention and consciousness has led many scholars to conflate these processes." ... "This article ... argu[es] that top-down attention and consciousness are distinct phenomena that need not occur together" ... "events or objects can be attended to without being consciously perceived."

Yes, part of me shouts in vindication, while the rest of me whispers Oh your god, please no.

It's a review article, not original research. As such it cites some of the same studies and examples I drew on while writing Blindsight. But what especially interested me was the suggestion of mechanism behind some of those results. Both Blindsight and this blog cite studies showing that being distracted from a problem actually improves your decision-making skills, or that we are paradoxically better at detecting subtle stimuli in "noisy" environments than in "clean" ones. Koch and Tsuchiya cite a paper that describes this as a form of competition between neuron clusters:
"attention acts as a winner-takes-all, enhancing one coalition of neurons (representing the attended object) at the expense of others (non-attended stimuli). Paradoxically, reducing attention can enhance awareness and certain behaviors."

I like this. It's almost ecological. Predators increase the diversity of their own prey species by keeping the most productive ones in check; remove the starfish from a multispecies intertidal assemblage and the whole neighborhood turns to mussels inside a month. This is the same sort of thing (except it happens within a single brain and therefore tastes more of Lamarck than Darwin). Different functional clusters (the different prey species) duke it out for attention, each containing legitimate data about the environment— but only the winner (i.e., the mussels) gets to tell its tale to the pointy-haired boss. All that other data just gets lost. And the static that paradoxically improves performance in such cases — white noise, or irrelevant anagrams that steal one's focus — plays the role of the predator, reducing the advantage of the front-runner so that subordinate subroutines can get their voices heard.
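
If you like your ecology in silico, here's a toy sketch of that dynamic — my own illustration, not Koch and Tsuchiya's model, and every number in it is made up. A few coalitions of different strengths compete winner-take-all; without noise the front-runner monopolizes the boss's attention, and a shot of noise lets the subordinates through:

```python
import random

def winner(signals, noise=0.0, rng=random):
    """Winner-take-all: only the strongest (post-jitter) coalition reports."""
    jittered = [s + rng.gauss(0, noise) for s in signals]
    return jittered.index(max(jittered))

rng = random.Random(42)
signals = [1.0, 0.9, 0.8]   # three coalitions; #0 is the front-runner

# No noise: coalition 0 wins every single time.
quiet = [winner(signals) for _ in range(1000)]

# With noise ("distraction"), subordinate coalitions get heard sometimes.
noisy = [winner(signals, noise=0.3, rng=rng) for _ in range(1000)]
print(quiet.count(0), sorted(set(noisy)))
```

Same punchline as the starfish: knock the dominant signal down a peg and the diversity of what actually gets reported goes up.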

I wonder. If we trained ourselves to live in a state of constant self-imposed distraction, could we desentientise our own brains...?


Tuesday, May 29, 2007

Anyone with half a brain could tell it.

Via Futurismic, an accessible piece from Scientific American on radical hemispherectomies, an operation which readers of Blindsight will recognise as the defining moment in the depersonalisation of the young Siri Keeton.


Sunday, May 20, 2007

How to Build a Zombie Detector

A fair number of topics jostling for attention lately: slime moulds outfitted with skittish cyborg exoskeletons, Jim Munroe's microbudget megasavvy take on nanotech, even this recent research on free will in fruit flies (which I'm wary of, but am holding off commenting upon until I've thoroughly read the original paper). And I'm in bitten-off-more-than-I-can-chew mode at the moment, so I don't have time to put all that stuff on the crawl right now. But there is one thing that struck me like a bolt from the blue (except it was actually a bolt from an e-mail server) late last night, as I was trying to clear away the e-mail backlog:

Zombie detectors.

There's this guy, allegedly named Nick Alcock, who seems to know way more than he admits to. He first ruined my morning back in March by pointing out that if vampires really needed to eat people because they couldn't synthesise gamma-Protocadherin-Y on their own, and if they needed that protein because it was so damned critical for CNS development, then women shouldn't have working brains because the gene that codes for it is located on the Y chromosome. It was a shot across the bow I could not resist; we're still going at it two months later.

One of the things we've veered into lately is the classic philosopher-wank question: if you've got a nonconscious zombie that natural selection has nonetheless shaped to blend in — to behave as though it were conscious (we're talking the classic philosopher zombie agent here, not the fast killer-zombies under discussion a few days ago) — how could you detect it? More fundamentally, why would you bother? After all, if it behaves exactly like the rest of us, then the fact that it's nonconscious makes no real difference; and if it does behave differently, then consciousness must have some impact on the decision-making process, findings about after-the-fact volition notwithstanding. (The cast of Blindsight mumble about this dilemma near the end of the book; it's basically a variant on the whole "I know I'm conscious but how do I know anyone else is" riff.)

So this Alcock dude points out that if I'm right in my (parroted) claims that consciousness is actually expensive, metabolically, then zombie brains will be firing fewer synapses and burning through less glucose than would a comparable baseline human performing the same mental tasks. And that reminded me of a paper I read a few years back which showed that fast thinkers, hi-IQ types, actually use less of their brains than the unwashed masses; their neural circuitry is more efficient, unnecessary synapses pared away.

Zombie brains run cooler than ours. Even if they mimic our behavior exactly, the computational expense behind that behavior will be lower. You can use an MRI to detect zombies!

Of course, now Nick has turned around and pointed out all the reasons that would never work, because it is his sacred mission in life to never be satisfied. He's pointing out the huge variation in background processing, the minuscule signal one would have to read against that, the impossibility of finding a zombie and a sentry (trademark!) so metabolically identical that you could actually weed out the confounds. I say, fuck that. There are places where the conscious and subconscious minds interface: I say, look at the anterior cingulate gyrus (for example), and don't bother with crude glucose-metabolism/gas-mileage measures. There's gotta be some telltale pattern in there, some trademark spark of lightning that flickers when the pointy-haired boss sends a memo. That's what you look for. The signature of the ball and chain.

Of course, it won't be enough for this Alcock guy. He's bound to find some flaw in that response. He always does.

Maybe I just won't tell him.


Thursday, May 10, 2007

The Uplift Protein

Neuropsin, that is. A prefrontal-cortex protein involved in learning and memory. There's this one variant that's peculiar to us Humans, 45 amino acids longer than the standard model handed out to other primates, and a team of Chinese researchers have just nailed the gene that codes for it. And the really cool part? Utterly ignoring all those some-things-man-was-not-meant-to-know types, they spliced the causal mutation into chimpanzee DNA, which then started to synthesise the type-II variant. No word yet on how far they let that stretch of code iterate. No word on how many months away we are from building chimps with human-scale intelligence.

The actual paper isn't out yet. But I'm really hoping my U of T library access is still active when Human Mutation prints the details.


Saturday, May 5, 2007


You may have seen this already. It's been out for a few days now. And at first glance it's nothing special: technology controlled by brainwaves through an EEG electrode interface, which is so far behind the cutting edge that you'll be finding it in games before the end of the year. But check out this quote as to why, exactly, the military would even want to develop brain-activated binoculars:

The idea is that EEG can spot "neural signatures" for target detection before the conscious mind becomes aware of a potential threat or target ... In other words, like Spiderman's "spider sense," a soldier could be alerted to danger that his or her brain had sensed, but not yet had time to process.

So. Another end run around the prefrontal cortex in the name of speed and efficiency. I'm telling you, nobody likes the pointy-haired boss these days...


Wednesday, May 2, 2007

Consciousness, Learning, and Neurochips

I'm starting this new post both to take the weight off the old one (which is growing quite the tail — maybe I should look into setting up a discussion forum or something), and also to introduce a new piece of relevant research. Razorsmile said

Conscious trains the subconscious until it is no longer needed.

And then Brett elaborated with

that could be how conscious thought is adaptive. It doesn't do anything even remotely well, but it can do anything. It is the bridge between something you've never done before and something that you do on skill.

To which I'll say: sure, that's certainly how it seems subjectively. But I have three flies to stick in that ointment:

1. Given the existence of consciousness to start with, what else could it feel like? Supposing it wasn't actually learning anything at all, but merely observing another part of the brain doing the heavy lifting, or just reading an executive summary of said heavy lifting? It's exactly analogous to the "illusion of conscious will" that Wegner keeps talking about in his book: we think "I'm moving my arm", and we see the arm move, and so we conclude that it was our intent that drove the action. Except it wasn't: the action started half a second before we "decided" to move. Learning a new skill is pretty much the same thing as moving your arm in this context; if there's a conscious homunculus watching the process go down, it's gonna take credit for that process — just like razorsmile and Brett just did — even if it's only an observer.

2. Given that there's no easy way to distinguish between true "conscious learning" and mere "conscious pointy-haired-boss taking credit for everyone else's work", you have to ask, why do we assume consciousness is essential for learning? Well, because you can't learn without being con--

Oh, wait. We have neural nets and software apps that learn from experience all the time. Game-playing computers learn from their mistakes. Analytical software studies research problems, designs experiments to address them, carries out its own protocols. We are surrounded by cases of intellects much simpler than ours, capable of learning without (as far as we know) being conscious.

3. Finally, I'd like to draw your attention to this paper that came out last fall in Nature. I link to the pdf for completists and techheads, but be warned— it's techy writing at its most opaque. Here are the essential points: they stuck neurochips into the brains of monkeys that would monitor a neuron here and send a tiny charge to this other neuron over there when the first one fired. After a while, that second neuron started firing the way the first one did, without further intervention from the chip. Basically, the chip forces the brain to literally rewire its own connections to spec, resulting in changes to the way the monkeys move their limbs (the wrist, in this case).
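
The chip's job loop, as described, is almost embarrassingly simple. Here's a cartoon of the idea in code — toy numbers and thresholds of my own, nothing from the actual paper: watch neuron A, zap neuron B whenever A fires, and let fire-together-wire-together strengthen the A-to-B connection until B follows A even with the chip switched off.

```python
import random

rng = random.Random(1)
weight = 0.05        # initial A->B synaptic strength (arbitrary units)
THRESHOLD = 0.5      # B fires when its input drive exceeds this
LEARN_RATE = 0.02    # strengthening per paired A-fire/chip-zap event

def b_fires(a_fired, chip_on):
    """B's drive: synaptic input from A, plus the chip's zap (paired with A)."""
    drive = weight * a_fired + (1.0 if (chip_on and a_fired) else 0.0)
    return drive > THRESHOLD

# Conditioning phase: the chip zaps B every time A fires, so A and B
# fire together; each pairing strengthens the connection (Hebbian learning).
for _ in range(100):
    a_fired = rng.random() < 0.5
    if a_fired and b_fires(a_fired, chip_on=True):
        weight += LEARN_RATE

# Test phase: chip off. B now follows A on its own.
print(b_fires(a_fired=True, chip_on=False))
```

The real mechanism is spike-timing-dependent plasticity across a population, not a single scalar weight, but the logic of the intervention is the same: the chip just manufactures correlations and lets the brain's own learning rule do the rewiring.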

They're selling it as a first step in rehabilitating people with spinal injuries, impaired motor control, that kind of thing. But there are implications that go far further. Why stop at using impulses in one part of the brain to reshape wiring in another? Why not bring your own set of input impulses to the party, impose your new patterns from an outside source? And why stop at motor control? A neuron is a neuron, after all. Why not use this trick to tweak the wiring responsible for knowledge, skills, declarative memory? I'm looking a little further down this road, and I'm seeing implantable expertise (like the "microsofts" in William Gibson's early novels). I'm looking a little further, and seeing implantable political opinions.

But for now, I've just got a question. People whose limbs can be made to move using transcranial magnetic stimulation sometimes report a feeling of conscious volition: they chose to move their hand, they insist, even though it's incontrovertible that a machine is making them jump. Other people (victims of alien hand syndrome, for example) watch their own two hands get into girly slap-fights with each other and swear they've been possessed by some outside force — certainly they aren't making their hands act that way. So let's say we've got this monkey, and we're rewiring his associative cortex with new information:

Does he feel as if he's learning in realtime? Can he feel crystalline lattices of information assembling in his head (to slightly misquote Gibson)? Or is the process completely unconscious, the new knowledge just there the next time it's needed?

I bet we'd know a lot more about this whole consciousness thing, if we knew the answer to that.


Friday, April 27, 2007

Blindsight (the malady, not the book): better than the other kind?

Now here's a fascinating study: turns out that victims of blindsight can see better than so-called "healthy" individuals. At least, one fellow with a patchy version of the condition was able to detect subtler visual cues in his blind field than in his sighted one. (Here's the original paper: here's a summary.) This suggests that certain "primitive" traits in our neurological evolution didn't so much disappear as get ground beneath the boots of more recent circuitry, and that — once released from those Johnny-come-lately overlays — they come off the leash. And primitive or not, they're better than what came after.

Or in other words, once again, the reptile brain could really shine if the pointy-haired homunculus would just get the hell out of the way.

I wrote a story back in the nineties with a similar punchline — that the hindbrain was still alive in its own right, still potentially autonomous, and that only after the neocortex had died was it able to wake up, look around, and scream in those last brief moments before it too expired. But now I'm thinking I didn't go far enough — because after all, who's to say the reptile brain has to die when the upper brain does? I mean sure, we've got the Terry Schiavos and the other fleshy rutabagas of the world, clusters of organs and bed sores on life support. But we've also got the schizophrenics, who hear voices and won't meet our eyes and whose frontal lobes are smaller than most would consider normal. And most frighteningly of all, we've got these other folks, people with heads full of fluid, mid- and hindbrains intact, cerebra reduced to paper-thin layers of neurons lining the insides of empty skulls — wandering through life as engineers and schoolteachers, utterly unaware of anything at all out of the ordinary until that fateful day when some unrelated complaint sends them into an MRI machine and their white-faced doctors say, Er, well, the good news is it can't be a brain tumor because...

There's a range, in other words. You don't need anywhere near a complete brain to function in modern society (in fact, there are many obvious cases in which having a complete brain seems to be an actual disadvantage). And in a basic survival sense, the ability to write and appreciate the music of Jethro Tull and do other "civilised" things aren't really that important anyway.

So now I'm thinking, tewwowist virus: something engineered to take out higher brain functions while leaving the primitive stuff intact. Something that eats away at your cognitive faculties and lets your inner reptile off the leash, something that strips your topheavy mind down to its essentials, something that speeds your reflexes and cranks your vision even as it takes the light from your eyes.

I'm thinking zombies. Not the shuffling Romero undead or the sentient philosopher's metaphor, not even the drug-addled brain-damaged pseudoresurrectees of the real-world Caribbean. I'm thinking something faster and more rigorous and more heartbreaking, far more dangerous and far tougher to kill, and I'm thinking hey, if I can do it for vampires...

I'm also thinking of writing another book.
