Consciousness, Learning, and Neurochips
I’m starting this new post both to take the weight off the old one (which is growing quite the tail – maybe I should look into setting up a discussion forum or something), and also to introduce a new piece of relevant research. Razorsmile said
Conscious trains the subconscious until it is no longer needed..
And then Brett elaborated with
that could be how conscious thought is adaptive. It doesn’t do anything even remotely well, but it can do anything. It is the bridge between something you’ve never done before and something that you do on skill.
…to which I’ll say, sure, that’s certainly how it seems subjectively. But I have three flies to stick in that ointment:
1. Given the existence of consciousness to start with, what else could it feel like? Supposing it wasn’t actually learning anything at all, but merely observing another part of the brain doing the heavy lifting, or just reading an executive summary of said heavy lifting? It’s exactly analogous to the “illusion of conscious will” that Wegner keeps talking about in his book: we think “I’m moving my arm”, and we see the arm move, and so we conclude that it was our intent that drove the action. Except it wasn’t: the action started half a second before we “decided” to move. Learning a new skill is pretty much the same thing as moving your arm in this context; if there’s a conscious homunculus watching the process go down, it’s gonna take credit for that process — just like razorsmile and brett just did– even if it’s only an observer.
2. Given that there’s no easy way to distinguish between true “conscious learning” and mere “conscious pointy-haired-boss taking credit for everyone else’s work”, you have to ask, why do we assume consciousness is essential for learning? Well, because you can’t learn without being con–
Oh, wait. We have neural nets and software apps that learn from experience all the time. Game-playing computers learn from their mistakes. Analytical software studies research problems, designs experiments to address them, and carries out its own protocols. We are surrounded by cases of intellects much simpler than ours, capable of learning without (as far as we know) being conscious.
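For anyone who hasn't watched one of these mindless learners in action, here's the sort of thing I mean: a bare-bones perceptron learning logical AND from its mistakes. (This is my own toy illustration, not anything from the research in question; every name and constant is invented.)

```python
# A minimal perceptron learning AND from examples -- "learning" with
# no consciousness anywhere in sight.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # synaptic weights, if you like
b = 0.0          # bias
LR = 0.1         # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Learn from mistakes: nudge the weights whenever a prediction is wrong.
for _ in range(20):
    for x, target in data:
        err = target - predict(x)
        w[0] += LR * err * x[0]
        w[1] += LR * err * x[1]
        b += LR * err

print([predict(x) for x, _ in data])  # [0, 0, 0, 1] -- it got AND right
```

Twenty lines, no homunculus, and it still goes from wrong answers to right ones purely on feedback.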
3. Finally, I’d like to draw your attention to this paper that came out last fall in Nature. I link to the pdf for completists and techheads, but be warned – it’s techy writing at its most opaque. Here are the essential points: they stuck neurochips into the brains of monkeys, chips that would monitor a neuron here and send a tiny charge to this other neuron over there when the first one fired. After a while, that second neuron started firing the way the first one did, without further intervention from the chip. Basically, the chip forces the brain to literally rewire its own connections to spec, resulting in changes to the way the monkeys move their limbs (the wrist, in this case).
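For concreteness, here's a crude caricature of that mechanism: the "chip" pairs neuron A's firing with a jolt to neuron B, and a simple Hebbian rule strengthens the A-to-B connection until B follows A unaided. (This is a sketch of the general idea, not the paper's actual protocol; all the names and constants below are made up.)

```python
# Toy spike-pairing model: while the chip is on, every spike in A is
# paired with a stimulus to B; cells that fire together wire together,
# so the A->B weight creeps up until B fires on A's say-so alone.

w_ab = 0.05          # initial A->B synaptic weight (weak)
THRESHOLD = 0.5      # B fires if its total drive exceeds this
LEARN_RATE = 0.1
CHIP_STIM = 1.0      # charge the chip injects into B when A fires

def run_trial(chip_on):
    """One trial: A fires; return True if B fires too."""
    global w_ab
    drive = w_ab + (CHIP_STIM if chip_on else 0.0)
    b_fired = drive > THRESHOLD
    if b_fired:
        # Hebbian update: A and B fired together, so strengthen A->B
        w_ab = min(1.0, w_ab + LEARN_RATE * (1.0 - w_ab))
    return b_fired

# Conditioning phase: chip active, pairing A's spikes with stimulation of B
for _ in range(30):
    run_trial(chip_on=True)

# Test phase: chip removed -- B now follows A on its own
print(run_trial(chip_on=False))  # True: the rewiring has "taken"
```

The point of the exercise: nothing in that loop cares where the pairing signal comes from, which is exactly why the implications below are worth worrying about.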
They’re selling it as a first step in rehabilitating people with spinal injuries, impaired motor control, that kind of thing. But there are implications that go far further. Why stop at using impulses in one part of the brain to reshape wiring in another? Why not bring your own set of input impulses to the party, impose your new patterns from an outside source? And why stop at motor control? A neuron is a neuron, after all. Why not use this trick to tweak the wiring responsible for knowledge, skills, declarative memory? I’m looking a little further down this road, and I’m seeing implantable expertise (like the “microsofts” in William Gibson’s early novels). I’m looking a little further, and seeing implantable political opinions.
But for now, I’ve just got a question. People whose limbs can be made to move using transcranial magnetic stimulation sometimes report a feeling of conscious volition: they chose to move their hand, they insist, even though it’s incontrovertible that a machine is making them jump. Other people (victims of alien hand syndrome, for example) watch their own two hands get into girly slap-fights with each other and swear they’ve been possessed by some outside force – certainly they aren’t making their hands act that way. So let’s say we’ve got this monkey, and we’re rewiring his associative cortex with new information:
Does he feel as if he’s learning in realtime? Can he feel crystalline lattices of information assembling in his head (to slightly misquote Gibson)? Or is the process completely unconscious, the new knowledge just there the next time it’s needed?
I bet we’d know a lot more about this whole consciousness thing if we knew the answer to that.
First, to answer your question:
I’m learning to play guitar. The “just there” sensation is something people can reproduce already with some aspects of some complex motor skills. In my particular case, I could not play chords properly, if at all. Then one day, I could. It did not feel as though I had learned something; it felt as though the task itself had become easier. Result: a clear-sounding chord. This is in contrast to my experience learning clarinet, which to my memory was a fairly continuous progression.
Another example: The Buell Blast motorcycle is particularly difficult to accelerate from a dead stop without stalling the engine, on account of it only having one cylinder. As I learned to ride it, I stalled a lot, until one morning I got it right. It felt to me at the time as though someone had come over and secretly installed an auto-clutch. But I knew the motorcycle was the same, so I reasoned after the fact that I must have learned something.
I imagine implanted skills would feel exactly the same way.
Second, why is your blog so incredibly narrow? It uses about a third of the available space on my monitor. Just look.
razorsmile sez: Hmm.
I should clarify. What I meant to type, before my hands outpaced my brain (meta-topical, no? :)) was:
Consciousness trains the subconscious until it (i.e. consciousness) is no longer needed.
Ergo, its mission statement, much like MDs and medical researchers, is to put itself out of a job.
Oddly enough, consciousness being one possible mode of learning doesn’t preclude the aconscious modes.
ar: The template our gracious host has chosen has a fixed width for its body style. Changing that is a mere moment’s CSS hackery.
And to provide another answer to your question — In what way is it relevant to know how it feels to be forced to have learned? I don’t think we can trust any self-report about consciousness, because it’s impossible to know whether that self-report is selfishness on the homunculus’ part.
Although that, itself, raises an interesting point: is consciousness less maladaptive if it is more aware of its own extraneousness? If a race of homo sapiens arose that was conscious, but recognized — understood, grokked — that it was mostly a creature of instinct, would it outcompete the rest of us in any meaningful way?
I didn’t say it guv, but I did say that I liked it.
I have the impression (I’m like warm wax when it comes to the latest hypothesis) that memory will be highly significant in studies of consciousness, as is vision.
As with seeing, in remembering the brain takes scraps and assembles a rich-seeming but often inaccurate picture. Reading in a recent issue of New Scientist (Mar 24), I noted in an article on memory and anticipation (“Future Recall”, pgs. 36-40) that almost exactly the same portions of the brain light up in anticipating the future as when recalling the past. It seems that declarative memory (episodic – “I was at work yesterday” – and semantic – “Paris is the capital of France”) is essentially/substantially the same as anticipation (“I will be at work this afternoon”).
Anticipation – planning specific moves ahead, as opposed to merely reacting – is regarded as a major indicator of intelligence, and, I guess, consciousness. As a species we’re very much obsessed with games of strategy in which we have to think ahead of the opposition, form a model of what they’re thinking (Theory of Mind) and trick them. In fact (gratuitous phrase in brackets), some theories of the evolution of language hold that we learned to speak so that we could lie.
Yes, a Chinese Room or a zombie can lie and dissimulate, but anyone who knows why the Heimlich Manoeuvre is damn handy knows also that nature isn’t always optimal. Sometimes a suboptimal premise can be used well: cephalopods have copper-based blood that is less efficient than iron-based in carrying oxygen, but it’s less viscous and flows faster.
Reading in another book, Damasio’s The Feeling of What Happens, consciousness itself is stripped down to a tiny pinpoint of awareness supported by all the apparatus of cognition, which makes it seem large when it is engaged, rather like the pointy-haired boss who thinks he’s smart because he has all these clever people working for him.
Anyway, vision, memory and anticipation as we experience them seem rich and highly detailed but are objectively wrong. As our patron has said in Blindsight, however, the brain is not a truth-sensing machine, it’s a survival machine: all sorts of gaps can be pointed out in consciousness, but a lot of those gaps are there because they don’t need to be filled. The brain/mind gets on perfectly well on its own terms. We need not think of gaps as flaws that undermine the validity or worth of consciousness, but look at how well it works as a system building itself up from minimal points of data.
Possibly, consciousness began as a simple solution to the problem of anticipation or sensory perception and then started generating the predecessors of culture and aesthetics and because culture (in broad meaning) involves deceit, an arms race of sorts proceeded and we’re left with our hugely inflated sense of selfness now.
This of course does not rule out the appearance of swifter, smarter beings that are unconscious. New Zealand, my home, has a rich diversity of birds filling many ecological niches comparable to those filled by mammals elsewhere: we have a flightless parrot rather like a rabbit and an alpine parrot that’s very intelligent and playful (and it might well pine for the fjords if taken away)… or I should use the past tense: a lot of them became endangered or extinct when humans and rats and cats arrived. That analogy/point was made in Blindsight and it was well taken.
Consider further the raven. A series of experiments to test their intelligence was covered in a recent issue of Scientific American. Ravens clearly demonstrate possession of several advanced cognitive abilities, including theory of mind, advanced planning, and the attributing of knowledge and motives to particular individuals, including humans.
The article went on to contemplate why they would have such intelligence. The theory is that their general-purpose intellect developed in response to the complex and ever-changing environment presented by other thinking creatures: the various predators of the world that may or may not respond violently to a raven flying down to snack on their kills, and eventually, other ravens constantly trying to pull fast ones on them. (Some of the experiments involved problems a bird would never encounter in the wild, which they nonetheless solved. The most interesting part, though, is that in many cases the ravens simply looked at the situation for a while, then performed the whole series of actions required to get the food in one go.)
While trying to find out if the article in question was available online (it is not), I did find this other interesting bit, which says that, among bird species, larger brains correlate with higher individual survival rates and broader range.
Ah yes, the raven. Actually, the aforementioned alpine parrot, the kea, scores very highly in whatever problem-solving tests have been tried (I can’t think of any specific links offhand). It’s smart, curious, versatile, playful, omnivorous… should do well, except that its habitat is limited and it hasn’t coevolved with rats and cats, and while not endangered, it is listed as vulnerable.
Irene Pepperberg’s work with Alex the African grey parrot suggests formidable intelligence – in crude terms, I think that “equivalent to a five-year-old human” has been said.
Have you read Dougal Dixon’s “After Man”? It posits human extinction followed by fifty million years of evolution by surviving species. Descendants of rats and other “pest” rodents occupy many niches. I was a bit disappointed that he didn’t devote much attention to the birds (though the penguin-whale was pretty damn cool), particularly corvids.
I know that if you replay the evolutionary tape, nothing will repeat exactly, but if humans became extinct, then descendants of the ravens would probably be more likely to develop some kind of world-spanning technological civilisation than any of the apes or dolphins (assuming that we don’t send them into extinction before we follow). They’re widespread, versatile omnivores and scavengers, they’re tough and used to dealing with dangerous predators, they can take on specialised roles within flocks…
There’s a neat low-key short story by Brian Stableford called “The Unkindness of Ravens”, which has just-slightly-tweaked ravens with adult human-level intellects and speech. It deals with the dialogue between a genetic engineer who created the new ravens and their representative as they negotiate an accommodation with each other.
Stableford’s ravens are quite vulnerable, in their first generations at least, but a Wellsian scenario of a “coming beast” competitor species wouldn’t be too difficult to extrapolate. We might not even notice a ubiquitous pest seeming to adapt ever better to the conditions of our civilisation until there’s nothing we can do about it. Nice idea for a story, perhaps. Now, vampire ravens… (cutesy fucking icon redundant)
ar asks:
Second, why is your blog so incredibly narrow?
fraxas is right– I could widen the column easily enough. I left it the way it was when I made the other changes because, well, it was a preselected template and I just assumed that the default width would be the best for a wide range of screen rezes. (It certainly looks okay on mine.) I can change it if enough people complain about the current layout. We aim to please.
Hey, frax. (May I call you Frax?)
In what way is it relevant to know how it feels to be forced to have learned? I don’t think we can trust any self-report about consciousness, because it’s impossible to know whether that self-report is selfishness on the homunculus’ part.
Point taken. But I think the subjective sensation experienced when getting knowledge implants might tell us something about how recursive the system is: is it merely accessing information, or is it modelling the process of acquisition? Or maybe not. I guess I’d just like to know.
is consciousness less maladaptive if it is more aware of its own extraneousness? If a race of homo sapiens arose that was conscious, but recognized — understood, grokked — that it was mostly a creature of instinct, would it outcompete the rest of us in any meaningful way?
Such creatures may already walk among us, if you buy some of the more recent perspectives on psychopaths. While the rest of us bend over backwards to try and rationalise our stickleback behaviors (we’re not fighting over resources, we’re “spreading democracy”; we’re not being pack animals, we’re “electing a charismatic leader”), sociopaths can’t be bothered with all that analysis; they’re just out for personal gain, and therefore have a much wider range of tools at their disposal than “ethical” people do. (I went on about this in one of the postings on the old crawl; scroll down to Oct 14 – sorry, there’s no permalink.)
I doubt that such creatures reached their position through any deep introspection; they probably don’t think about such things at all, they just act. Which proves the point. And it has been argued that sociopaths are outcompeting us, both reproductively and in terms of their disproportionate accumulation in powerful professions such as politics and medicine.
brett said:
We need not think of gaps as flaws that undermine the validity or worth of consciousness, but look at how well it works as a system building itself up from minimal points of data.
That works fine as long as the gaps evolution decides it can safely “ignore” don’t change over time. But if your environment changes sufficiently, those gaps can hide monsters…
ar pointed out that
among bird species, larger brains correlate with higher individual survival rates and broader range.
That’s really interesting. Judging by the summary, they seem to have corrected for the confounds that sprang to my mind, anyway. Don’t have time to read the original paper right now, but I’ve bookmarked the reference.
brett said
Irene Pepperberg’s work with Alex the African grey parrot suggests formidable intelligence – in crude terms, I think that “equivalent to a five-year-old human” has been said.
I’ve always found it fascinating and counterintuitive that the nonhuman species showing the consistently greatest language skills are the relatively small-brained birds, and *not* other primates.
if humans became extinct, then descendants of the ravens would probably be more likely to develop some kind of world-spanning technological civilisation than any of the apes or dolphins
I dunno. Not to take away from any of the mental accomplishments of the corvids (which are pretty damn impressive), but the necessity for a flying creature to minimise body mass would put a pretty strict upper limit on brain size. Even if they became flightless, there are limits: brains are very expensive metabolically, and birds already have a high basal metabolic rate coupled with an inefficient oxygen-delivery system (their red blood cells are biconvex and nucleated, which reduces their effective gas-exchange area).
Not to take away from any of the mental accomplishments of the corvids (which are pretty damn impressive), but the necessity for a flying creature to minimize body mass would put a pretty strict upper limit on brain size.
If we have normal people whose brains are mostly fluid, then perhaps brains don’t NEED to be as large as ours. The problem with every part of one’s brain being used to maximum capacity, though, is that you have to accept the fact that if you get even a minor concussion, you are going to lose some important mental function.
Surely you remember those tool-using hominids with very small brains. So apparently there is a distinction between small brains and brains that were once large but then became small. To wit, miniaturization seems to preserve at least some functions only developed during the larger state. Perhaps ravens just bypassed all that nonsense and had the good fortune of the right selective pressures to miniaturize right from the start, resulting in a brain that gradually became more and more powerful but did not become larger, because becoming larger would have made it harder to fly – an unacceptable compromise at the time. I do wonder if they’re more energy-efficient for the same amount of “processor time” as well.
ar sed:
If we have normal people whose brains are mostly fluid, then perhaps brains don’t NEED to be as large as ours. The problem with every part of one’s brain being used to maximum capacity, though, is that you have to accept the fact that if you get even a minor concussion, you are going to lose some important mental function.
Whoa. That’s a very good point. We already know a lot of the things we considered to be human-only skills can be executed by non-sentient programs, neural nets and a wide variety of animals, all of which have smaller brains than ours.
Perhaps the price of having smaller, (presumably) more efficient brains is a lack of redundancy. We, Homo sapiens, can canonically survive strokes, stop signs through the head and a veritable cornucopia of cognitive/visual/sensory defects with our intellect functionally intact.
*That* could be the reason the poor bastards went extinct, not because we were smarter or reproduced quicker but because our brains were tougher than theirs.
ar: “Surely you remember those tool-using hominids with very small brains. So apparently there is a distinction between small brains and brains that were once large but then became small. To wit, miniaturization seems to preserve at least some functions only developed during the larger state. Perhaps ravens just bypassed all that nonsense and had the good fortune of the right selective pressures to miniaturize right from the start…”
That is a really interesting distinction. I wonder what elements were stripped away in the process of fitting the more efficient intelligent brain into its smaller casing. Consciousness, perhaps?
I’m not as ready to buy into your raven suggestion, though, because I’m thinking that the kind of out-of-the-gate efficiency of which you speak would probably be reflected in significant differences in brain architecture (or at the very least, neural density/distribution). And if these guys do have radically different architecture, it’s certainly slipped under my reading radar. (Which, granted, happens a lot these days…)