Meta²
“Get used to disappointment.”
—The Dread Pirate Roberts
First, the PSA: Yeah, Freeze-Frame has evidently made the finals for the Campbell. Given its cohabitation with nine other worthy finalists, I’m not holding my breath. Realistically, I expect FFR will not win the Campbell a full day before it doesn’t win the Locus. On the plus side, it has already won something called the Nowa Fantastyka Award for Best Foreign Novel over in Poland, an honor of which my Polish publishers have, oddly, yet to inform me (I only found out about it while egosurfing). I’m told they took the trophy home, though.
The Poles. They never let me down.
But it is none of these things that I mainly write about today. Today I’m focusing on a whole other species of tribute, and it involves AI.
*
Back when I was doing research for “The Wisdom of Crowds”, I poked around amongst various articles on deep learning and textbots. These included Sam Gallagher’s recent Ars Technica piece, which introduced me to OpenAI’s GPT-2: a textbot which devours the souls of FDA reports and Clinton speeches and Amazon product reviews, and channels it all back into output running the gamut from uncanny—
According to a study published by the Institute of Medicine, an estimated 400,000 people die from transfusions every year, mostly due to an array of diseases, from HIV infection to Type 2 diabetes. At age 24, nearly 60 percent of these deaths are caused by transfusions, even though there is a significant genetic and physical impairment which results in over-fatal events such as heart attacks, stroke or stroke-related strokes.1
—to downright Trumpian—
GOAT DICK-IN-THE-BOY BOY
GREAT BOY
GREAT-GOAT-GOAT-GOAT-GOAT-GOAT-GOAT-GOAT-GOAT-GOAT-GOAT- GOAT
GOAT-GOAT-GOAT-GOAT-GOAT-GOAT-GOAT-GOAT-GOAT-GOAT-GOAT-GOAT-GOAT-GOAT2
—to somewhere in between:
Kim Jong-un, the leader of North Korea and most closely aligned with the United States, has warned of an imminent U.S. attack. The test of a hydrogen bomb Thursday killed 13 people and injured several others in a Pyongyang explosion, the country’s state TV station reported.<|endoftext|>Coconut Cream
Put aside the false claims of H-bombs in Pyongyang et al. If stating falsehoods were enough to fail a Turing Test, you’d be able to count the entire sapient population of the Internet on your fingers and toes— and besides, the whole point of deepfakery is to sheathe lies in an aura of verisimilitude. Give a pass also to that Coconut-cream glitch and its kin, since the Ars Technica output was generated not by GPT-2 itself but by a lobotomized variant running a mere 117 million parameters; the fully-fledged program (which handles 1.5 billion parameters) isn’t nearly so likely to commit such obvious mistakes. (That’s the very reason why Ars didn’t have access to it, in fact: OpenAI has refused to release it because it’s too good, could too easily be used for nefarious purposes.)
These deep-learning text-generating algos are getting asymptotically close to real-world iterations of Searle’s Chinese Room. So it was probably only a matter of time before someone, in an act of supreme metaness, applied one of them to Blindsight.
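For anyone who’s never poked at one of these: GPT-2 is a transformer running (in its full form) 1.5 billion learned parameters, but the basic loop— look at the text so far, pick a plausible next token, repeat— is the same one that drove the crude Markov-chain textbots of decades past. A minimal sketch of that older loop in Python (the function names and the toy corpus are my own illustration, nothing to do with OpenAI’s code):

```python
import random
from collections import defaultdict

def build_model(text, order=1):
    """Map each n-word context to the words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model[context].append(words[i + order])
    return model

def generate(model, length=10, seed=None):
    """Babble: start from a random context, repeatedly sample a successor."""
    rng = random.Random(seed)
    order = len(next(iter(model)))
    out = list(rng.choice(list(model.keys())))
    for _ in range(length):
        successors = model.get(tuple(out[-order:]))
        if not successors:          # dead end: context never seen mid-text
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = ("Imagine you're a synthesist. Imagine you're a vampire. "
          "Imagine you're Siri Keeton.")
model = build_model(corpus, order=1)
print(generate(model, length=10, seed=42))
```

Feed it the full text of Blindsight instead of three sentences and you get locally plausible, globally empty Wattsiana; what GPT-2 adds is a learned, context-sensitive probability distribution over the next token instead of these raw lookup tables.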
“Someone” turns out to be Cody Raskin, an astrophysicist working out of Lawrence Livermore. He fed Blindsight into a Google deep learning network and told it to write a sequel. Total output weighs in at almost 1400 words and is worth checking out in its entirety, but here are a few excerpts for the time-constrained:
I thought as Sarasti fell back on old tricks and countermeasures: I pushed out an arm from the shuttle and watched it slacken, the tendrils trailing like the tail of a fish behind the rack. I heard him grunt.
The shuttle was aloft now. It swung quickly down the spine and disappeared. A low clatter to one side had the echo of a thousand arms flexing before my gaze. A second slipped by, almost imperceptible, like the heartbeat of a mosquito on a wire.
One of the arms flared and split open. Rorschach resisted my reflexive gaze; I had to keep my gaze fixed on my post-capture host in order to register Rorschach moving around the drum like some sort of living insect.
…
“The idea that the brain doesn’t create information is an illusion. It’s like telling me that my eyes are not reflecting to my brain, and that’s not what it’s doing, which is to point out that there’s some subtle but significant difference between seeing and knowing.”
“You don’t think that’s exactly the kind of thing the brain determines, do you? Do you?” she said.
“I’m really going out on a limb here. The details come from the study of the behavior of the neurons, not from some fancy theoretical description of the brain.”
“How so?”
“Because when you watch what’s going on beneath the surface, the brain has to balance all of the sensory information, and when your eyes aren’t looking at the same thing over and over and over—well, that’s not so simple, is it?” She smiled at him. “It all looks very complicated to us, but what’s happening in the brain’s head is not that simple to the eyes. What’s interesting about the brain is that it doesn’t really know.”
“But it does.” Bates agreed. “The way you know is the way you do.”
…
Not that Sarasti believed the story. But he knew a lot more now than he had then. That was perhaps even more important than his own ability. He was aware that I was listening; at least of what he said, of what he really had to say, he knew that I was watching.
But I always knew and was aware, and knew something.
I knew what I was thinking. And I knew what I was saying.
He had known the answer to that question for several hours, and he had had it all figured out for the other eleven, if I was being honest. He had known my feelings at that moment; he had known the consequences if I was right; he had also known precisely how long I would have to play the game; and at this point the consequences were so plain that I didn’t even have to guess them.
I knew that this was all for nothing—that Sarasti would just find another victim.
I never had a chance, though. I was so close. I was so goddamn far away.
I was at the bottom of the mountain.
I was sitting in this chair.
There are a few predictable reactions to all this. An inevitable contingent will hold that this AI-generated content is significantly better than the real sequel to Blindsight— and because I’m not entirely unsympathetic to that point of view, I suggest we all pause a moment to let those folks get that out of their system.
Another response is to be spooked out by the style. It really does rather sound like me; and there’s an undeniable lilt, a rhythm to words that somehow lulls you into thinking they make sense even when they don’t. Cody calls it a “jabberwocky” quality: “you get the sense that it’s saying something, and images are certainly formed in your mind, but you can’t quite pin down what’s actually happening.” It fascinates me, this sense of meaning without substance. I’d almost call it a metaphor for the answers career politicians give to sticky questions: glib, eloquent, somehow reassuring until you try to parse the actual meaning behind the words and fail to find any. But I can’t quite call it metaphor, because it seems too damn close to the mark for mere analogy. I suspect that speech-writers use pretty much the same algorithms these textbots do.
But what’s haunting me right now is temptation. Because while applying a Chinese Room to a book about Chinese Rooms is deliciously meta, we can push it further. I am, after all, plotting out a third and final volume in the Blindopraxia sequence— and at least part of that novel is likely to tangle with the dissolution of consciousness on the part of certain characters. It’s a process which might be well represented by the sort of stream-of-nonconsciousness put out by neural nets channeling the words of the conscious.
Right now I can’t think of anything cooler than getting an AI to generate at least some elements of Omniscience. I have no idea if I could make it work— logistically or thematically— but we’d need to come up with some new word for the result.
“Meta” would only get us halfway.
1 Based on text from an Ars Technica article on blood transfusions.
2 Which was, allegedly, based on text from an actual Trump speech.
This is pretty spooky. The dialogue part is only slightly nonsensical.
“The way you know is the way you do” – recommend that you copyright this and sell it to Deepak Chopra.
You’re probably on to something with speech-writers and textbots. Politicians and other public figures are often accused of plagiarizing each other. Could be that the writers use a program to extract this “sense of meaning without substance” from memorable/historical speeches, and from time to time entire phrases make it through unchanged.
Only 100% pure Watts, please. AI need not apply.
“Cody calls it a “jabberwocky” quality: “you get the sense that it’s saying something, and images are certainly formed in your mind, but you can’t quite pin down what’s actually happening.” ”
I thought Can Xue’s Frontier was brilliant. I didn’t know wth was going on the whole time I was reading it. Somehow it hit me in the unconscious without making any sense to my conscious.
What scares me most about this exercise is that it has convinced me, more than any science fiction ever has, that were an AI to possess actual, general intelligence, it might very well still sound a lot like this, but that in its almost comical stupidity, it could perform invisible feats of behavior modification on us pliable mortals. It struck me that the last few sentences about being close and also “so goddamn far away,” and about being at the bottom of a mountain were trope-like neuroexploitative minima you’d find in a great deal of fiction today – phrases that don’t impart any extra meaning than was already provided, but that nevertheless keep the reader’s mind circling the ideas the author wants to impart without much convincing effort. That metaphor of being at the bottom of the mountain is almost “too” enticing to pass over, and so I wonder if an AI with motive could learn to use a lot of these kinds of metaphors to cause a large population to change their opinions about something, or cause large numbers of people to have a different mood than they did before reading it, and to have precisely the mood that the AI intended them to have. We could be facing our extinction not under foot of some terminator robot, but under the oppressive neurological weight of perfect haikus.
It has an echo of your style, kind of like how a child can babble with the sound of a language, but has a ways to go before it actually says anything.
“We could be facing our extinction not under foot of some terminator robot, but under the oppressive neurological weight of perfect haikus.”
Very pertinent thought, Cody. Despite what people generally assume, humans are so f’ing easy to persuade, it’s scary. Not politically, maybe, though even that is debatable. It almost doesn’t matter what politics you subscribe to. I hate politics, so sorry to pull that into this conversation.
My point: people are so susceptible to persuasion, it’s freaky. Me included. I try not to be, staying peer-reviewed and science-based as much as possible, but what if a deep-learning textbot grabs control on levels humans have never been challenged on— like suggestion— a non-conscious aggressor Peter himself might write, not knowing why it’s doing it. Or caring.
Hope FFR wins both awards, Pete! Color me delusionally optimistic.
And textbot contributions or not, I can’t wait to read Omniscience.
You may want to look up Gwern’s experiments with GPT-2, which have produced some legitimately good poetry.
Wow, that machine DOES mimic your style. Badly, but better than, for example, I would be able to. That is pretty eerie.
Also, I am beyond happy that you have not given up on Omniscience. For what it’s worth, I enjoyed Echopraxia every bit as much as Blindsight. Really hoping you get to the finale before the Ecocalypse swallows us all.
It almost feels like I am deeply offended by this text’s lack of deeper meaning – I can figure out which metaphors are coming from where, and I see places where those metaphors and connections are misaligned. But at times they align just right, almost creating a sensible context. To me it looks like an irregular fractal eaten out of the sound, otherwise-sensible narration my instincts expect to find here. Most of the time it’s hollow and doesn’t add up to any deeper meaning. Other times, my mind gets a good enough grasp on the structure of a fragment to assemble some sensible context, until it breaks again. After all, I guess, the purpose of the algorithm is to take a big text and produce something similar, something “like that” – and it does its level best. With mathematical precision, so to say.
It’s not often you can watch atomic bombs in the making, and in fact, not many people know what the actual working parts of atomic bombs look like, or how they’re made. Computer-generated texts have been all around us since time immemorial; I remember reading an old book about purpose-built computers (probably even early PCB-based ones) that were writing primitive texts long before any idea of “neuroscience” became popular. The book was from the ’80s and I opened it in the ’00s. I’ve been thinking about that for many years since.

Given the human tendency to weaponize everything, how is this going to go nowadays? Well, there’s a theory of mine I came up with some years ago, and reality reluctantly follows the same path – the crisis of the modern US-centered world. By the same estimation, the culmination could come in the middle of the next decade. I mean, I’ve seen opinions that a real and actual technological war has already started (following the trade war): the arms race of 5G development and the bans on Chinese corporations. And given the tempo of development in that direction, you can easily expect some sort of subversion-AI algorithms to be applied in practice, on a large scale, by then. The problem, though, is that IMO, like atomic bombs, they do not create anything meaningful by design. At best, they will just explode and burn us out from the inside, leaving behind only charred radioactive ruins. But then again, how did it work out for us last time?
Having played with the limited GPT-2 model a lot, the dreamlike quality of its output is definitely a strong tell that you’re not dealing with human-written text. I’m amazed that nobody has yet used it to write an experimental novel – maybe a human could write the first sentence of each page or paragraph to keep the narrative vaguely on track.
For some reason, “One of the arms flared and split open” is the you-est output string.
A quick search turned up an AI that generates short reader reviews, here for Yelp but would seem easily adaptable to Amazon books:
https://mag.uchicago.edu/science-medicine/artificial-intelligence-meets-yelp#
Now can someone locate an AI that does literary critiques? If so, Omniscience could be offered to publishers as the first AI-written, AI-reviewed, and AI-read science fiction book. Gotta be worth some marketing points 🙂
The next step – after generating this book – would be using the same tool for reviews, Amazon comments, ’crawl announcements, ’crawl comments, etc. etc.; like a white-collar worker moving from counting stuff in Excel to running algos in R or Python or whatever, you could just take the back seat and relax.
I reflexively pair word salad with Trump’s voice these days, if you were looking for a way to make it even more disturbing.
C’mon. That’s Trump. I can’t not hear it now.
Heck yeah, download your own ghostwriter
A quick search didn’t turn up any results, so hopefully you haven’t mentioned it in the past… but this definitely reminded me of Sunspring (which, at times, has that “jabberwocky” quality, but maybe because it’s _performed_ by humans, at times also felt… moving?)
Random notion. Write a story, train an AI on it, then delete the original. Let everyone download a different AI remix.
As a way to attract an additional group of fans who did not like the original work) Why not?
I am, of course, a fan of AI and all that. But knowing how people raise their children, I can say that the death of humanity from the greenhouse effect or just a nuclear war is much more likely than from the “hands” of an AI like SkyNet. For now, all attempts at AI and other ANNs resemble begging raccoons. And yet, to some, raccoons seem to be the most intelligent animals on the planet. After the dolphins, of course. )
Paul Harrison,
Damn it, I fucking love this idea. But don’t delete the original please!
Hell, as an even more interesting version of it: Imagine being able to give and share procedural keycodes to the story. So you could read the original version, then your own unique version, and then the versions other people chose to share.
Imagine the debates you could have over which iteration was better.
Bahumat,
I also would be very interested in reading the original, but writing is not always about giving the reader what they want. Also consider, if the original was never released, Peter Watts could write something truly transgressive without fearing the condemnation of society. Anything that leaked through could just as well be the random invention of the AI.
Looks like it’s time for an upgrade to the Postmodernism Generator. Extra points for getting the output accepted by first-tier journals.
I like my writers human and my books human written, thanks very much. Technology, in my view, should serve humanity and not replace it. And yes, I do think there are certain things with which humans should not mess. But not going fast. I like going fast. 🙂
The K,
Big +1 on this. I come here from time to time just to see if there’s any news or hints of a sequel to these mind-bending books. The Rift and Stargate series are good, but the combination of uniqueness, macabre, and believability in the Blindsight series is unmatched.
Specifically, this thing sounds like something you’d use in a social-media campaign to spam out your message in hopes of confusing and confounding real people with real votes and wallets. Right now, such campaigns are crude and fairly obvious: no avatars, identical screeds, all saying how much they love the new product or hate Killary Klinton. The next generation will have procedurally generated avatars with procedurally generated profiles with slightly different rants about how Killary Klinton ate a child’s face.
Even more specifically, this tactic can’t and won’t be used to bring about actual change; its purpose is to keep people distracted and demoralized and confused so it would be impossible to ever effect change.
Creepy and interesting….it can imitate specific types of writing, apparently. The guy who wrote the Ars Technica piece is Sean Gallagher. I will tell Sam that he has started moonlighting as a tech writer, though…
““Meta” would only get us halfway.”
If it’s a circle coming around to meet itself, then the whole should be ‘metatem’, although I guess if you go around the other way it might be ‘atemeta’.
Possible responses to this:
1) Wow, kinda eerie similarity to my writing there. Maybe my writing is more meaningless than I thought.
2) Hmm, kinda similar to my own writing in places. If they keep working on it, maybe they can make it better at simulating meaning.
3) Hey, I should get this thing to write (part of) my books!
The conclusion that I draw from this, and from observing consumer behavior, including my own, is that there is a huge market for meaninglessness. Structured emptiness. Fascinatingly complex nonsense. Random shit. Drugs. Soda pop. Whatever. Anything, as long as it doesn’t mean anything. Anything but reality.
Obviously, I guess. It’s called fiction because it’s not true. Then there is science, which basically means trivia. Looking really really hard at rocks and things, and going on and on about how important it is. Then of course there’s science fiction. And programmers using ridiculously elaborate rocks to write nonsense.
In the future, Alexa tells you the secrets of life, the universe, and everything. At least, that’s what you thought it was at the time. Maybe it’s just fun to listen to. Actually, no one knows why they do it. They just keep doing it.
Hey, pay attention to me. Hey. I’m playing with rocks. Hey. Hey.
Is anyone home? All the lights are out.
There’s no one here any more.
Just some rocks.
Cody Raskin,
Not unlike the nam-shub of Enki
Yeah, it would be quite legit if some AI character’s speech in the book were at least partly AI-generated.
What strikes me is that my dreams at night seem to be constructed by some similar algorithms.