Terry Gilliam’s Air Canada
“What happens to the very concept of a war crime when every massacre can be defined as an industrial accident?”
—“Collateral”
“It’s not our mistake!”
—Sam Lowry, Brazil
Being who I am, I tend to portray my futures in the spirit of Orwell’s Nineteen Eighty-Four or Brunner’s The Sheep Look Up. Sometimes, though, reality turns out more like Gilliam’s Brazil: just as grim, but hysterically so.
Take my short story “Collateral”: a tale that (among other things) asks about culpability for decisions made by machines. Or “Malak”, about an autonomous drone with “no appreciation for the legal distinction between war crime and weapons malfunction, the relative culpability of carbon and silicon, the grudging acceptance of ethical architecture and the nonnegotiable insistence on Humans In Ultimate Control.” Both stories are military SF; both deal with realpolitik and the ethics of multi-digit kill counts in contexts ranging from conventional warfare to mass shootings. Serious stuff—because I R Serious Writer, and these R Serious Things.
And then the real world comes along and makes the whole issue utterly ridiculous.
If you’re Canadian, you’ll know about Air Canada. They’re our sole remaining major airline, after swallowing up the competition a few decades ago. You may have seen them in the news recently for such customer-friendly acts as forcing a disabled passenger to drag himself off the plane while the flight crew stood around watching, refusing to divert to an emergency landing while another passenger was inconveniently dying in Economy, refusing to board people named “Mohammad” because their names were “Mohammad”, and coming in dead last for on-time flights among all major North American airlines. It’s none of those noteworthy accomplishments I’m here to talk about today, though; rather, I’m here to remark upon their historic accomplishment in AI. In one fell swoop they’ve leapt over the fears of such luminaries as Geoffrey Hinton and David Chalmers, who opine that AI might become dangerously autonomous in the near future.
Air Canada has claimed, in court, that they’ve created a chatbot which is already an autonomous agent, and hence beyond corporate control.
It’s a bold claim, with case history to back it up. Back in 2022 one Jake Moffatt, planning a flight to attend his grandmother’s funeral, went online to inquire about bereavement fares. Air Canada’s chatbot helpfully informed him that he could apply for the bereavement discount following the trip; when he tried to do that, his claim was denied because bereavement rates couldn’t be claimed for completed travel. When Moffatt presented screenshots of Air Canada’s own chatbot saying the exact opposite, Air Canada politely told him to fuck off.
So Moffatt sued them.
The case played out in Small Claims Court, over such a trifling sum (less than a thousand dollars) that it must have cost the airline far more to defend their position than it would have to simply fork over the money they owed. But this wasn’t about money: this was apparently a matter of principle, and Air Canada puts principle above all. They made the case that they weren’t responsible for erroneous chatbot claims (what those in the know might call “hallucinations”) because—let me make sure I’ve got this right—
Ah yes. Because the chatbot was “a separate legal entity that is responsible for its own actions.”
Apparently the Singularity happened, and Air Canada’s attorneys were the only ones to notice.
The judge, visionless Luddite that he was, didn’t buy it for a second. No word yet on whether Air Canada will appeal. But it seems strangely, stupidly appropriate that the momentous and historic claim of AI autonomy (dare I say sapience?) emerged not from some silicon Cambridge Declaration, not from any UN tribunal on autonomous military drones, but from petty corporate bean-counters trying to shaft some grieving soul for $812 Canadian.
When it comes to cool, bleeding-edge tech, William Gibson once observed that “The street finds its own uses for things”. What he forgot to mention, apparently, is that at least one of those uses is “being a dick”.
Good to see that Air Canada continues to live up to its motto – “If you want to get somewhere in the worst possible way…”
Or, “We’re not happy until you’re unhappy.”
Too late to edit, but that should read:
“We’re not happy until you’re not happy.”
I’ve heard it both ways.
Excellent Psych reference intended or not. Sorry, I’m done now.
Next up, if your phone’s autocomplete/autocorrect/autostupidity feature writes something objectionable… You can take _it_ to court 😀
I mean, I get it, you’re a lawyer, you _must_ use any and all arguments no matter how absurd… but there has to be _some_ limit. Some basis in reason. Right?
…
Right? I am pressing tab why is it not tabbing out help!
Apparently, if they put a disclaimer saying “The information from our customer helpline chatbot may be incorrect.”, they’ll be covered in the future.♂️
Apparently, this website converts a “shrug” emoji to a male symbol. *shrug*
That’s nothing. WordPress also adds “https://www.rifters.com/crawl/wp-admin/” to all my links so none of them fucking work. I just had to go back and edit the code in a text processor so people wouldn’t keep getting 404s.
To be honest, shackling Wintermute so you can use it to dodge refunds is exactly the sort of myopic nonsense I’ve come to expect from large corporations. Much more realistic than trying to live forever.
Re: “being a dick” might I remind you of one Peter Riviera from “Neuromancer”? The guy with a holographic projector he uses to act like a complete douchebag? Guy was basically a troll 40 years before the term gained its current meaning (and 60 years before it becomes a capital offence; I hope).
Gibson got a lot of stuff wrong but what he got right, he got right.
Reminds me of that car dealership using ChatGPT, where a user was able to convince it to sell a car for $1. Apparently not legally binding, but I wonder: if it’s truly AGI, then it’s a “no takesies backsies” kind of deal.
Oh, I believe this “legal” discussion has been pretty hot for years now, especially with self-driving cars. If the robot is driving the car and a human is at the wheel, who is responsible for accidents – the driver? The company? Nobody at all? Maybe we should detain the immaterial soul of the car and hold it accountable?
The industry is developing quickly, and amazing new tools come out every month, but they are still tools, and very imperfect ones at that. These tools are supposed to get much better, maybe outperform people at certain tasks and reduce human error (without reducing humanity), but we’re not even remotely there.
But here’s the scary thing: human souls aren’t perfect at all, and there’s (allegedly) no quality-control department that can even attempt to run checks on that. The conversation instantly turns political, and loses all appeal to progress altogether.
What if the most pressing concern in the current economics of human capital is not that robots will somehow catch up to humans, but rather a lot of humans themselves discovering that their bullshit jobs and bullshit education have made them useless and redundant? What a catastrophe that would be!
“but rather a lot of humans themselves discovering that their bullshit jobs and bullshit education have made them useless and redundant.”
Pretty sure we caught onto that a couple of centuries ago. As long as we keep getting paid for those bullshit jobs, most of us will be fine with the idea.
Exactly. Not everyone wants to derive the meaning of life from their work. I, for example, like my work, but I work because it pays enough to live.
If I won the lottery I wouldn’t work, but dedicate myself to something else entirely: hobbies, my cats, charity work for cats, and so on.
Now the more interesting problem for me is: When (not if) we have outsourced pretty much all paid work to machines, with a small overseer caste of specialists, who exactly will buy all the nifty stuff the factories churn out? How long will capitalism last without a consumer class?
That’s one of the big contradictions of terminal-stage capitalism. Techbros, “free market absolutists”, and their nutswingers don’t really seem to be able to wrap their minds around the concepts of “money” and “wealth”, save for the foggy notion that “we want more” (or “we’re envious of those who have more”, in the nutswingers’ case).
Also known as the “Galt’s Gulch Gibberish” theory.
Abundance is not a problem at all; see “The Midas Plague”. Scarcity of the stuff you need (as in the Soviet Union, etc.), that is a real problem.
And that’s the most optimistic scenario of near future!
How dare you, sir! This is our national airline you’re talking about. In its defense:
Actually, never mind. It’s a bad airline run by idiots. That they went to court over information given by their own website – sorry, by their sentient AI – is just fucking unbelievable. I’d love a transcript of legal’s discussion leading to that decision, especially the part where they drill down into how, while on the one hand the bereavement angle was certain to make this national news, on the other hand they could save a six-billion-dollar company $812.
In their defense, you can fire a human, but they’re stuck with the tech, and if it really is sentient, maybe they’re rightly afraid.
My guess is that the legal team asked the chatbot whether they should settle or take it to court.
Ah, that’s probably what happened. It would explain the decision, and be true to form.
“In their defense, you can fire a human, but they’re stuck with the tech”
To mangle a paraphrase from Ian McDonald, “machines can’t be punished, only people can”. He writes quite a bit about the implications and (ab)uses of fully sentient AI.
“What he forgot to mention, apparently, is that at least one of those uses is “being a dick”.”
Well, it could be argued that he put out a couple of novels that were essentially about using tech to be a dick.
Talking about “Malak” – does it actually add much to “Watchbird”?
It’s quite an argument for the company to be making: This chatbot is a separate legal entity responsible for its own actions, and we’ve intentionally given it access to our systems, which it then used to commit fraud, which makes us an accessory.
Arrest that AI! And lock him away!
To quote John Scalzi, “Fuck-Fuck-Fuck-Fuckity-Fuck!”
I wish I’d known about this before it went to court. To wit, a plan:
NB, security researchers do stuff like this all the time to (for example) get ‘safe’ chatbots to describe how to 3D-print a handgun.
GPT-3, GPT-4, Claude, Gemini – and all other chatbots run by grownups with money at stake – have big fat disclaimers at the bottom of every input prompt saying “answers may be wrong.”
The companies that run these GenAI/LLM chatbots go out of their way to make it clear that their bot is not a mouthpiece for company policy – for exactly the reason illustrated above.
But, hey, some dicks at Air Canada think they know more about GenAI and LLMs than OpenAI, Google, Meta, and Anthropic do. Heh.
The real tragedy here is a missed opportunity to profit from the punishment and suffering of the dicks at Air Canada by exploiting their cruelty/stupidity in the most ironic way possible.
A_lexx
Dear Peter, thank you for your creativity. On the question of war: Putin’s elections are coming, mobilization is coming; I will remain a man if I refuse. Brazil and Terry Gilliam – thanks.