The Return of the Slow-Wave Trader

Excerpts from dinnertime conversation with a retired investment banker:

Angela Merkel emerges from a meeting with Donald Trump. “Yes,” she says in answer to a reporter’s question, “we had a provocative but productive discussion.”

She rolls her eyes.

We probably shouldn’t have forgotten about this as soon as we did. Blame hyperbolic discounting.

The market soars on hearing the good news. It soars because, after a brief hiatus in the wake of that scary Flash Crash of 2010, eighty percent of market trading is once again done by bots, and bots don’t get sarcasm yet. They only read the words they see online and take them at face value; they don’t understand that Merkel meant exactly the opposite of what she said.

We, however, do. Finally, thanks to Trump, traders who inhabit meat have a fighting chance against those who inhabit silicon. Until bots learn about sarcasm and deception, at least.

That window may be closing even as we speak. The machines are already picking up some of those nuances. Not the HFT bots with split-second memories; those are dumb fast algos, and always will be. But the deeper AIs, the nets with layered interneurons: they seem to be catching on. Trump’s tweets are already showing less of an impact on the markets than they did after his Merkel Meeting.

I asked my friend: are those tweets losing influence because the people writing the code said “Wow, Trump’s a fucking loon— we’d better go in and tweak those parameters”— or because the code itself is learning on its own? I mean, if I were a trading net, I’d use some Bayesian thing to constantly update the weight I attributed to any given source: the more often it paid off when compared to independent data, the more seriously I’d take it next time, and vice versa. Maybe the AIs are doing that?
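Something like this, maybe: a minimal sketch of the Beta-Bernoulli bookkeeping I have in mind. Every name and number here is invented; this is just the shape of the idea, not anything a real trading net is known to run.

# Treat each source as a coin whose bias is "probability its signals pay
# off". Keep a Beta posterior per source; scale new signals by its mean.

class SourceCredibility:
    def __init__(self):
        self.hits = 1.0    # Beta prior alpha: pseudo-count of confirmed signals
        self.misses = 1.0  # Beta prior beta: pseudo-count of refuted signals

    def weight(self):
        """Posterior mean of the source's hit rate; multiply its signals by this."""
        return self.hits / (self.hits + self.misses)

    def update(self, paid_off):
        """Once independent data confirms or refutes a signal, count it."""
        if paid_off:
            self.hits += 1
        else:
            self.misses += 1

# A source whose signals keep failing gets discounted automatically.
source = SourceCredibility()
for outcome in (False, False, True, False, False):
    source.update(outcome)
print(source.weight())  # 2/7, well below the agnostic 0.5 it started at

The prior pseudo-counts just mean a brand-new source starts at a neutral 0.5 instead of dividing by zero; the vice-versa half comes free, since sources that keep paying off drift toward a weight of one.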

My friend didn’t know.

I’m hoping one of you might.



This entry was posted on Wednesday, August 16th, 2017 at 6:57 am and is filed under AI/robotics, economics.
10 Comments
Grammar T. Troll
Guest
7 years ago

“eighty percent of market trading is once again is done by bots”

there’s an extra “is” in there, just a heads up

Sean
Guest
7 years ago

Given that Bayesian normalization of inputs was published almost ten years ago (https://www.ncbi.nlm.nih.gov/pubmed/19065804), I would hope they’d have something better…

Johan Larson
Guest
7 years ago

For neural networks there are standard techniques for adjusting the weights assigned to inputs based on the current error.

https://en.wikipedia.org/wiki/Backpropagation
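For a sense of the mechanics, here is a one-neuron sketch (toy numbers, no connection to any actual trading system): each input weight gets nudged against the gradient of the squared error.

def train(samples, learning_rate=0.1, epochs=200):
    weights = [0.0, 0.0]  # one weight per input
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            prediction = sum(w * x for w, x in zip(weights, inputs)) + bias
            error = prediction - target
            # d(0.5 * error**2)/dw_i = error * x_i, so step against it
            weights = [w - learning_rate * error * x
                       for w, x in zip(weights, inputs)]
            bias -= learning_rate * error
    return weights, bias

# Learn y = 2*x0 - x1 from four examples; weights approach [2, -1], bias 0.
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0),
        ([1.0, 1.0], 1.0), ([2.0, 1.0], 3.0)]
print(train(data))

Backpropagation proper is this same chain-rule step repeated layer by layer through the network.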

Deseret
Guest
7 years ago

Old code:

IF $Source="@TheOnion" IGNORE $Content

New code:

IF $Source="@TheOnion" OR $Source="@Rea1DonaldTrump" IGNORE $Content

Wait a minute…does search…

Thought for a moment this was from ’10. It’s from ’11.

The Onion

Deseret
Guest
7 years ago

Musk and others warning about autonomous headcheese in warfare.

Quartz Media

DA
Guest
7 years ago

Deseret:
Musk and others warning about autonomous headcheese in warfare.

Musk belongs to the “bad science fiction / Skynet scenario” wing of AI alarmism, so it’s hard to take him seriously. Reading about him getting into slap fights with Zuckerberg (the pro-AI-utopia, full-speed-ahead cheerleader camp) over the subject leaves me deeply skeptical that many of our so-called “tech leaders” have any more credible insight into this issue than the average science fiction movie fan.

As Asimov understood more than a half century ago, the greatest danger posed by the advent of AI is overly rapid social change and economic collapse, not a machine “transcending its programming”. We’ll be struggling with those effects long before any of the “self-improving rogue AI wipes out humanity” scenarios he likes to fantasize about become remotely plausible, if indeed they ever do.

Deseret
Guest
7 years ago

DA: As Asimov understood more than a half century ago, the greatest danger posed by the advent of AI is overly rapid social change and economic collapse, not a machine “transcending its programming”. We’ll be struggling with those effects long before any of the “self-improving rogue AI wipes out humanity” scenarios he likes to fantasize about become remotely plausible, if indeed they ever do.

Absolutely. The issue they bring up, though–without directly stating it/weaseling around it–is automated warfare, where it’s not so much the intelligence of the AI that’s the problem but the layer of responsibility removed from whoever put the thing in the wild. Same with, as Peter has noted, brain hacks that use live soldiers but circumvent their decision-making process and cause them to shoot before even thinking about it, or even being conscious of something to shoot at.

DA
Guest
7 years ago

Deseret: Absolutely. The issue they bring up, though–without directly stating it/weaseling around it–is automated warfare, where it’s not so much the intelligence of the AI that’s the problem but the layer of responsibility removed from whoever put the thing in the wild.

*Shrugs* I’m not the best person to have this discussion with–my hippy-dippy sensibilities require me to hem and haw over the construction of any machine designed to more efficiently eradicate human life–and yet I’m lazy. If there’s no compelling argument to be made for a likely way to avoid such a proliferation, then I don’t devote many cycles of computation to it. Ex: gun control in the US, where the practical question of whether such a culturally ingrained social blight *could* have a realistic path to eradication that doesn’t involve profound crisis completely obviates discussions of *should*. There’s no point in computing the latter if the former makes it moot. Therefore I concern myself with what can be achieved given the practical reality.

In this case, the automation of war machines *will* happen. Even if good-faith efforts convince the majority not to pursue it (unlikely), it only takes one actor embracing a paradigm shift to compel all to follow suit. (The first nuke compels the following 1000 nukes.)

Musk and Co. seem most concerned with the perceived fallibility of AI systems, as well as advancing alarmist ideas about empowering machine intelligence over humanity. But what about the fallibility of human systems? Human control didn’t stop widespread civilian casualties over the last decade, even in so-called “precision” drone strikes. If AI systems *only* ever reach human levels of target selection and “execution”, then they still potentially represent a net moral gain over human-led warfare, in that machines don’t rape, don’t dehumanize, don’t suffer from psychological trauma the same way that human troops do.

If one opposes military automation from the point of view that it makes machines increasingly vulnerable to manipulation by human actors over the next century or so, I think that’s a completely practical concern. If one opposes it from some sort of nebulous idea about removing responsibility from the human actors who deploy them, I have to ask–when has that ever been a thing?

B. Student, Ph.D., CFA
Guest

Dear Peter,
First, not apropos of the post, I wanted to share this with you (found on linkedin). No comment necessary: https://www.linkedin.com/feed/update/urn:li:activity:6303679256920616960
Actually, because the investment world is dominated by the behavioral bias of herding, and because of some bad experiences with overfitting to in-sample data in the first wave of so-called neural networks in the 1980s, Machine Learning and AI have been taboo in mainstream investment – the field is probably 10-25 years behind Google on average. There are a few hedge funds and “secret projects” engaged in modern AI practice.

Most behavior like the chart shown comes down to bugs in dumbass programs, unexpected interactions between dumbass programs (much like the textbook on Amazon that began selling for $14K), or people accidentally adding an extra zero to a trade so that it is alarmingly large, yet plausible.

FYI, for the first time ever AFAIK, there is an academic/industry conference (the Fall ’17 meeting of the 50-year-old Institute for Research in Quantitative Finance, being held in Vancouver in mid-October) with an afternoon session in which real investment practitioners running sophisticated AI/ML algorithms for investment (of the trade once-or-twice-a-day kind) will describe what they do. One actually does combine Bayesian Reinforcement Learning with Genetic Programming – for the latter methodology, see the 1992 book and its sequels by Koza et al. (books that cost money), or the free-to-download 1994 Ph.D. dissertation “Selection, Recombination, and the Genetic Construction of Computer Programs” by Tackett. The dinner speaker will be Michael Bowling of U Alberta, head of the DeepStack program (computers that beat humans at poker by “learning deception”, i.e. bluffing). You can find his talks online if you search for “AI goes all in.” The DeepStack article was published in Science in Feb or Mar.
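To make the GP half of that concrete, here is a toy sketch in the Koza style (mutation only; real GP adds crossover): evolve tiny expression-tree trading rules against synthetic data. Every feature name and the fitness test are invented for illustration and bear no resemblance to the practitioners’ actual systems.

import operator
import random

OPS = [operator.add, operator.sub, operator.mul]
FEATURES = ['momentum', 'volume', 'sentiment']  # hypothetical inputs

def random_tree(depth=3):
    """Grow a random rule: an (op, left, right) node, a feature, or a constant."""
    if depth == 0 or random.random() < 0.3:
        if random.random() < 0.7:
            return random.choice(FEATURES)
        return round(random.uniform(-1, 1), 2)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, snapshot):
    """Evaluate a rule against one snapshot of feature values."""
    if isinstance(tree, tuple):
        fn, left, right = tree
        return fn(evaluate(left, snapshot), evaluate(right, snapshot))
    if isinstance(tree, str):
        return snapshot[tree]
    return tree  # numeric constant

def fitness(tree, history):
    """Go long when the rule is positive, short otherwise; sum the returns."""
    return sum(r if evaluate(tree, snap) > 0 else -r for snap, r in history)

def mutate(tree):
    """Occasionally replace a subtree outright, otherwise recurse into children."""
    if random.random() < 0.1:
        return random_tree(depth=2)
    if isinstance(tree, tuple):
        fn, left, right = tree
        return (fn, mutate(left), mutate(right))
    return tree

# Synthetic history: (feature snapshot, next-period return) pairs of pure noise.
history = [({f: random.gauss(0, 1) for f in FEATURES}, random.gauss(0, 0.01))
           for _ in range(250)]

population = [random_tree() for _ in range(100)]
for _generation in range(20):
    population.sort(key=lambda t: fitness(t, history), reverse=True)
    elite = population[:20]  # truncation selection
    population = elite + [mutate(random.choice(elite)) for _ in range(80)]

population.sort(key=lambda t: fitness(t, history), reverse=True)
print('best in-sample fitness:', fitness(population[0], history))

Note that since the data here is pure noise, whatever “edge” the winner shows is exactly the in-sample overfitting failure mode from the 1980s mentioned above.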
Happy to answer questions or provide pointers if you are interested; my direct e-mail is hidden in the post.
– B

Deseret
Guest
7 years ago

Figures. Elites build AI bots to see if they can offload day trading and bots start speaking another language. Dagnab fereners!

Facebook’s Bob and Alice Shut Down Because FB Didn’t Know What They Were Saying