(MoneyWatch) If you think there are hidden forces at work on the stock market, you are correct. It's not a conspiracy, though -- rather, those patterns are the result of computer algorithms that execute trades in milliseconds, causing market micro-surges and crashes that come and go faster than anyone can see.
From 2006 to 2011, these nano-markets spiked and collapsed more than 18,000 times, according to new research. Most interestingly, these "ultra-fast extreme events," or UEEs, as they are called, appear to have foreshadowed the financial crisis in 2008. Professor Neil Johnson is the University of Miami physicist who led the team that recently discovered this phenomenon. He says that for more than a year prior to the crash, these algorithms focused their activities on financial stocks.
The implications of this emerging, but unseen, electronic world also extend far beyond the world of finance, so we asked Johnson to explain it.
How did a physicist wind up examining the stock market?
Neil Johnson: I work in an area of physics that looks at collective behavior. We're all actually familiar with the idea of collective behavior: When you go out on Route 1 and you see a traffic jam, usually no one person caused it -- it's just the collective effect of people deciding, because of their complicated strategies, to converge on the same activity. You see this a lot in physical systems as well. There are huge transitions that can occur when all the individual particles suddenly decide to do something together without anyone directing them to do it.
Markets are an amazing area to study because the data is so precise and it's being collected all the time. It's not only being collected on the scale of days, hours, minutes and seconds -- it's down to fractions of a second, milliseconds. I became interested in what was going on down at the millisecond scale because we know that scale is quicker than any person can react to, and yet there are computer trading algorithms operating down there.
So what is happening at that ultra-fast scale?
A millisecond is much too quick for us to track, but it's actually quite slow for a machine. The computer algorithms that are used heavily for trading are designed to go in and look for certain types of opportunities, certain behaviors or patterns in the prices. And if prices are being quoted down on the millisecond scale, which they are, a millisecond is a long time for a machine. A machine can make many, many computations on that scale and decide on a strategy and jump in and do something.
Of course what happens is there is not just one machine, just like there's not one motorist on the road. You get mobs of these machines doing certain things at certain times and, just as a traffic jam can emerge out of thin air, so too can these huge events.
So they're reacting to each other? Like one program says, "I will sell at X price" and the other program responds to that?
Yes. In fact, they are attacking each other. You don't have to be smart enough to predict the market; you just have to be smart enough to predict what the other person predicting the market is going to try to do. Even without being able to predict the market, if an algorithm knows that there's another algorithm around that has a certain type of behavior, then it thinks it has locked into a pattern. All it needs to do is sit around and pounce on that particular algorithm.
So that's a lot of the activity that's going on: They're feeding off each other in a predatory sense. Just like when there's a whole bunch of rats in a cage. The food in that sense is the price. They feed off the patterns that they see. They all want a piece of the action at the same time.
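This predator-prey dynamic can be illustrated with a toy sketch. Everything here is hypothetical -- real trading algorithms are vastly more sophisticated -- but it shows the core idea Johnson describes: one algorithm doesn't model the market at all, it just watches another algorithm's order stream, and once it detects a repeating pattern it predicts the next move and pounces.

```python
def detect_cycle(actions, max_period=5):
    """Return the shortest period p such that the observed action stream
    repeats with period p, or None if no cycle is found.
    A toy pattern detector -- purely illustrative."""
    for p in range(1, max_period + 1):
        if len(actions) >= 2 * p and all(
            actions[-k] == actions[-k - p] for k in range(1, p + 1)
        ):
            return p
    return None

def predict_next(actions, max_period=5):
    """Predict the prey algorithm's next action from a detected cycle,
    so a predator could position itself ahead of it."""
    p = detect_cycle(actions, max_period)
    if p is None:
        return None
    return actions[-p]

# A prey algorithm that mechanically cycles buy, buy, sell...
history = ["buy", "buy", "sell"] * 3
print(predict_next(history))  # the predator anticipates the next "buy"
```

The point of the sketch is that the predator never predicts prices -- only the behavior of another machine, which is exactly the second-order game described above.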
What did you find when you examined the data recording these ultra-fast extreme events?
Everybody knows about the 2008 crash [during the financial crisis], and fortunately crashes like that don't occur very often -- only every few decades. People have started to hear about mini-flash crashes that happen on the scale of minutes, so we began to wonder what happens below one second, where these machines are operating.
To our surprise, we discovered that since 2006 there have been more than 18,000 of these crashes and spikes. Crashes that, if they happened in real time over the scale of months, would wipe out the markets. Almost 100 percent drops -- and rises -- in price.
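Counting events like these amounts to scanning tick data for price runs that are long enough, large enough, and fast enough. The sketch below is a minimal illustration, not the team's actual methodology: the `Tick` layout is invented, and the thresholds (at least ten consecutive same-direction ticks, a cumulative move of at least 0.8 percent, all within 1.5 seconds) are assumptions chosen to capture the spirit of an "ultra-fast extreme event."

```python
from dataclasses import dataclass

@dataclass
class Tick:
    t_ms: int     # timestamp in milliseconds (illustrative layout)
    price: float

def find_extreme_events(ticks, min_ticks=10, min_move=0.008, max_span_ms=1500):
    """Scan a tick sequence for sub-second 'crashes' and 'spikes':
    monotone same-direction runs that are long, large, and fast.
    All thresholds are illustrative assumptions."""
    events = []
    i, n = 0, len(ticks)
    while i < n - 1:
        # Direction of the run starting at tick i: +1 rising, -1 falling.
        direction = 1 if ticks[i + 1].price > ticks[i].price else -1
        j = i + 1
        # Extend the run while consecutive moves keep the same direction.
        while j < n - 1 and (ticks[j + 1].price - ticks[j].price) * direction > 0:
            j += 1
        run_len = j - i + 1
        move = abs(ticks[j].price - ticks[i].price) / ticks[i].price
        span_ms = ticks[j].t_ms - ticks[i].t_ms
        if run_len >= min_ticks and move >= min_move and span_ms <= max_span_ms:
            events.append(("spike" if direction > 0 else "crash", i, j))
        i = j
    return events
```

Feeding it a synthetic stream with twelve falling ticks inside 1.1 seconds, followed by a few quiet ones, flags exactly one "crash" -- far too fast for any human to notice, which is the scale the research is about.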
Why haven't we felt the impact of that in the real financial world? Why hasn't it affected things we can see?
We think it hasn't broken things, but here's the interesting thing: When we tracked these tiny little eruptions through the 2008 crash, we found that they were escalating, like fractures propagating. And the stocks in which they were propagating were the financial companies like JPMorgan, Goldman Sachs and Lehman Brothers. These were the stocks in which these fractures were starting to appear. It's almost like the machines were sensing a weakness in that stock and testing it out more and more. I really think it's more than a coincidence that there was this huge build-up of these events on the millisecond scale prior to the 2008 crash.
What caused these algorithms to behave in this way?
What they're probably picking up on is extremes in people wanting to instantaneously either sell or buy a batch of stock. They're moving so fast and looking at every transaction, so they pick up on those extremes. It's this ecology of all these diverse algorithms fighting it out when they sense there is weakness. This is also the first example of a purely machine-driven world that developed spontaneously.
It sounds almost like white blood cells finding out about a wound and swarming to it, without one thing directing them to do it.
You're absolutely right. Just as we know the white blood cells have no central controller, the algorithms have no central controller. So it's the spontaneous effect of them all being there at the same time that we end up fearing -- just as we know the immune system can create all sorts of medical problems by attacking the body in the wrong way. We don't even know if it's good or bad, because it's such a complicated situation to understand.
It's possible mobs of algorithms can cause disruptions to computer systems. If that's true, then someone's going to think of that as a way to attack an infrastructure. Rather than the classical idea of someone sitting at a terminal and just hacking into a password, why not send this swarm of algorithms that you don't even have to control? They will just do their jobs and in the process create these extreme behaviors that could be hard for a system to deal with.