Financial Times: Algorithms can easily become scapegoats for volatile markets
TSAI
In the sci-fi-filled 1970s, nothing excited me more than the potential for computers to do what humans do and improve the world. Today, they help keep airplanes aloft, help doctors treat patients, and help investors manage portfolios, all while taking some of the stress off humans.
Yet, for many, how computers and the algorithms that run on them work, and why it’s important to understand them, remains a mystery.
Algorithms sometimes get a bad rap. They are imagined as autonomous “black boxes” with no common sense or reason driving them. Their uncanny nature leads to an unfair double standard—humans enjoy a fundamental level of trust despite their flaws, while algorithms are seen as divorced from human-style reasoning and are considered inherently less trustworthy.
Because of this double standard, traditional investors tend to see a fundamental difference between a human decision to buy or sell a stock and one executed by a computer. When markets become volatile, the knee-jerk reaction is to blame "algorithms."
So let’s start with a definition: an algorithm is a set of rules, usually mathematical in nature, designed for a computer to follow. The instructions on shampoo, "wash, rinse, repeat," are an example of an algorithm (albeit a badly designed one: followed literally, it would keep a computer washing your hair forever).
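The shampoo example can be sketched in a few lines of Python (the function names are of course hypothetical). Taken literally, "wash, rinse, repeat" has no stopping rule; a well-designed version adds one:

```python
calls = []  # record each step so we can observe the algorithm's behavior

def wash():
    calls.append("wash")

def rinse():
    calls.append("rinse")

# Taken literally, "wash, rinse, repeat" would be:
#   while True: wash(); rinse()   # never terminates
# A well-designed version adds an explicit termination condition:
def shampoo(times=2):
    for _ in range(times):
        wash()
        rinse()

shampoo()
print(calls)  # → ['wash', 'rinse', 'wash', 'rinse']
```

The only difference between the broken algorithm and the working one is a clearly stated stopping condition, which is precisely the kind of rule an algorithm exists to make explicit.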
Algorithms are far more complex than shampooing your hair, but fundamentally, they are just instructions. In many cases, humans can and often do perform the same tasks, such as identifying undervalued stocks, just more slowly and at greater expense. Like human investors, algorithms do this systematically, so that winning investments will hopefully outweigh losing ones. Of course, there are some important differences between humans and algorithms.
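To make "just instructions" concrete, here is a toy illustration of a systematic screening rule, not investment advice and not any firm's actual method: it flags stocks whose price-to-earnings ratio falls below a threshold. The tickers and figures are made up.

```python
def screen_undervalued(pe_ratios, max_pe=12.0):
    """Return tickers whose price-to-earnings ratio is below max_pe.

    A deliberately simple, fully explicit rule: the same screen a
    human analyst might run by hand, written down as instructions.
    """
    return sorted(t for t, pe in pe_ratios.items() if pe < max_pe)

# Hypothetical quotes, invented for illustration only.
quotes = {"AAA": 9.5, "BBB": 18.2, "CCC": 11.0}
print(screen_undervalued(quotes))  # → ['AAA', 'CCC']
```

Because the rule is written down, it is applied identically to every stock, every time, which is what "systematically" means in practice.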
Among the more obvious ones are that people get tired and distracted. Emotions and biases come into play, too, and even human motivations aren’t always constructive, especially in high-stress situations.
At the same time, algorithms carry out their instructions without bias or feeling; without the excitement that comes with overconfidence or the fear that accompanies loss aversion. Stress never affects their performance, and they never act on instinct.
This is not to say that algorithms don’t sometimes perform poorly. Especially in very complex applications, such as machine learning, it’s challenging to explain exactly how an algorithm reached its conclusions. Importantly, humans have the same shortcomings, as behavioral psychology research by Daniel Kahneman, Amos Tversky, and others has shown.
The main difference is that well-designed algorithms typically undergo constant iterative testing and improvement. Compare a human driver to the software that controls a self-driving car. A human must pass a driving test, but that hardly demonstrates his or her true ability to drive a car. Beyond this one measurement, it’s anyone’s guess what our new drivers will do on the road.
In contrast, the algorithms in self-driving cars capture a vast swathe of data for every mile driven, which engineers use to improve performance. This process of testing and validation is endless, and because of it, engineers have a higher degree of confidence in what their algorithms can and cannot do.
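The test-and-improve cycle described above can be shown in miniature. In this deliberately simplified sketch (all numbers are made up), an estimate is repeatedly evaluated against fresh data and then refined, so its measured error shrinks with each round:

```python
def evaluate(estimate, batch):
    # Mean absolute error of the current estimate on the new data.
    return sum(abs(x - estimate) for x in batch) / len(batch)

def update(estimate, batch, rate=0.5):
    # Nudge the estimate toward the mean of the new observations.
    mean = sum(batch) / len(batch)
    return estimate + rate * (mean - estimate)

estimate = 0.0
# Hypothetical batches of fresh measurements, clustered around 10.
batches = [[9.0, 11.0], [10.5, 9.5], [10.0, 10.0]]

errors = []
for batch in batches:
    errors.append(evaluate(estimate, batch))  # test on fresh data
    estimate = update(estimate, batch)        # refine with what was learned

print(errors)  # → [10.0, 5.0, 2.5] — error falls with every iteration
```

The loop never has to end: every new batch of data is another chance to measure performance and improve it, which is why engineers can state with growing confidence what the system can and cannot do.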
This approach—essentially the scientific method—is the most effective way to make progress on almost any difficult problem. Hypotheses, formulated and rigorously tested on the basis of solid evidence, are virtually the only guarantors of the integrity of decision making available to us. Furthermore, having a scientific mindset means that the process of inquiry never ends; scientists reach the best conclusions our data and capabilities allow, but our search for better answers is, by definition, never complete.
Whether in the realm of self-driving cars or investment management, the scientific method is key to the safe and effective use of algorithms. Far from being autonomous black boxes, algorithms in these and other fields are the result of painstaking research by experienced humans who harness vast amounts of data and powerful infrastructure to process it.
Designed from a scientific perspective, these algorithms represent a significant step forward from the days when critical decisions were made on bare-bones intuition, often supported by preliminary evidence at best. Algorithms are not as mysterious as they seem, nor do they deserve to be called bogeymen.
While both humans and algorithms can perform poorly, this should not obscure the inherent strengths of algorithms in certain situations. Instead, we should acknowledge the tremendous value they can deliver and embrace them where they are fit for purpose.