[Image: Portable computers]

In September last year, I visited the Computer History Museum in Mountain View, California. The museum is dedicated to preserving and presenting all aspects of the computer revolution, from its roots in the twentieth century to self-driving cars today. What is remarkable, walking past the more than 90,000 objects on display, is the profound change in technology over the last three decades. The mobile computing display, I thought, summarised this change best, tracing a line from the first laptop computers of the 1980s (see image above) to a modern-day iPhone. But what also became clear from the exhibitions was that those ‘in the know’ at the start of the revolution were right about the transformational impact of computers, but almost certainly wrong about the ways it would affect us.

We are now at the cusp of another revolution. Artificial intelligence, led by remarkable innovations in machine learning technology, is making rapid progress. It is already all around us. The image-recognition software of Facebook, the voice recognition of Apple’s Siri and, probably most ambitiously, the self-driving ability of Tesla’s electric cars all rely on machine learning. And computer scientists are finding more applications every day, from financial markets – Michael Jordaan recently announced a machine learning unit trust – to court judgements – a team of economists and computer scientists has shown that the quality of judicial decisions in New York can be significantly improved with machine learning technology. Ask any technology optimist, and they will tell you the next few years will see the release of new applications that we currently cannot even imagine.

But there is a paradox. Just as machine learning technology is taking off, a new NBER Working Paper by three economists affiliated with MIT and Chicago – Erik Brynjolfsson, Chad Syverson and Daniel Rock – shows something peculiar: a slowdown in labour productivity growth over the last decade. Across both the developed and developing world, growth in labour productivity, the amount of output per worker, is falling. One would expect rapid improvements in technology to boost total factor productivity, spurring investment and raising the ability of workers to build more stuff faster; instead, we observe slower growth, and in some countries even stagnation.

[Figure: Trend growth rates]

This has led some to be more pessimistic about the prospects of artificial intelligence, and of technological innovation more generally. Robert Gordon, in his ‘The Rise and Fall of American Growth’, argues that, despite an upward shift in productivity between 1995 and 2004, American productivity growth is in long-run decline. Other notable economists, including Nicholas Bloom and William Nordhaus, are somewhat pessimistic about the ability of long-run productivity growth to return to earlier levels. Even the Congressional Budget Office in the US has reduced its ten-year labour productivity growth forecast, from 1.8% to 1.5%. Over ten years, that seemingly small difference amounts to a loss of roughly $600 billion in output.
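To get a feel for where a figure of that size comes from, here is a rough back-of-the-envelope calculation. It is only a sketch: the starting GDP of roughly $19 trillion and the assumption that slower productivity growth feeds one-for-one into slower output growth are my own simplifications, not the CBO’s or the paper’s.

```python
# Back-of-the-envelope check of the cost of slower productivity growth
# (a sketch, not the CBO's own calculation). Assumes US GDP of roughly
# $19 trillion in 2017 and that labour productivity growth translates
# one-for-one into output growth.
gdp = 19.0e12      # assumed starting GDP, in US dollars
years = 10

high_growth_path = gdp * 1.018 ** years   # 1.8% annual growth (old forecast)
low_growth_path = gdp * 1.015 ** years    # 1.5% annual growth (new forecast)

shortfall = high_growth_path - low_growth_path
print(f"Output shortfall after {years} years: ${shortfall / 1e9:.0f} billion")
# Prints roughly $660 billion - the same order of magnitude as the
# $600 billion figure cited above.
```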

How is it possible, to paraphrase Robert Solow in 1987, that we see machine learning applications everywhere but in the productivity statistics? The simplest explanation, of course, is that our optimism is misplaced. Has Siri or Facebook’s image-recognition software really made us that much more productive? Some technologies never live up to the hype. Peter Thiel famously quipped: ‘We wanted flying cars, instead we got 140 characters’.

Brynjolfsson and his co-authors, though, make a compelling case for technological optimism, offering three reasons why ‘even a modest number of currently existing technologies could combine to substantially raise productivity growth and societal welfare’. One reason for the apparent paradox, the authors argue, is the mismeasurement of output and productivity. The slowdown in productivity over the last decade may simply be an illusion, as many new technologies – think of Google Maps’ accuracy in estimating our arrival time – involve no monetary cost. Even though these ‘free’ technologies significantly improve our living standards, they are not picked up by traditional estimates of GDP and productivity. A second reason is that the benefits of the AI revolution are concentrated, with little improvement in productivity for the median worker. Google (now Alphabet), Apple, and Facebook have seen their market shares increase rapidly in comparison with other large firms. And where AI has been adopted outside ICT, it has often been in zero-sum industries, such as finance or advertising. A third, and perhaps most likely, reason is that it takes considerable time to harness new technologies fully. This is especially true, the authors argue, ‘for those major new technologies that ultimately have an important effect on aggregate statistics and welfare’, also known as general purpose technologies (GPTs).

There are two reasons why it takes so long for GPTs to show up in the statistics. First, it takes time to build up the stock of the new technology needed to have an impact on the aggregate statistics. While mobile phones are everywhere, the applications that benefit from machine learning are still only a small part of our daily lives. Second, it takes time to identify the complementary technologies and to make those investments. ‘While the fundamental importance of the core invention and its potential for society might be clearly recognizable at the outset, the myriad necessary co-inventions, obstacles and adjustments needed along the way await discovery over time, and the required path may be lengthy and arduous. Never mistake a clear view for a short distance.’

As Brynjolfsson and friends argue, even if we do not see AI technology in the productivity statistics yet, it is too early to be pessimistic. The high valuations of AI companies suggest that investors believe there is real value in them, and the effects on living standards may well be even larger than the benefits that investors hope to capture.

Machine learning technology, in particular, will shape our lives in many ways. But much like those who looked towards the future in the early 1990s and wondered how computers might affect their lives, we have little idea of the applications and complementary innovations that will determine the Googles and Facebooks of the next decade. Let the Machine (Learning) Age begin!

An edited version of this article originally appeared in the 30 November 2017 edition of finweek.