AI, Data, and the Economy

September, 2023

The 19th-century English polymath Charles Babbage, widely regarded as the “father of the computer,” once remarked that “Errors using inadequate data are much less than those using no data at all.” He could barely have conceived of the amount of data the development of his ideas would go on to create.

Sir Arthur C Clarke, the British scientist, mathematician, and science fiction writer, once estimated that the entire life experience of a human being would amount to one petabyte of data. A single petabyte is equivalent to one quadrillion bytes (the byte being the basic building block of computing), or one billion megabytes, or one million gigabytes.

According to Statista, this year humanity will produce around 120 zettabytes of data. One zettabyte is one million petabytes or, in bytes, a 1 followed by 21 zeroes. We are awash with data. The real question is what to do with it and how to make sense of it.
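For readers who like to see the arithmetic spelled out, here is a minimal sketch, assuming decimal (SI) units throughout and treating Clarke’s lifetime figure as exactly one petabyte:

```python
# Back-of-the-envelope check of these scales, assuming decimal (SI) units:
# 1 petabyte (PB) = 10**15 bytes and 1 zettabyte (ZB) = 10**21 bytes.
PETABYTE = 10 ** 15
ZETTABYTE = 10 ** 21

# One zettabyte is a million petabytes.
print(ZETTABYTE // PETABYTE)  # -> 1000000

# Statista's 2023 estimate set against Clarke's one-petabyte "lifetime".
data_2023_bytes = 120 * ZETTABYTE
lifetime_bytes = 1 * PETABYTE
print(data_2023_bytes // lifetime_bytes)  # -> 120000000, i.e. 120 million lifetimes' worth of data
```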

As they say, follow the money!

To date, the primary use of this explosion of information has been advertising.

Targeted ads based on aggregations of all our data, our shopping and spending habits among them, have made Amazon, Facebook (Meta) and Google some of the most valuable companies in history. In that sense the internet is merely a glorified catalogue, with the proviso that it remembers what you bought and when. It is thus able to build a picture of you that can be startling in its depth and detail.

Consequently, “data” is regarded by many as the 21st Century “Oil” – black gold transmuted to coded gold.

The next phase of this leg of computing evolution is thought to be the development of artificial intelligence, or “AI.” To date, no computer, as far as we know, has convincingly passed the so-called “Turing Test,” and few people are convinced that the chatbots masquerading as humans really are human.

Nonetheless, the development of “large language models” (LLMs), with their capacity to handle inconceivable amounts of information and to forge previously unrealisable links between data points, suggests that the impact of this data revolution has barely begun.

However, for mortals lesser than Dr Manhattan, the only way we can make intelligent use of large datasets is by using statistical methods. But then, as the leading American thinker Homer Simpson put it, “People can come up with statistics to prove anything. Forty percent of all people know that.”

Despite the limitations, and in the spirit of Babbage, we make a great deal of use of publicly available data in our work. The direction of the economy, whether in the UK, the US, Asia, or China, is clearly important when we consider the implications for growth, inflation, industry, and the cost of capital.

Equally, we are living through an extraordinary period of economic change, not least the transition away from zero interest rates, reflected in the higher mortgage rates currently causing distress to young borrowers.

Thus, we depend on the astonishing work of the US Bureau of Economic Analysis, Eurostat and, amongst others, our own Office for National Statistics (ONS) for the data that allows us to shape a considered view of the outlook for investment, trade, business costs and personal consumption.

Yet, one of the paradoxes of the almost infinite amount of data available to statisticians is that almost all statistical releases are subject to revision. This reflects the fact that we cannot know what we don’t know: not all the data feeding into, for example, a GDP estimate arrives at the same time. So, we have a tension between the volume of data, the timing of data and the interpretation of data.

As you may know, the ONS came in for considerable criticism this week when it revised up its estimate of Britain’s recovery from the Covid pandemic. We were doing better than we all thought after all.

Yet, the ONS can hardly incorporate as-yet-unrealised data into its appraisals. In the real world, all economic data are revised frequently. What caught the ONS out was the scale of the revisions as the impact of Covid and the war in Ukraine became visible. Similar revisions to other economies’ data are all but inevitable.

The onset of AI is likely to refine, over time, this whole process.

It will be able to make connections between time, place, events, trends, and seasonality based on datasets that are unimaginably large. It won’t be able to know “history” in advance*, but it will have a database of historical data and the capacity to use it in ways that we simply cannot today.

The state of the economy and the conduct of monetary and fiscal policy have a direct bearing on the wealth of us all. Will the advent of AI make that policymaking easier and more effective? We’ll have to wait and see.

In the meantime, we would echo the former CEO of Netscape, Jim Barksdale: “If we have data, let’s look at the data. If all we have are opinions, let’s go with mine.”

*Isaac Asimov’s “Foundation” novels, which began as stories in the 1940s and are currently being serialised on one of the networks, are a literary/scientific view of how AI might preview and direct the course of human history.
