Can AI help us to understand the way that economic expectations drive the financial sector’s response to external shocks?

The way we understand economics and its influence on financial markets is largely driven by expectations for the future. Financial markets, and the commentary written about them, naturally look to the future to understand what might happen next, even though we only have information about what has already happened. The concept of ‘expectations’ is shorthand for the way we create a story with the data we do know, influenced by ‘sentiment’ about economic outcomes today and in the future.

Concepts like expectations and sentiment are difficult to measure in real time and often rely on infrequent surveys. This infrequency and reliance on small samples make it difficult to track how expectations are changing, except indirectly through their impact on financial data.

We have interviewed researchers at the University of Buckingham, University College London, and Princeton University who are training an AI model with input from finance sector experts. They expect the model to evaluate sentiment from a richer and higher-frequency universe of articles, and to show whether AI can deduce expectations more clearly than the methods used today.

This is an important project for understanding how financial sector players create and update their perceptions of risk, which influences how external shocks to the economy and financial system are transmitted through the industry. The growing impacts of climate change will represent a continuous series of ‘shocks’ to the physical environment and the economy. A better understanding of the way exogenous shocks are transmitted through the financial sector will be important for safeguarding against future crises.

The project discussed in the interview below also provides an opportunity for people in the financial sector whose experience includes assessing financial risks. We have provided a form if you would like more information about participating. Participation should take about an hour. Participants will receive information about the project and its results, and will have an opportunity to use the AI model being trained after the project concludes.

Interview with Dr. Ali Kabiri and Brandon Davies

Blake Goud (RFI): Could you give an introduction to your background and the project that you're working on with AI and finance?

Brandon Davies: My name is Brandon Davies. I'm here because I'm involved with this AI project through my work teaching master's courses at the University of Buckingham. My background, however, is not as an academic. I am an economist and my background is in the financial industry, including working on structured products at Barclays Capital and then as Treasurer of the retail operations of Barclays Group. I was on the management board of the bank before retiring. In retirement, I've worked for central banks setting up training programs for central bankers and bankers around the world, particularly for Indonesia and China.

Dr. Ali Kabiri: I am Professor Ali Kabiri at the University of Buckingham and I also work at University College London (UCL) in the Centre for the Study of Decision-Making Uncertainty, a research group led by Professor David Tuckett. As Brandon mentioned, at Buckingham we do a lot of work together on banking risk and this applied project focuses on understanding risk. In my research, I'm very interested in the interconnections between the financial system and the economy in general and one of those channels relates to risk, which often considers perceptions of what will happen in the future.

Blake Goud: Can you give an overview of what the project is? How are you using AI and what conclusions are you trying to reach in the project?

Dr. Ali Kabiri: The project is to create an artificial mind from the inputs of human labellers evaluating text data, which are then used to train or ‘fine-tune’ a language model. What we're aiming to do in this project is to get financial professionals, especially people who have high-level domain expertise evaluating financial risk, to train these artificial minds. These language models can then be applied to various sources of text data to see if we can essentially divine the future path of the markets and economy from their perceptions.
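To make the training step Dr. Kabiri describes more concrete, here is a minimal sketch, assuming a standard Hugging Face fine-tuning workflow, of how expert labels attached to financial text could be used to fine-tune a pre-trained language model as a risk-scoring regressor. The model name, the tiny in-line dataset, and the rescaled label values are illustrative assumptions, not the project's actual setup.

```python
# Minimal, hypothetical sketch: fine-tuning a small pre-trained model on
# expert-labelled financial text. Model choice, labels, and hyperparameters
# are illustrative only, not the project's actual pipeline.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Expert-labelled snippets: each label is an illustrative risk score,
# e.g. a -5..+5 expert rating rescaled to [-1, 1].
examples = [
    {"text": "Oil prices surge after an unexpected supply disruption", "label": -0.6},
    {"text": "Central bank signals a slower pace of rate rises", "label": 0.4},
]
dataset = Dataset.from_list(examples)

model_name = "distilbert-base-uncased"  # stand-in for whatever base model is used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=1, problem_type="regression")  # single regression head

def tokenize(batch):
    # Convert raw text into token IDs the model can consume
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="risk-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()  # the fine-tuned model can then score unseen articles
```

The interesting question the project raises, as the interview goes on to discuss, is whether a relatively small number of high-quality expert labels like these can stand in for the much larger corpora a general-purpose model would otherwise need.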

Brandon Davies: When I first entered the dealing room, all the data we had was analogue. It was shown on TV screens, but you couldn't capture the data unless you could remember it, because it just flashed across the screen. A lot of the work I did was digitizing the dealing room at what became Barclays Capital and then developing models based on that historical data. However, one of the things I've always taught students is that the only data you want in your models is future data. But you can't have it because it doesn't exist.

Words and phrases give us a whole other dimension to apply to the data that we do have, because when we write about finance, we usually write about the future, not the past. A lot of what appears in published articles reports the data we do have and looks to its future implications. Having the ability to look at the information reflected in the phrases used in these articles, and to determine what they tell us about people's expectations, will be very valuable, because expectations are extremely important in how we think about economics from a practical perspective.

The economist John Maynard Keynes talked about the way that expectations are set, and how expectations can shape people’s attitudes, which then go on to affect what people do and the outcomes that result. From an academic perspective, I've always found this concept interesting and I've always thought it had practical implications as well, and AI is a tool that might help.

There are two big practical issues that people face in the industry in this case. One is that AI models cost an awful lot of money to run because they rely on so much data. If there is a way to reduce the data needed to train the model, that may have a significant effect on the expense of running the models and on their utility.

We have done some very preliminary work on this, and the results so far have been quite dramatic: training a model with experts lowers the runtime to about one percent of what is required if you purely use a large language model left to its own devices. We have primarily tackled this issue so far, but the next phase of the work could help us better understand the logic by which models reach the results they give.

If we can do that, then we may well be able to significantly improve the usefulness and applicability of these tools. I've put enough of these types of models in front of regulators to know that they won't usually let you use a model if you can't explain how it reached its conclusions. That is a weakness in a lot of AI at the moment that we may well be able to improve on.

Blake Goud: When you're talking about explainability and expectations, is this something that can improve our understanding of how exogenous shocks are transmitted through the financial sector?

Dr. Ali Kabiri: I think ultimately that is one of the aims: to see those dimensions. How do professionals with domain knowledge react and respond to those kinds of stimuli? For example, when there is a monetary policy shock, how does the perception of risk change, and similarly, how might endogenous changes in risk perceptions lead to changes in the financial markets? In this case, the particular area we're looking at, as Brandon said, is textual information and how it is interpreted, and that has application across all types of shocks.

And I think we're really just at the beginning of understanding how people respond; really understanding their psychology. We have the benefit of working with experts in language and psychology to understand the human side of the responses, but also of using computer science and language models to create artificial minds that behave like financial professionals.

It's not an area that has really been understood well and so that's what makes it exciting. We now have the technical ability to look much deeper into human behaviour in an economic context and that applies across all sorts of domains, including various different types of exogenous shocks.

Brandon Davies: One of the things we're doing with the model is very much looking at the words that people use, and of course, they're very contextual. Because of that, if you can limit the model to looking only at finance publications, then you may be able to significantly cut down on the amount of data you're using, because the words will always be in a particular context. An example I give to students is that ‘long’ and ‘short’ have a particular meaning in finance that they certainly don't have in general usage. You don't want the model to go off chasing the general usage. You want to restrict yourself to the specific definition that is relevant to finance.
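As a simple illustration of the corpus restriction Davies describes, the sketch below filters articles down to finance publications before they would be used for training, so that terms like ‘long’ and ‘short’ are only ever seen in their financial sense. The source names and record fields are hypothetical.

```python
# Hypothetical sketch: keep only articles from finance-domain publications
# so that domain-specific vocabulary always appears in a financial context.
FINANCE_SOURCES = {"Financial Times", "Bloomberg", "Reuters Markets"}  # illustrative list

articles = [
    {"source": "Financial Times", "text": "Funds went long on gilts ahead of the decision."},
    {"source": "General News Daily", "text": "The long road to the summit took three days."},
]

# Drop anything outside the finance domain before training or scoring.
finance_corpus = [a for a in articles if a["source"] in FINANCE_SOURCES]
print(len(finance_corpus))  # -> 1: only the financial use of "long" is kept
```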

Blake Goud: You mentioned several of the challenges about taking these models and applying them in the context of the finance sector. How does the project indicate a way forward in terms of addressing things like data privacy, cybersecurity, and bias in models?

Brandon Davies: Whether you want bias in a model or not depends a lot on what you're modelling. Bias in AI models has two main types: data bias and societal bias. Data bias refers to bias embedded in the data used to train the AI models.

A good example of data bias was the case of facial recognition software having larger error rates for people from ethnic minorities, particularly women from those groups. The reason for this was that the input data were skewed towards people who were not from ethnic minorities. As the model tries to optimize prediction accuracy, the over-representation of the majority group in the training set leads to these errors.

These are the types of errors you don't want. Societal biases are where legacy ideas or norms from a society cause blind spots. This was seen in the case of a recruitment algorithm developed by Amazon, where female applicants were scored negatively because the algorithm was trained on resumes submitted to the company over a 10-year period that reflected the male dominance of the industry.

There could be some advantage for our study, in the financial sense, in discovering biases in how people react to financial news, as these may help explain the economy and financial markets. In some ways, when we ask people for an immediate reaction to something, it's bound to contain some of their biases, which provide explainability for what we see in reality. A lot of these biases are captured by the sayings used in dealing rooms, like ‘he who panics first panics best’ or ‘don't fight the Fed’.

These are biases, or sources of bias, but nonetheless we would want them, because if decision makers are using them, they provide a frame of reference in their minds, which is exactly what we want to capture.

Blake Goud: So is the focus on explainability driven by the challenge you mentioned with regulators? Are you developing this project as a way to explain why particular results are coming out of the AI, and to identify which biases are influencing the outputs, rather than just trying to train the model to edit them out over time?

Dr. Ali Kabiri: That's a really important point, and it is a fairly recent development in the project because of the existence of a particular type of text-to-text ‘chain of thought’ model. The technology now exists, if you can get the right domain experts, to have them explain the chain of reasoning behind an outcome from the AI model.

If you can train it in that way, which is more extensive, you could then ask the AI to tell you why it is making a particular decision. Understanding the thought processes going through people's minds, or the artificial mind's, before they give you that output would be a very exciting part of it.

Brandon Davies: The focus on explainability won't be for the initial model; it is something we are considering for the future development of the model. To start, we are asking people to help train the model with two types of reactions to a series of specific shocks. One reaction that participants would provide is an evaluation of the shock itself, such as its impact on a particular company or market. The second reaction would catalogue the expected macroeconomic implications of the shock.

A participant would help us train the model by scoring in two ways on a -5 to +5 scale: first, would you, in a sense, buy or sell based on the shock, and second, would you consider it to have a ‘risk on’ or ‘risk off’ impact? To give you an example of why we do it that way, say a big oil company is the subject of a question and oil prices are going up: this might be very good for the oil company but perhaps very bad for the economy. There's a need to look at both the micro risk and the macro risk.
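To illustrate the labelling scheme Davies outlines, here is a minimal sketch of how each expert response might be recorded: one micro ‘buy/sell’ score and one macro ‘risk on/risk off’ score, both on the -5 to +5 scale. The field names and the example values are hypothetical, not the project's actual schema.

```python
# Hypothetical record format for the two-score labelling task described above.
from dataclasses import dataclass

@dataclass
class ShockLabel:
    headline: str     # the news item / shock shown to the participant
    micro_score: int  # -5 (strong sell) .. +5 (strong buy) for the company or market
    macro_score: int  # -5 (strong risk-off) .. +5 (strong risk-on) for the wider economy

    def __post_init__(self):
        # Enforce the -5..+5 scale on both axes
        for name in ("micro_score", "macro_score"):
            value = getattr(self, name)
            if not -5 <= value <= 5:
                raise ValueError(f"{name} must be between -5 and +5, got {value}")

# The oil-price example from the interview: potentially good for the oil
# company itself (buy) but bad for the wider economy (risk-off).
label = ShockLabel(
    headline="Oil prices jump after a supply disruption",
    micro_score=3,
    macro_score=-2,
)
print(label)
```

Keeping the micro and macro judgements separate is the point of the design: the same shock can be scored positively for a specific company while being scored negatively for the economy as a whole.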

Blake Goud: For readers in the finance sector with experience in assessing financial risk who are interested in AI, are there opportunities for them to help with the project? How would they benefit from participating and from learning more about AI models and how they work?

Dr. Ali Kabiri: One of the main aims, or one of the main benefits, for participants is engaging on a professional level with these types of language models and AI models, to appreciate how they are used and will be used in the future. The use of AI is expanding rapidly and its capabilities are, without exaggerating, growing exponentially.

There has been a continuous upside surprise. In that sense, it's clearly something that will be important in the future, and being able to see how these models are developed, I think, is a great learning experience for those who participate. The model that we generate will also be available to them. This is going to be done on an open platform available to the people who are involved, and they may develop it further or have ideas for its development.

Brandon Davies: If I go back to my days working on the analogue-to-digital transition, one of the things that scared everybody at the time was a lot of people thought that dealing rooms would be much smaller and run by a relatively few quants after the transition to digital. When I started in the dealing room, it had about 100 people and when I left, it had grown to about 300.

We were dealing in markets that did not exist when we operated only with analogue screens. The products did not exist. The markets exploded on the back of the transition to digital screens and it wouldn't surprise me to see something similar happen again with the introduction of AI.

The number of people in the dealing room had grown, but the roles they filled had changed quite a lot in terms of the skill base they brought to the job. People who came to terms with the introduction of digital data and understood digital markets grew enormously in their careers and those who didn't, didn't.

I'm quite bullish about the future for people who pursue the new opportunities that AI will bring. I don't think it's the end of dealers or investment managers or anything else. I think it's just a whole new way of working.

Blake Goud: Yeah, we've seen similar things, like the introduction of the ATM not putting bank staff out of work; it just changed what they do day-to-day. Do you have any final thoughts before we wrap up our discussion?

Dr. Ali Kabiri: I think there's a really good point that Brandon is making that these kinds of technologies are likely to expand markets rather than replace jobs. There are vast and untapped markets across the world where the barriers are often frictions that have historically limited expansion. And we are at the stage with this technology where many of those frictions are going to disappear very quickly and these technologies will empower and expand finance.

Brandon Davies: Yeah, I couldn't agree more. For people interested in participating in this, you'll get to learn a lot about one of the technologies at the forefront of these trends, and it's only going to take about an hour of your time. If you participate, we will explain the project in more detail as well as the results and what the model is producing. And as Ali says, we will probably give participants access to the model itself so they can try it for themselves.
