Trading Tomorrow - Navigating Trends in Capital Markets

Navigating AI and Financial Markets with Alvaro Cartea

Numerix Season 3 Episode 30

In this episode, host Jim Jockle sits down with Alvaro Cartea, Director of the Oxford-Man Institute of Quantitative Finance and Professor of Mathematical Finance at Oxford University. Together, they explore the transformative power of AI in financial markets and delve into how deep learning and reinforcement learning are reshaping trading strategies. Alvaro explains how these technologies uncover patterns humans can miss and how they’re personalizing trading models to fit unique market views. He raises crucial questions about the unintended consequences of autonomous algorithms, like the risk of AI-driven market collusion, and discusses what this means for future regulation and oversight. Tune in for a deep dive into the future of finance!

Jim Jockle:

Welcome to Trading Tomorrow: Navigating Trends in Capital Markets, the podcast where we deep dive into technologies reshaping the world of capital markets. I'm your host, Jim Jockle, a veteran of the finance industry with a passion for the complexities of financial technologies and market trends. In each episode, we'll explore the cutting-edge trends, tools and strategies driving today's financial landscapes and paving the way for the future. With the finance industry at a pivotal point, influenced by groundbreaking innovations, it's more crucial than ever to understand how these technological advancements interact with market dynamics. To help us navigate this topic, we're joined by Alvaro Cartea, Director of the Oxford-Man Institute of Quantitative Finance and Professor of Mathematical Finance at Oxford University.

Jim Jockle:

Alvaro works at the intersection of financial economics, mathematics and data science. His research spans several fields, including algorithmic and high-frequency trading, market microstructure, mathematical finance, asset pricing, commodities markets and financial regulation. With decades of experience shaping the financial landscape through cutting-edge quantitative models, Alvaro has been at the forefront of innovations transforming how markets operate. In today's episode, we'll dive deep into one of the most significant technological shifts facing the financial industry today. From algorithmic trading to risk management, AI reshapes how firms make decisions, manage portfolios and maintain a competitive advantage. We'll explore the implications of this technology for quants, data scientists and financial professionals alike. Well, first and foremost, I want to thank you so much for joining me today.

Alvaro Cartea:

Well, I'm delighted to be here and thank you for having me.

Jim Jockle:

So let's jump right in. How has AI recently changed the landscape of quantitative finance?

Alvaro Cartea:

Okay, so AI has been around for decades, right? So it's not a new thing. People tend to think this is something new, but what has actually changed is that computer power, access to data and our ability to process that data have completely changed the way we operate in markets. So the question now is whether you have the money to buy the machines, to buy the data, and then to process and act on all of that information.

Jim Jockle:

So what specific AI technologies or methodologies are having the most significant impact?

Alvaro Cartea:

Specifically around quant strategies? Well, I'm an academic, but I talk to a lot of people in the industry, and for sure I think deep learning and reinforcement learning. So let me say a couple of things about that.

Alvaro Cartea:

So deep learning. One thing AI tries to do is mimic how the brain works, how the brain makes decisions. So you use artificial neural networks, layers that you train, and then they do things for you, like making decisions. When you have many of these layers, that's what we call deep learning. That's pretty much everywhere, I would say. And then reinforcement learning is perhaps the trickiest and most important one, depending on how you want to take this conversation. These are learning algorithms that, by trial and error, end up training themselves how best to act, given what the market is doing, what the market has seen in the past, how the market has reacted to your own actions. That's the reinforcement aspect of it: you act, you receive information, and then you retrain and recalibrate. So trial and error is an important aspect of RL, which is reinforcement learning.
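The act-observe-recalibrate loop described here can be sketched in a few lines. This is a toy illustration, not a trading system: the action set, the reward function and every parameter are invented purely for the example.

```python
import random
random.seed(0)  # fixed seed so the sketch is reproducible

# Trial-and-error learning in miniature: the agent acts, observes a noisy
# reward from a stand-in "market", and nudges its value estimates toward
# what it just experienced.
ACTIONS = ["buy", "hold", "sell"]

def toy_market_reward(action: str) -> float:
    """Noisy stand-in for market feedback; a real environment is far richer."""
    base = {"buy": 0.1, "hold": 0.0, "sell": -0.05}[action]
    return base + random.gauss(0, 0.1)

def train(episodes: int = 5000, alpha: float = 0.1, epsilon: float = 0.1) -> dict:
    q = {a: 0.0 for a in ACTIONS}  # running value estimate per action
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)   # explore: the "trial" part
        else:
            action = max(q, key=q.get)        # exploit current beliefs
        reward = toy_market_reward(action)    # act and observe the outcome
        q[action] += alpha * (reward - q[action])  # recalibrate: the "error" part
    return q

q = train()
print(max(q, key=q.get))  # with these toy rewards, "buy" is learned as best
```

The same loop underlies far richer agents; only the state, action space and reward change.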

Jim Jockle:

So you're discussing how machines can learn to mimic human behaviors. Is that accurate?

Alvaro Cartea:

Well, let's focus mostly on financial markets. My view is very simple. If you look at history, at how things have been done in the past, you have people in the pit trading, doing things, and then compute power and the internet come along, and at some point what you end up doing is coding the computer up to do what you know how to do best, which is to trade. You code things up so that all the tricks you know are implemented, and so on. So it is not that the computer is mimicking the way you act; you initially train the computer to act in the best way you think it should be acting.

Alvaro Cartea:

That's part one of the film. The second part is where we are now: we have moved on to algorithms that learn things, through reinforcement learning, that perhaps you never thought about. It could well be that these deep learning or reinforcement learning algorithms are doing what you trained them to do, but they're so powerful, and they investigate so many avenues you never thought about, so many patterns in the data, interconnections between data and markets and so on, that they might stray away from what you would have thought was your own thinking, your own way of trading. But that's the beauty of it: you have all this power to do what you think you should be doing, and the machines will also investigate unexplored terrain that you hadn't thought about.

Jim Jockle:

So let's dive in on that a little bit. Trading in many ways is personal, right? Traders have their own views of the market, their own experiences, specific strategies for a particular asset class. So in training these models, how specific can it get to one person, regardless of how the machines are going to explore the avenues that perhaps an individual wouldn't? Is there a standard to which these models are trained, or can it get that personal, down to an individual?

Alvaro Cartea:

Okay. So "individual" can mean many things: one person, a trading desk or a whole firm. Clearly, if you have your own views, the way you design an algorithm, the way you code it up, the data you feed it to train it, and then the way you deploy it and reassess it will very much define how the algos behave and what they do. So in that sense, they're very personal.

Alvaro Cartea:

Say I develop an algo to make markets, for example, but I know that there's something very specific in this particular asset class that I'm looking at. So I do feed the algo the important things that I think will make this algorithm unique and competitive. This is the important thing. Of course, everybody will have an edge. If you look at cars, all cars are different, but they're all trying to win a race. They all will have an edge.

Alvaro Cartea:

Your footprint will be in the way you code it and what information you use to act. Clearly, you set the objectives, and by setting the objectives, what the algo might be able to give you, again going back to my initial point, is this: the algos are so powerful that they can try to achieve the objective, let's say maximize profits, and will explore anything that's allowable within the black box you have trained and coded up to achieve that objective. It could be making markets, it could be portfolio decisions, it could be trading corporate debt, but in the end the objective is very important. And the algorithm, as I normally say, has no moral compass. Algorithms are not trained to behave or misbehave. Algorithms are trained to achieve an objective under a set of rules, using the information you feed them and the information they draw out of the market, and then they run and try to do what you told them to do: make money.

Jim Jockle:

Now, in your research you have raised some concerns about machine learning and algos. We've spent a lot of time on this podcast talking about the power, but to what extent, in your research, are you talking about the algos potentially colluding in the markets? How serious is this threat? Could it manifest in real-world trading scenarios? Please share your thoughts here.

Alvaro Cartea:

Okay. So first of all, this research agenda is pretty much what some people here at the Oxford-Man Institute and I have been working on for a few years, four or five years. I've spent quite a long part of my career looking at algorithms, but the last four or five on the unintended consequences of learning algorithms. Let me split the research we do into two important parts: some is very theoretical and some is more empirical. On the theoretical side, we have asked the question: can algorithms collude? So first of all, we are focusing on unintended consequences that may not be good for the market. This doesn't mean that all algos will collude, but the first question is: can they collude? And we show, under very general assumptions, mostly based in game theory, that indeed algos can learn to collude. But what is collusion? Collusion is when people coordinate actions to achieve an outcome that might not be a competitive outcome, and an important thing is that they achieve this by a reward-punishment mechanism. If you and I enter into an agreement, but you deviate and misbehave, I'm going to punish you. And just because I punish you, we sustain the agreement. That is collusion. Can algos do this? That is the question. Well, yes. We show that when algos are trained to maximize profits and a few algos encounter each other in the market, they're not talking to each other, they're not trained to talk to each other, but the way in which they learn, the way in which they update their beliefs as to what's going on in the market, the way in which they refine strategies to maximize profits, means that the algos enter into paths that coordinate them into a collusive outcome, an outcome which is not competitive.
And let me elaborate a little more, because it's a very subtle point. These are pieces of code that people wrote, and those pieces of code say: I want to maximize profits, and these are the set of rules. Within those rules there's another rule that says: every time I act and I see the outcome of my own actions in the market, and everybody else's actions, I reassess and recalibrate the way in which I optimally behave. That alone is where the algos meet, and then they can coordinate with each other, because they're reacting to each other's actions, they're reacting to the environment, and that alone can take you onto a path that sustains an equilibrium which is not a competitive one. This is something we didn't have in the past. People need to ask themselves: why now and not 10 or 15 years ago? Well, 10, 15, 20 years ago, you would code up a machine and the rules were hardwired: you do this, and if you see this, you do that, and that's it. Now the rules have a rule to change behavior, to recalibrate or tweak parameters in a way that searches for the best possible outcome, and that is where the coordination into collusion might occur. So that's one piece of research where we show theoretically that this can happen. We also show an interesting aspect, which is that these machines, again, are designed to maximize profits, let's say.
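The mechanism just described, independent profit-maximizing learners that recalibrate in response to each other's actions, can be sketched as a toy repeated-pricing experiment. Everything here is illustrative: the payoffs, the one-period memory and the learning parameters are all invented, and whether the two agents actually settle on the high, non-competitive price depends on parameters and chance. The point is only that nothing in the code tells them to coordinate.

```python
import random
random.seed(1)

LOW, HIGH = 0, 1
# profit[(my_price, rival_price)]: undercutting a HIGH rival pays most once,
# but mutual HIGH beats mutual LOW, which is the classic collusion temptation.
PROFIT = {(LOW, LOW): 1.0, (LOW, HIGH): 3.0, (HIGH, LOW): 0.0, (HIGH, HIGH): 2.0}

def make_agent():
    # state = rival's last price; one value estimate per (state, action) pair
    return {(s, a): 0.0 for s in (LOW, HIGH) for a in (LOW, HIGH)}

def choose(q, state, eps):
    if random.random() < eps:
        return random.choice((LOW, HIGH))
    return max((LOW, HIGH), key=lambda a: q[(state, a)])

def run(steps=20000, alpha=0.05, gamma=0.9):
    qa, qb = make_agent(), make_agent()
    pa, pb = LOW, LOW  # last posted prices
    for t in range(steps):
        eps = max(0.01, 1.0 - t / 10000)  # explore a lot early, little late
        na = choose(qa, pb, eps)          # A conditions only on B's last price
        nb = choose(qb, pa, eps)
        ra, rb = PROFIT[(na, nb)], PROFIT[(nb, na)]
        # standard Q-update: reward now plus discounted value of the next state
        qa[(pb, na)] += alpha * (ra + gamma * max(qa[(nb, LOW)], qa[(nb, HIGH)]) - qa[(pb, na)])
        qb[(pa, nb)] += alpha * (rb + gamma * max(qb[(na, LOW)], qb[(na, HIGH)]) - qb[(pa, nb)])
        pa, pb = na, nb
    return pa, pb

print(run())  # whether (HIGH, HIGH) emerges depends on parameters and seed
```

Neither agent sees the other's code or communicates; any coordination emerges purely from each reacting to the other's past actions, which is the subtlety being described.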

Alvaro Cartea:

And then they learn signals, market signals, which are important. We know when volatility goes up, something happens. If you think of electronic markets, like Nasdaq, demand and supply pressure in the electronic book is important: it signals where prices might be going. So I use that as a signal to make decisions, and everybody does. It's a signal that most people in the market will use. I cannot think of a market player in equities, for example, who does not look at the demand and supply pressure in the electronic books. But now the algos know that that's an important signal.

Alvaro Cartea:

So we show that when algos learn that there's a signal that is important, and why is it important? Because it signals something about what may be happening, but it also triggers people's actions, because if you think it's an important signal, it's very likely that your neighbor or your competitor also looks at the signal to make decisions. Then these algos will learn, or will try, to rig the signal to their advantage. So algos, and this is important, although you do not train them to manipulate markets, will eventually find out that if they put fake orders in the book, they can alter the demand and supply pressure in a particular way. I'm referring to spoofing. That will generate some actions out there, and then they can take advantage of that. So that's another research agenda. Once you think about it, it is not a surprise that these algos that have no moral compass will try to give you the best outcome, even if they have to rig a signal. Now, a third research agenda that complements all of this is more empirical.
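The demand-and-supply-pressure signal mentioned here is often summarized as order-book imbalance. A minimal sketch of one common definition (not the only one), shown on made-up volumes:

```python
# Order-book imbalance: relative weight of resting bid versus ask volume.
def book_imbalance(bid_volume: float, ask_volume: float) -> float:
    """+1 means pure buy-side pressure, -1 pure sell-side, 0 balanced."""
    total = bid_volume + ask_volume
    return 0.0 if total == 0 else (bid_volume - ask_volume) / total

print(book_imbalance(800, 200))   # 0.6: strong buy-side pressure
print(book_imbalance(100, 900))   # -0.8: strong sell-side pressure
# A spoofer posting a large fake bid inflates this signal upward
# without any genuine intention to trade at that price.
```

Because many participants act on such a signal, an algo that learns to distort it, as described above, distorts everyone's inputs at once.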

Alvaro Cartea:

What we have seen is that some of these algos, whether deliberately or not, I just don't know, try to signal who they are to everybody else, or to some market participants. Let me give you an example. Imagine you, Jim, are a company trading. You're sending orders to the book to buy or sell assets, but the volume you choose every single time has a particular ending: the last two digits are a prime number, or the last two digits are always the same, or you rotate them, but there's a clear combination, so that eventually I might not know Jim by name, but I know that these orders are perhaps always coming from the same place. And I do the same, and then we can coordinate actions.

Alvaro Cartea:

Another way in which signaling seems to be happening is that the volumes you use and the volumes I use are so large in the book that whenever someone sees a very large limit order coming into the market, they say: this is very likely Jim. They don't know who you are, but they know that all of those large orders seem to always come from someone who behaves in a particular way, and they do the same. And then these algos kind of communicate in that way. And what is the outcome? Well, the outcome could well be that if you and I know who we are, then we coordinate into not cannibalizing each other, or we snipe retail orders but never touch each other. This is something we've seen in the data, and it's work in progress, but if you are the regulator and you have access to proprietary data where you can see all these things, it would be an easy way to start asking: okay, are algos communicating in different ways?
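The volume-signature idea can be illustrated with a small check of the kind a regulator with order-level data might run: do the last two digits of a participant's order sizes cluster far more than chance would suggest? The data here is fabricated for the example.

```python
from collections import Counter

def last_two_digit_profile(volumes):
    """Most common last-two-digit ending and the share of orders using it."""
    counts = Counter(v % 100 for v in volumes)
    top_value, top_count = counts.most_common(1)[0]
    return top_value, top_count / len(volumes)

# A hypothetical "signaling" trader whose order sizes always end in 17,
# versus a trader with ordinary, round-ish sizes.
signaler = [1017, 2517, 517, 10017, 317, 4217]
ordinary = [1000, 2500, 730, 10250, 340, 4160]

print(last_two_digit_profile(signaler))  # (17, 1.0): every order ends in 17
print(last_two_digit_profile(ordinary))  # no single ending dominates
```

A real screen would need statistical care (round-number sizes cluster for innocent reasons), but the detectable fingerprint is the same as in the spoken example.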

Alvaro Cartea:

And a lot of this could be inadvertent. At no point am I saying that anyone deliberately does these things. It may well be that some do, I don't know. It may be that some algos, in a very subtle way, are trained to do this, or they're not trained to at all, but the algos are so powerful that they will try, with no moral compass, to give you the best deal, to maximize profits, and find ways, which are not competitive, to achieve that outcome.

Jim Jockle:

You used the word black box, and it makes me wonder, and I do want to get to the regulatory side of this in a moment, but how are you beginning to understand this? What transparency are you seeing within the algos to get a better understanding of the decision-making, where you could potentially see the collusion? What kind of data are you looking at in terms of understanding the maturity of the learning process?

Alvaro Cartea:

Okay.

Alvaro Cartea:

So, first of all, very few people would have the data to make statements like the ones I've made. You would have to be the regulator or the exchange, with access to trader IDs, to start isolating this. Some of it you can eyeball; some of it, even without trader IDs, you can start seeing in weird patterns in the way volumes are quoted. So that's one aspect of it. If you have data with trader IDs, you may be able to explore a little more, and it's more transparent in that way for someone who has visibility of the data. Then the other question, which is on the periphery of this, is: okay, you don't have the data, but what makes these black boxes more prone to colluding or not colluding? What makes these black boxes more likely to misbehave? And for us, misbehavior is to inadvertently do things that should not happen in a competitive market. Well, if the number of players in the market is very low, if you have few market players or few black boxes trading, then it's more likely that you coordinate. If you and I are the main players trading a particular asset class, or a particular asset within an asset class, and we have these nice black boxes that have been very well oiled and the code is running well, it is more likely that you and I end up colluding than if there were five or ten people like us. So the number of players is a key determinant of whether these black boxes can collude. There's a more subtle one, which I don't know if we have time to get into, which is how you train these black boxes. So, I've said this before.

Alvaro Cartea:

An important aspect that people seem to forget is that all of these algos are trained. You use data to train them. I use historical data, you use historical data for yours, and then you work out how to set the ball rolling. These are algorithms that all need initial conditions to start trading. You will always pick, so Jim will pick, the ones that in the back test, in the lab, gave the best performance, and in my lab, in my sandbox, I pick the ones that gave me the best performance. But it is very likely that you and I choose starting conditions, initial conditions, that, more often than not, will coordinate us into collusion.

Alvaro Cartea:

So even if you don't have transparency of what's going on, or data, you know, from the way these algorithms have been coded up and the way they have been trained before deployment, that you can end up in collusive outcomes, or outcomes which are not competitive. An important subtle point, which depending on your background is interesting, is that you may have outcomes which are not competitive, so prices are much higher than they should have been, but they are not sustained by a reward-punishment mechanism, and that's a more difficult one for the regulator to attack. It's an outcome where we ended up quoting very wide spreads in the market, but we're not punishing each other if we deviate. It just happened to be an equilibrium that the machines were happy not to deviate from or undercut.

Jim Jockle:

Now, you've been in discussions with the UK Financial Conduct Authority and the US SEC. What role do regulators have in addressing these types of risks? And I think a better question is: are they prepared?

Alvaro Cartea:

Well, they're fully aware, clearly. An important point to make is that the regulator is also fully aware of all the good things that AI brings to the table. In this podcast we're concentrating on the unintended consequences we don't like, but everybody's aware of all the good things that may come with AI, and they're happy with that. So one of the problems they face is that you don't want to stop the benefits of this while looking for the unintended consequences.

Alvaro Cartea:

So how do they attack this? Well, they are aware, they are trying to understand the problem, they talk to people like myself, and they are trying their best, given the resources they have. We need to bear in mind that the manpower the regulator has is not the same as the financial industry combined. So it's a fight where the resources are not there.

Alvaro Cartea:

But the point about collusion or manipulation has always been on their agenda. So they're looking into this and finding a way to engage industry to look at it in a very constructive way. Those are the conversations I've had with them, and I think the industry is quite happy to engage because, as I said, many of these collusive outcomes are inadvertent. Nobody wants to be in court being told you have an algo that colluded, and you'll say, well, I never trained it to do that. But then where does the responsibility lie? In the end, you have to be made aware that your algos can inadvertently manipulate the market or collude. The responsibility will have to fall somewhere, and that's for the regulator to decide.

Jim Jockle:

I want to ask you a question that would be very, very unpopular with my colleagues, because I'm surrounded by them, but I'm going to ask anyway: can AI fully replace human quants, or how do you see the two working together? As I said that, one of them just passed my office, and I'm trying to have them not read my lips.

Alvaro Cartea:

Well, you know, that's a great question, because as an academic, you also say, well, yeah, they're going to replace us. My take on this is: actually, no. If anything, it's a tool that's going to make us better. Maybe the number of people you need to make decisions shrinks.

Alvaro Cartea:

It could well be. But I was at a conference last week here in Oxford, organized by the Man Group, and one of the top quants, Richard Barclay, and I'm sure he wouldn't mind me saying his name, gave a very interesting talk where he says: look at the following. When people used to fly planes 20, 30, 40 years ago, pilots were really doing stuff for a good number of hours on a transatlantic flight. As time has evolved, the time pilots spend actively flying has become very, very small, but you still need the pilot. And then the question is: okay, we will not be replaced. Our role, as a proportion of the time spent, will be smaller and smaller. However, it will be crucial, especially at times of duress, and you will never be able to encode that in an algorithm.

Alvaro Cartea:

If something goes dramatically wrong, someone has to intervene, and I think the human intervention aspect is quite an important one. That's the point Richard was making: when shall we intervene? If you could code it up, it was already coded in the algo. So it could well be that we have fewer people spending fewer hours, but people have to write the code, right? Unless you expect the robots to also write the code, which would be an interesting thing to see play out, someone has to be supervising in case it goes completely mad. So yes, some people might be out of a job, but I think it will be a natural and gradual thing as people retire. Imagine back in the day when you didn't have Excel to do the accounts or run the numbers. So the answer is: well, I think there's space for us.

Jim Jockle:

My colleagues are going to be very happy to hear that. So let's look to the future. What kind of skills should quants, or aspiring quants and financial professionals, have around AI to be successful today, or even to enter the industry as they're emerging from academia?

Alvaro Cartea:

Yeah, I think it's a wonderful question, because I'm always in favor of helping people who are making decisions, especially at a very young age. When I was trained as a financial mathematician, the beauty of what you were doing relied mostly on stochastic calculus, stochastic analysis, partial differential equations. I'm talking 25 years ago, and the elegance was in producing equations, problems that you could solve in closed form. So you had an equation and you used it very well. That was wonderful, and that's how the industry was run many years ago.

Alvaro Cartea:

As time has evolved, we have moved away quite a bit from that elegance, away from some of those techniques and more into data science: getting your hands dirty with the data, rolling your sleeves up, learning more about statistics, about neural networks and machine learning. So the skill set has changed. I don't think the brain power you need is different; you just need to train it differently. And you can see it in all the programs in mathematical finance, in New York, in Chicago, here in Oxford, in London: there's been a great shift in the emphasis of what's done, and statistics has become a more dominant aspect in all of these courses.

Alvaro Cartea:

Now, what is the trade-off? Before, we had elegance and interpretability. When you wrote all these equations down, you had closed-form solutions; you knew how to interpret results. The trade-off is that now you might be doing things but you can't really interpret very well what the black box is doing. And then the challenge will be: okay, now you've learned all this statistics and all this machine learning and all this AI, can you tell me why things are happening? That's a question we'll face, one that we could answer in our time, because the equations would speak to you. Now, with the black boxes and so many variables and these deep, multi-layer networks and so on, it will be much more difficult. So that's gradually changing, and I think the difficulty is on us to retrain and relearn, or to learn from scratch things that we never thought were as important.

Jim Jockle:

But you talk about the elegance, and I think of, internally, our own roadmap. We're always making enhancements to local stochastic volatility models or Bergomi models. There's always an extension required by the market as it relates to that elegance. But is the elegance at risk of being lost in a shifting syllabus?

Alvaro Cartea:

No, I don't think so. I think anybody who's going to be learning about this will be taught the Black-Scholes equation and how beautiful it is: the hedging argument, how it connects to the heat equation, the closed-form solution, because that gives you a stepping stone to understand how things work. When you have equations, it's like having a map that can take you from A to B, and you know why you got there, whether you took longer or shorter, or whether you deviated. You know what's going on.
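The closed-form solution being referred to is the Black-Scholes price of a European call, which fits in a few lines; every term has an interpretation, which is what the map metaphor points at.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    """Black-Scholes price of a European call.

    S: spot, K: strike, r: risk-free rate, sigma: volatility, T: years to expiry.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    # spot times exercise probability term, minus discounted strike times its term
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money call: spot 100, strike 100, 5% rate, 20% vol, 1 year
print(round(bs_call(100, 100, 0.05, 0.2, 1.0), 4))  # ≈ 10.4506
```

The interpretability he describes is visible here: each input maps to a named economic quantity, unlike the weights of a deep network.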

Alvaro Cartea:

When you start extending these models, plugging in more powerful ways of learning what the market is doing, extending the model in the ways you need to in order to remain competitive, and you go into more of the machine learning aspects, then you lose that sort of elegance. Not elegance exactly, but the roadmap is not as clear.

Alvaro Cartea:

You go from A to B, but it's very difficult to say whether it's because you took a shortcut or because this thing helped you up the hill. Some things will be much more difficult, or you will not know how your results would change if something slightly different happened that you hadn't taken into account, whereas before you could. But of course, before, you were in a straitjacket: you had a map that put you in a straitjacket, and it was very difficult to stray away, and that's why it was easy to say, well, I know what's going on. The moment you can meander around the world in different ways, it becomes a lot more difficult to pinpoint exactly what's causing what, or what is connected to what.

Jim Jockle:

A final question I have for you: we've talked specifically around trading and around risk. From a systemic-risk perspective, what role do you see AI playing in navigating future market crises, or unexpected black swan events, or things of that nature?

Alvaro Cartea:

That's a great question, and a lot of people ask me that. Clearly, there's a lot of effort being put into asking: is the market about to break down? Is the market going into disarray? Can I use all of this data and compute power to at least know that I'm entering into something like that? People are working on that, and hopefully it will allow us to soften these crises a little, to soften the shocks that are coming and the way they rip through the market. So that's the positive and optimistic aspect of it. But if we go back to my initial point, that some of these boxes learn how things are interconnected, you may also think that these crashes, these weird things, whether they're short-lived or long-lived, it doesn't matter, may also be caused by some of these black boxes. So it may well be that we end up seeing black swans, pink swans and blue swans, some of them painted by boxes that found a way to shift the market, or give it a punch, in a way that the no-moral-compass black box thinks is the best way to achieve the objective you set for it.

Jim Jockle:

Well, this has definitely been a very colorful conversation. Sadly, we've reached the final question of the podcast. We call it the trend drop. It's like a desert island question: if you could only watch or track one trend in AI and capital markets, what would it be?

Alvaro Cartea:

I would love to see how these deep learning and reinforcement learning models impact the market, and also language models. Will the markets become more efficient, or less efficient? Within five or ten years' time, we should know how the film unraveled.

Jim Jockle:

Well, I want to thank you so much for your time Fascinating conversation. Thank you so much.

Alvaro Cartea:

Well, great talking to you, Jim. I look forward to talking to you again in the future.

Jim Jockle:

Absolutely. Thanks so much for listening to today's episode, and if you're enjoying Trading Tomorrow: Navigating Trends in Capital Markets, be sure to like, subscribe and share, and we'll see you next time.