Trading Tomorrow - Navigating Trends in Capital Markets
Welcome to the fascinating world of 'Trading Tomorrow - Navigating Trends in Capital Markets,' where finance, cutting-edge technology, and foresight intersect. In each episode, we embark on a journey to unravel the latest trends propelling the finance industry into the future. Join us as we dissect how technological advancements and market trends unite, shaping the strategies that businesses, investors, and financial experts rely on.
From the inner workings of AI and ML to the transformative power of blockchain technology, our host, James Jockle of Numerix, will guide you through captivating conversations with visionaries who are not only observing the future but actively shaping it.
The Impact of AI on Capital Markets with Bin Ren
Unlock the future of finance with Bin Ren, Founder & CEO of SigTech, as he reveals the transformative potential of AI in capital markets. Discover how AI is revolutionizing financial decision-making processes by enhancing productivity tools for professionals in investment management, trading, and risk management. Learn about the critical role of a robust data foundation in building AI-driven systems and the intricate stages of pre-training and post-training large language models. Bin shares practical examples to illustrate how AI can swiftly process and summarize complex information, potentially altering how financial decisions are made.
Welcome to Trading Tomorrow - Navigating Trends in Capital Markets, the podcast where we take a deep dive into technologies reshaping the world of capital markets. I'm your host, Jim Jockle, a veteran of the finance industry with a passion for the complexities of financial technologies and market trends. In each episode, we'll explore the cutting-edge trends, tools and strategies driving today's financial landscapes and paving the way for the future. With the finance industry at a pivotal point, influenced by groundbreaking innovations, it's more crucial than ever to understand how these technological advancements interact with market dynamics. In today's episode, we're exploring one of the most transformative forces in finance: artificial intelligence. AI is reshaping capital markets, with buy-side and sell-side firms adopting innovative technologies to gain an edge. But with all this innovation comes the need for a robust data foundation. Today, we're going to explore how AI is not only boosting productivity, but also changing the roles of financial professionals, and what the future holds as AI becomes more deeply integrated into the financial landscape.
Speaker 1:To help us navigate this topic, we're joined by Bin Ren, founder and CEO of SigTech, a company at the forefront of AI innovation in finance. Before starting SigTech in 2019, Bin was the chief investment officer of the systematic investment group at Brevan Howard, where he led the quantitative investment funds. He began his career at Barclays as an equity exotics trader. Welcome, Bin. Thank you for having me today, Jim. Absolutely, pleasure. So, just to start off, why don't we talk about SigTech?
Speaker 2:So SigTech, we have been around since 2019, so over five years. We spun out from a hedge fund called Brevan Howard, and we really focus on building and shipping the best productivity tools for people who work in the front office of the capital markets. Our users tend to make very important financial decisions in investment management, trading or risk management. In their daily life, the speed and quality of those decisions matter. That's why we want to build the best tools to help them.
Speaker 1:And what is the key role of AI in the platform?
Speaker 2:So we started five years ago.
Speaker 2:So a big part of our product is to help people to do data-driven analysis.
Speaker 2:So in finance, it's very much about numbers and time series, but also textual data you get from different sources on a daily basis.
Speaker 2:So AI has really helped us in the last two years to lower the hurdle in terms of how much programming and data-analysis knowledge a user has to have, letting them do what used to be the specialist job of a data analyst.
Speaker 2:So I think AI has made a huge difference in the last two years: it can write very high-quality code on behalf of a user, and it can handle a lot of fairly complicated so-called natural language processing problems. For example, Jay Powell just gave a speech at Jackson Hole, a very important one, saying central banks are not ready to cut rates. The speech was published the moment he started speaking, and our users were able to immediately, through large language models trained and fine-tuned by us, ask questions like 'summarize this speech for me right now, focusing on the potential for rate cuts' and get an answer in five seconds, even way before the speech was finished. I think this is one of the many interesting use cases that AI has been able to deliver.
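To make this concrete, here is a minimal sketch of that kind of on-the-fly summarization using the OpenAI Python SDK. The model name, prompt wording and file name are illustrative assumptions, not SigTech's actual pipeline:

```python
# Minimal sketch: summarize a just-published Fed speech with an LLM.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model choice and prompt are illustrative, not SigTech's actual setup.
from openai import OpenAI

client = OpenAI()

def summarize_speech(speech_text: str) -> str:
    """Return a short summary focused on the outlook for rate cuts."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "You are a capital-markets analyst. Be concise."},
            {"role": "user",
             "content": "Summarize this speech, focusing on the potential "
                        "for rate cuts:\n\n" + speech_text},
        ],
        temperature=0,  # keep the summary as deterministic as possible
    )
    return response.choices[0].message.content

# Usage (hypothetical file): print(summarize_speech(open("jackson_hole.txt").read()))
```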
Speaker 1:I guess gone are the days of everybody huddling around CNBC and the floor going quiet. That's amazing. So the AI product is backed by robust data, as you mentioned. Perhaps you can explain the importance of having a strong data foundation, especially when building AI-driven systems.
Speaker 2:I think if we think about how large language models work, there are multiple stages of training. The first stage is called pre-training. Pre-training is where we have a huge corpus of data, we're talking about trillions of so-called tokens covering as many different domains as possible, to train a very large neural network to essentially become a very competent and knowledgeable generalist. And then there's the second stage, called post-training, which is to align this large language model with human objectives. When we interact with the model, we want it to respond in a way that aligns with our style of conversation and our way of thinking, rather than just predicting the next token, which is what pre-training is about. In those two stages, clearly the input, which is the data, is absolutely essential.
Speaker 2:If anything, I would say that the architectures of the neural networks are very well known. Everybody is using roughly the same architecture; there are bigger ones and smaller ones, but roughly the same family. So the difference in performance really comes down to three things: the size of the model, the amount of compute you have to use to train it (a larger model requires more compute), and finally the data, how much data you have, how much knowledge you can actually gather and clean to train the AI. That's how we get to the large language models. But to build applications on top of them, we have to again provide domain-specific data, such as financial market news, documents and research, and then time series. So everything, frankly, is about data. The AI model is the engine, but data is the fuel, data is the foundation, everything really.
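As a toy illustration of the pre-training objective Bin describes, predicting the next token, here is a self-contained bigram counter in Python. Real pre-training fits a large neural network to trillions of tokens; this sketch only shows the shape of the idea:

```python
# Toy sketch of pre-training's core objective: next-token prediction.
# Real models fit huge neural networks to trillions of tokens; this bigram
# counter only illustrates learning P(next token | previous token).
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count next-token frequencies for every token in the corpus."""
    tokens = corpus.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, token: str) -> str:
    """Return the most frequent continuation seen in training."""
    if token not in counts:
        return "<unk>"  # the model is 'ignorant' outside its training data
    return counts[token].most_common(1)[0][0]

model = train_bigram("the fed holds rates the fed cuts rates the fed holds rates")
print(predict_next(model, "fed"))  # -> 'holds' (the more frequent continuation)
```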
Speaker 1:You know, it's funny, you make me think about ongoing training, right. Back in the old days, and I'm an old guy, I started out in municipal finance, and when we had issues in the market, we would go back to the New York City bankruptcy of 1978 or the Texas mortgage defaults in history to get a better understanding of how markets are going to perform today in a particular crisis. Given the dynamic of the markets, the volatility, the speed at which markets are trading, how are these models getting updated? Are they just constantly learning through data input and data flow on a regular basis and the interaction, or how do you continuously train?
Speaker 2:train? That's a great question. So, again, if we go back to kind of the stages of training, so the pre-training, there's a cut-off time. So if anyone had used ChaiGBT, you would remember the cut-off time originally was like November 2022 and then updated to like June 2023. So the pre-training corpus of knowledge has a cutoff time.
Speaker 2:But what's going on these days is that the AI companies do not just do a one-time pre-training anymore; they are continuously pre-training, just with different checkpoints. Every day, or every hour or so, they will have a checkpoint of the entire model. So they are generating version-controlled large language models continuously, and they will be updating the corpus of training data on, say, maybe a monthly basis. Because, to be honest, there may be a lot of news, but the fundamental knowledge doesn't grow on a daily basis, so they probably can update it on a weekly or monthly basis. But even when we use a large language model with a cut-off time of six months ago, what we can do is use the fact that the large language model also has a context length. These days a model's context window is like its memory, the size of its short-term memory.
Speaker 2:So today OpenAI models have something like 128K tokens, which is quite big.
Speaker 2:It's not that big but it's decent.
Speaker 2:It can fit quite a few research papers, like short-term memory, so you can actually just get the latest data you're interested in, put it into the context window, which is equivalent to the short-term memory, and ask the large language model to do inference, combining the pre-trained knowledge and the in-context knowledge.
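A minimal sketch of that 'put the latest data into the short-term memory' idea: pack recent documents into a fixed token budget and prepend them to the question. The chars-divided-by-four token estimate and the 128K budget are rough assumptions:

```python
# Sketch: pack recent documents into a model's context window (its
# "short-term memory") before asking a question. Token counting here is a
# crude chars/4 estimate; a real system would use the model's tokenizer.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def build_context(question: str, documents: list[str],
                  budget: int = 128_000) -> str:
    """Fit newest-first documents into the token budget, then add the question."""
    remaining = budget - estimate_tokens(question)
    packed = []
    for doc in documents:  # assume documents are ordered newest first
        cost = estimate_tokens(doc)
        if cost > remaining:
            break
        packed.append(doc)
        remaining -= cost
    return "\n\n".join(packed) + "\n\nQuestion: " + question

prompt = build_context("What did the latest CPI print imply for rate cuts?",
                       ["CPI release: ...", "Fed minutes: ...", "Older note: ..."])
```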
Speaker 2:So that's how people do it. And then there's another way, which is actually very important for finance: tool calling. A lot of knowledge is only accessible behind API services. It could be some calculation on time series, or some more complicated data crunching, say generating a risk report. Those systems keep running; we're not making all the IT systems obsolete. But you can say, hey, when we ask about certain specific problems, instead of trying to figure it out yourself, which is impossible for the large language model by itself, try calling this API on the fly, fetch the result returned by that service and use it as part of the response. So in that case the response is also always up to date.
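Here is a minimal sketch of the tool-calling loop Bin describes: the model emits a structured request naming a tool, the application executes the matching service, and the result is folded back into the response. The tool names and return values below are hypothetical stand-ins for real services:

```python
# Sketch of tool calling: the LLM emits a structured request like
# {"tool": "risk_report", "args": {...}}; the app executes the real API
# and returns the result for the model to fold into its answer.
# The registry entries below are hypothetical stand-ins for real services.
import json

def risk_report(portfolio_id: str) -> dict:
    return {"portfolio": portfolio_id, "var_95": 1.2e6}  # placeholder numbers

def latest_price(ticker: str) -> dict:
    return {"ticker": ticker, "price": 101.5}            # placeholder numbers

TOOLS = {"risk_report": risk_report, "latest_price": latest_price}

def dispatch(tool_call_json: str) -> str:
    """Execute the tool the model asked for and return the result as JSON."""
    call = json.loads(tool_call_json)
    fn = TOOLS.get(call["tool"])
    if fn is None:
        return json.dumps({"error": f"unknown tool {call['tool']!r}"})
    return json.dumps(fn(**call["args"]))

# The model's (illustrative) output -> the fresh data it could not know itself:
print(dispatch('{"tool": "risk_report", "args": {"portfolio_id": "P-123"}}'))
```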
Speaker 1:And in terms of prompts, right, I always think about the layperson, myself included in that regard. Everyone I meet has different levels of prompting skills, some very good, some absolutely horrible. How do you manage prompting? Is it an education issue? Are you doing some secret prompting behind the scenes so your front office users can ask some really dumb questions and get meaningful answers? How are you dealing with that?
Speaker 2:Yeah, that's such a great question, Jim. There are two parts to it. The first part is we do a lot of prompting in terms of the system instructions, because when we build AI agents to specialize in different parts of the capital markets, we have to make sure each specialist behaves like a specialist. So we use very elaborate system instructions to make sure they do that. People may think that prompting is just a few sentences, but in some of the more elaborate AI agents we've built, the prompts can be several pages. There is a structure to the prompts: there's a description, there's an objective, there are styles, and then we have to give examples, both positive and negative. It's pretty comprehensive. It's less like a prompt and more like a little mini course on how to behave, more like an agent handbook: follow this handbook. And then the user has to prompt, and I think that's actually very underappreciated, because one thing we have seen with AI systems is actually very ironic. The AI system is supposed to be quite user-friendly, because the user interface is just having a chat. It cannot be simpler than having a chat.
Speaker 2:But what happens is that before large language models, the way we thought about software was entirely deterministic. We click a button, we know exactly what that button is supposed to do; we click it, it does the same thing over and over again. If we click it once and it doesn't work, we think it's broken. That's our intuition about software. But with large language models it's non-deterministic, it's statistical.
Speaker 2:You ask the same question, you can get slightly different answers, even if the gist of the response is the same. But when people speak to the large language model, they sort of get confused. They're like, okay, should I try to be clever in my question? Should I be more specific or more open-ended? Or sometimes they just intuitively try to test the intelligence of the system: okay, let me ask the most difficult question I can think of, just to see what happens. And so we tell them, look, this is a tool.
Speaker 2:So think about the question you would ask a human analyst, or a human junior analyst, so that you can get something useful and productive out of that conversation. Don't try to make it difficult. You're not here to try to embarrass the system; the system will get cleverer over time. The objective is to get something productive and useful out of the conversation. In general, just use common sense: the more specific the questions are, the more specific the answers. If you assign a task to a colleague, you're not going to ask some very open-ended philosophical question, because most likely the colleague will ask, what do you mean exactly, right? So I think those are the good practices to follow when we use large-language-model-powered applications. Just use common sense. A better, more specific question will get us a better, more specific answer.
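To picture the 'agent handbook' structure described above, a description, an objective, styles, and positive and negative examples, here is a minimal sketch of assembling such a system instruction. All the content strings are invented placeholders, not SigTech's actual prompts:

```python
# Sketch: assemble a multi-section system instruction ("agent handbook")
# from a description, objective, style, and worked examples.
def build_system_prompt(description: str, objective: str, style: str,
                        good_examples: list[str], bad_examples: list[str]) -> str:
    sections = [
        "Description:\n" + description,
        "Objective:\n" + objective,
        "Style:\n" + style,
        "Good examples:\n" + "\n".join(f"- {e}" for e in good_examples),
        "Examples to avoid:\n" + "\n".join(f"- {e}" for e in bad_examples),
    ]
    return "\n\n".join(sections)

prompt = build_system_prompt(
    description="You are a central-bank communications analyst.",
    objective="Summarize policy speeches with a focus on rate guidance.",
    style="Concise, neutral, cite the exact sentence you rely on.",
    good_examples=["'Powell signalled patience; no cut implied.'"],
    bad_examples=["Speculation not grounded in the text of the speech."],
)
```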
Speaker 1:That's how my staff responds to my emails when I give them a task: What do you mean, Jim? Come on. You do make me think of one question, though. You said many of the architectures are well known, they're the same; data is the fuel. But what is the differentiator? What makes one AI system proprietary compared to another? Is it the training? Is it the data? Because a lot of the technology is open source. So where's the magic?
Speaker 2:The magic, I think... If the audience wants to know where the magic is, I recommend reading the latest technical report by Meta, from when they released the Llama 3.1 open source model. There's this 100-to-200-page paper describing exactly how they did it. The magic is: there's no magic. You just have to do it at a huge, huge scale. We're talking about, you know, like 30 trillion tokens. We're talking about like 33,000 GPUs in a data center, and the data center overheats and every day GPUs fall over.
Speaker 2:It's more of an engineering problem now than a research problem in terms of the neural network architecture. It's literally just building it bigger and bigger. Up until now, the biggest data centers are about 100 or 200 megawatts, and people are thinking about, okay, how do we build a gigawatt data center? Where do we get the power? How do we do the cooling? 5x the capacity does not mean 5x the complexity; it's probably much higher than that. So I think now it's mostly an engineering problem.
Speaker 1:Well, I would also argue it's three words: small fusion reactors. Absolutely, it's certainly a very hot topic. So let's stay on magic for a second. In August, SigTech introduced a new AI-backed product called MAGIC. Can you tell us a little bit more about what your MAGIC does?
Speaker 2:magic does Our magic coincidentally stands for multi-agent generative investment co-pilot. So it's our application, which is made up of a team of AI agents. Each agent is a specialist in a specific domain of the financial market. So we have an agent analyzing, say, central bank's statements, press conferences and speeches. We have an agent analyzing the equity market, while analyzing the microeconomic indicators. One is like a quant strategist that can turn your trading or investment ideas into Python code. Do the backtesting, give you the results on the fly.
Speaker 2:So we build all these different agents, each doing specific things, and try to make sure they do them well. And then the user is like having a group chat with this team of AI experts. You ask a question, and they will first come up with a plan. They will say, oh, to answer your question, Jim, we need a plan made up of 12 steps. Step one, this agent is assigned to do it; step two, maybe a different agent is assigned. So, step by step, it breaks down your question, your problem; each step is assigned to the right agent, they work in collaboration, and then the entire team's output is synthesized and given to you.
Speaker 2:So this is the product. Can the agents talk to each other? Yeah, the agents talk to each other. I think what we managed to figure out are two things. One is how to build these specialist agents in finance, and the second is how to orchestrate, aka how to manage your AI team: how can they pass information back and forth, what kind of context do they have to share, how do you assign the right agent to the right task? So that's what SigTech MAGIC is about.
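To make the plan-then-delegate pattern concrete, here is a heavily simplified orchestration sketch: a plan is a list of steps, each routed to a named specialist, with a shared context accumulated along the way. The agent names and the synthesis step are illustrative assumptions, not SigTech's implementation:

```python
# Sketch of multi-agent orchestration: break a question into steps, route
# each step to a specialist agent, share context between them, synthesize.
# Agents here are plain functions; real ones would be LLM-backed services.
from typing import Callable

def fed_agent(task: str, context: dict) -> str:
    return f"[fed] analysis for: {task}"

def quant_agent(task: str, context: dict) -> str:
    return f"[quant] backtest result for: {task}"

AGENTS: dict[str, Callable[[str, dict], str]] = {
    "fed": fed_agent,
    "quant": quant_agent,
}

def run_plan(plan: list[tuple[str, str]]) -> str:
    """Execute (agent_name, task) steps in order, accumulating shared context."""
    context: dict[str, str] = {}
    for agent_name, task in plan:
        output = AGENTS[agent_name](task, context)
        context[f"{agent_name}:{task}"] = output  # later steps can read this
    return "\n".join(context.values())  # crude stand-in for synthesis

answer = run_plan([
    ("fed", "find phrase occurrences in press conferences"),
    ("quant", "compute next-week Treasury returns"),
])
```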
Speaker 1:Wow, that's amazing. And have the agents started their own language yet, or taken over the world, or started building their own portfolios, or has anything weird happened?
Speaker 2:I think today, given they are powered by the current large language models, probably not AGI yet. So we can't just ask a question like, oh, make me a billion dollars, do it now. But it's certainly proving to be, first, very versatile and, second, very useful, and we're actually quite surprised by some of the output. Because the whole system is such that the problem space the team can solve grows every time we add a new agent or the agents become better, sometimes we get surprised by the response. We can't test all the use cases, but I remember recently, when we were doing a live demo, the customer just said, okay, can you show me this:
Speaker 2:Whenever Jay Powell said some phrase in a press conference in the last five years, how did the US Treasury market behave in the following week? We had never tried that before. Literally, we were just typing whatever the customer said on the call, and it just worked. The agent analyzing Fed speak and the other agents got hold of the timestamp of the speech, figured out the right week to fetch the numbers, and everything just kind of worked. So we sometimes surprise ourselves. But the beautiful thing is, every time a new model comes along, if there's a big jump in model capabilities, all these agents almost automatically become smarter.
Speaker 1:So obviously these agents are finishing tasks in seconds, compared to humans who have to sit and read transcripts, overlay deep thinking, and deal with the disruptions of email and phone calls and some colleague needing sugar for his coffee. Perhaps you could elaborate on this vision and how these agents could be a game changer for financial institutions.
Speaker 2:Yeah. First, we do not believe that AI agents will replace front office users. We simply don't think that will happen, certainly not in the near term, because financial services are heavily regulated and the regulators demand that there is a responsible person for the job, so a human always has to be in the loop. I guess a good analogy would be: despite how little a commercial pilot actually flies the plane, we still have two in the cockpit. And, if anything, if you look at the evolution of airplane navigation technology over the decades, as the technology gets better, instead of having fewer pilots we have more pilots, because the cost of flying has come down, so the scale of the airline business goes up and the pie has become bigger. There are more humans in the loop, more people are employed, we fly more customers, even as the per-person cost has come down. So it's quite interesting. I think we'll probably see something similar in finance, because we hire very smart people in this industry.
Speaker 2:Who actually wants to sit there on a Friday and spend six hours reading through all the speeches given by Jay Powell in the last five years? I mean, is that interesting? No, it's not really interesting. Is it intellectually stimulating? Probably not. Do you want to type up a summary in a Google Doc or Microsoft Word covering all those speeches? Probably not. So I think a lot of these essential but repetitive, boring tasks can certainly be automated. That gives people more room for more creative, deeper and more interesting jobs that we probably didn't have the bandwidth to even think about, let alone do. I think that will open up new possibilities.
Speaker 1:Well, that's definitely going to change the role of an analyst coming out of college. You mentioned highly regulated industries: financial institutions, obviously banking, are much more highly regulated than, say, the buy side. And traditionally there's always been an adoption curve where the buy side adopts faster than the sell side, which just seems to be a lot slower. Are you seeing that tradition of the buy side being more advanced than the sell side continue to play out, or are both sides of the fence, if you will, adopting AI at a similar speed?
Speaker 2:I think the buy side tends to have much smaller institutions, and smaller institutions do tend to move faster. On the buy side the adoption is certainly much faster than on the sell side, because the buy side is normally under a lot of pressure to beat the market and generate returns. It's competitive. If they don't perform on a quarterly basis, investors may pull their money. There's just a lot more competitive pressure on the buy side, so they are more open-minded and quite keen to look into anything that gives them an edge.
Speaker 2:On the sell side, the nature of the business is that you are providing a service, right? So competition exists, but it's much less brutal.
Speaker 2:So I think the catalyst for adopting technology like AI there comes more from operational efficiencies. Because in banking, on any given day, investors are asking about return on equity and your profit margins; it's more about business-performance-level metrics. Something like AI can help you with cost control, especially when, for example, in a down cycle, budgets get cut but people still need to deliver the same amount of output. There are not many other knobs you can turn to actually make that happen. People can work harder, but how much harder can people work? AI technology may give that boost. Whereas in an up cycle, in a bull market, what tends to happen is the banking industry can't hire enough people to do the business. So it's the cyclical nature of the business versus how expensive human capital is, and that does suggest that AI technology can be one of the very few things that can actually modulate this mismatch between how much capacity you need and how much cost you can carry. So, I think, just very different dynamics.
Speaker 1:We touched on regulation, on being highly regulated. The concept of multi-agent systems seems to be the path forward, right? In a lot of conversations I have, it's no longer just one big self-service large language model; it's specialized, vertical multi-agents that can interact. What challenges is that model going to bring? Because it seems like a lot of regulators still haven't even figured out the monolithic large language models, and now you're dealing with a whole series of them. Is this going to curtail adoption further, even though the systems are smarter?
Speaker 2:This is super interesting. First, let me describe why the multi-agent architecture is the future; I do believe it's the future. I think divide and conquer in this case is certainly the way to go, because it's like a modular design for software. Nobody is going to write the most complicated system in one file with, like, 1 million lines of C code. You have to break it into modules, each one doing a specific thing, so you can test it.
Speaker 2:So if we have this one giant large language model that does everything, it means, A, this model has to be very big, so that it can be an expert in so many different domains. And then this model has to be highly advanced, because now it has to learn, for example, how to use 10,000 different tools: given a question, how to select the right tools out of, say, 10 potential tools, and how to use them in the right order, which is very complicated because the complexity goes up quadratically. And also, the bigger the model, the more difficult it is to interpret the output, because basically the black box has just gotten bigger. And the last bit: the bigger the model, the more expensive the inference becomes to run, and the latency becomes much higher. So it just doesn't scale very well. Whereas when we break it down into specialists, each can be run by a much smaller model, so it's faster, it's cheaper, you can test it, and you have more possibility in terms of explaining what each one of them is doing.
Speaker 2:And what we do here at SigTech is that we actually keep track of all the interactions among these agents. So when I ask a question and I can see my team working on it, we track everything. We track all the tasks they work on. We track all the actions each agent takes to work on each task. We track all the contexts, the conversations they have among themselves. With every output there's a whole graph of team interactions, all recorded. We can actually go back, review, rewind and see how the decisions were arrived at by different agents at different times, and how they came together. So that actually offers more visibility into what happened.
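A minimal sketch of that kind of audit trail: record every agent's task, action and output with a timestamp so the decision path can be replayed later. The record fields are assumptions about what such a log might contain, not SigTech's actual schema:

```python
# Sketch: record every agent interaction so decisions can be reviewed and
# replayed later. Field choices here are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InteractionRecord:
    agent: str
    task: str
    action: str
    output: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class AuditTrail:
    def __init__(self) -> None:
        self.records: list[InteractionRecord] = []

    def log(self, agent: str, task: str, action: str, output: str) -> None:
        self.records.append(InteractionRecord(agent, task, action, output))

    def replay(self) -> None:
        """Print the decision path in the order it happened."""
        for r in sorted(self.records, key=lambda r: r.timestamp):
            print(f"{r.timestamp:%H:%M:%S} {r.agent} | {r.task} | {r.action}")

trail = AuditTrail()
trail.log("fed", "summarize presser", "called transcript API", "dovish tone")
trail.log("quant", "backtest idea", "ran Python backtest", "sharpe 1.1")
trail.replay()
```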
Speaker 1:All right, you just gave me 15 more questions, and my producer is going to be yelling at me, but I always like to argue that behind everything new is something old, right? So arguably we're talking about the same basic tenets as a microservices-type architecture, except with agents. But within that microservices architecture it's easy to find fail points. With agents, to what extent can you determine if one is hallucinating or not? And then a second part of that question, as it relates to your monitoring and looking at the decision-making: as arguably a lot of professionals in AI right now are more, let's call it, IT-developer-based, how much domain-knowledge overlay is critical for that software engineer or professional to be able to understand that decision-making, especially within a specialized agent?
Speaker 2:Yes, it's fascinating. I think hallucination is certainly one of the most important topics for large language models. I do personally find the word hallucination a bit unfortunate, because I feel like it suggests something different from what actually happens. What happens with so-called hallucination is that I ask a question, and it happens that the large language model is not equipped to answer my question, but it's under pressure to give me a response, because I'm trying to have a conversation here. So it's under pressure to say, hey, let me give you something that at least sounds or looks coherent and logical. It's trying its best to come up with a response from its limited knowledge base. That's basically what happens. To be frank, I feel like hallucination is not the right word for this; we actually have a perfect word for it: it's called bullshit. That's the perfect word. If you ask me something I don't know and I feel like I'm under pressure to give you something, I have to say something, so I'm just going to try my best to BS. Sometimes I do it better, sometimes I do it worse, but that's how I give it a go. Sometimes you catch me, sometimes you don't. But that's exactly what happens with large language models.
Speaker 2:I think there are a few things we can do here, and we have done them. The first is very basic: in the prompt you can say, hey, if you don't know what to do or what to say, don't try to respond; instead, ask me for clarification. You can change the system instructions to give this guideline, and that can help with some cases. And then the second thing we can do is so-called grounding. Grounding is like this: pretend I'm the large language model. You ask me a question; it doesn't matter whether I know the answer or not from my pre-training.
Speaker 2:For every question you ask me, Jim, we are going to do a semantic search in an expert database to fetch some relevant documents or paragraphs as the context. So whenever I answer your questions, I'm always given a list of references. That extra context retrieved from an expert database is going to ground me, so that even if I know the answer, the extra context cannot be harmful, and if I don't know, that extra context can be the difference between me giving you something completely made up and something actually useful. And the third is that the users, especially in financial services, want to be able to decide whether they can trust the output of large language models.
Speaker 2:So citations or references are a very important part of the response. We have tuned our models to always try to provide comprehensive references and citations wherever possible, so the user can really do some verification themselves if they want. I think those are the steps. On hallucination in general: the better the model, the bigger the model, the more trillions of tokens used in the pre-training, the less ignorance, so to speak, in a large language model, and therefore the less hallucination. But there are the other things we talked about that can be done to improve it.
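Here is a minimal sketch of the grounding-with-citations step: retrieve the most relevant documents from an expert database and build a prompt that forces the model to cite them. The word-overlap scoring is a toy stand-in for real embedding-based semantic search:

```python
# Sketch of grounding: retrieve relevant documents from an expert database
# and attach them as numbered citations. Word-overlap scoring below is a
# toy stand-in for real embedding-based semantic search.
def score(query: str, doc: str) -> float:
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(1, len(q))

def ground(query: str, database: list[str], k: int = 2) -> tuple[str, list[str]]:
    """Return a grounded prompt plus the citations it was built from."""
    ranked = sorted(database, key=lambda doc: score(query, doc), reverse=True)
    citations = ranked[:k]
    context = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(citations))
    prompt = ("Answer using ONLY the references below; cite them by number.\n"
              f"{context}\n\nQuestion: {query}")
    return prompt, citations

prompt, refs = ground(
    "what did the fed say about rate cuts",
    ["Fed statement: rate cuts depend on inflation data.",
     "ECB commentary on bond purchases.",
     "Speech transcript: the fed said rate cuts are not imminent."],
)
```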
Speaker 1:Well, it's good to know that, whether it's human or computer, I still have to deal with bullshit. So, sadly, we've reached the final question of the pod, which we call the trend drop. It's like a desert island question: if you could only watch or track one trend in AI and financial services, what would it be?
Speaker 2:I think in AI, for me the most important thing is the so-called scaling law. What happened in the last three, four years is that different companies have all converged on the so-called scaling law, which is this interesting graph; it's almost like everybody can fit all their trained models on this graph. It's like a collectively discovered scaling law. The graph basically says: if we scale the compute, the model size and the input training data size by this amount, here is how the performance of the large language model will change. So we actually know today. For example, we can say, hey, tell me, if we scale the compute by 10 times, the input data by six times, the model size by five times, and that's the optimal relative ratio of these parameters, what does the performance look like? We actually know.
Speaker 2:We know this graph, and so far all the models delivered by different companies follow it. We saw it just go up and up and up, and they follow it almost exactly. It's actually more interesting, probably more fundamental, than, for example, Moore's Law. Moore's Law is more of an observation of the productivity of the semiconductor industry, but this one is more mathematical, more fundamental. I would be absolutely interested in observing, in the coming months, quarters and years, whether the models released by different companies can still follow this scaling law. Are we going to hit a plateau, hit a ceiling, or is it actually going to accelerate more? So that's, I think, the one thing I really want to track in AI.
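For readers who want the shape of the law Bin is referring to, one published form is the Chinchilla scaling law (Hoffmann et al., 2022), which models pre-training loss as L(N, D) = E + A/N^alpha + B/D^beta for N parameters and D training tokens. The sketch below uses that paper's fitted constants; treat them as one study's estimates, not universal numbers:

```python
# Sketch of the Chinchilla-style scaling law L(N, D) = E + A/N^a + B/D^b
# (Hoffmann et al., 2022). Constants below are that paper's fitted values;
# treat them as estimates from one study, not universal numbers.
E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pre-training loss for N parameters trained on D tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

base = predicted_loss(7e9, 1.4e12)            # ~7B params, ~1.4T tokens
scaled = predicted_loss(5 * 7e9, 6 * 1.4e12)  # 5x model, 6x data
print(f"base loss ~{base:.3f}, scaled loss ~{scaled:.3f}")
```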
Speaker 1:Well, two words: quantum computing. Quantum computing, yes.
Speaker 2:Maybe, exactly. And in terms of financial services, I think the one thing I would be interested in would be the younger generation.
Speaker 2:If I think about the older generation, they had a much more intimate relationship with the bank: they go to the bank branches, they speak to people face-to-face, they worry about, oh, how do I plan this, plan that. Whereas the younger generation, we no longer have that relationship with the bank. Everything we do is through maybe an application on my iPhone or on a website; it's entirely digital. So financial services has already, over the years, been transitioning from this in-person, relationship-driven business model to much larger-scale digital transformation and digital distribution, especially given that these days a lot of financial services and products, arguably, are quite fungible and similar. So it's very much about distribution, and today that means digital distribution. I think about digital banking, digital distribution, digital wealth advisory, and how AI can play different roles in those places. So, yeah, those would be the things that would be most interesting.
Speaker 1:Well, I'd be happy to stay out of the branch, but TD hasn't figured out how to give me a free pen and a lollipop via the app, so maybe that's where the drones come in: I go to the app and they drop it off. All right, Bin, thank you so much for your time. What a great conversation. I enjoyed it, and I know our audience has, so thank you so much. Thank you, Jim, it's a pleasure. Thanks so much for listening to today's episode, and if you're enjoying Trading Tomorrow - Navigating Trends in Capital Markets, be sure to like, subscribe and share, and we'll see you next time.