Speaking to Legends
#5 Campbell Harvey - Think Big and Pursue Big Ideas
Professor Campbell Harvey is a Professor of Finance at Duke University. He has 70,000 citations and is one of the most recognised researchers in finance. Professor Harvey has spent over 33 years in academia, published over 125 papers and advised some of the largest and most influential financial institutions. In our latest episode, he shares stories and lessons he learned while studying under several Nobel laureates. He also explains why he claims that over 50% of claimed research findings in financial economics are likely false, and what drives him as a scholar. Furthermore, he expands on the differences in incentives between industry and academia, and the likely path to recovery from the COVID-19 crisis.
For links, resources and book recommendations by Prof Harvey, see our episode show notes on our website.
We value your feedback and we would appreciate any suggestions for improvement. Please take our survey.
Hello and welcome to the Speaking to Legends podcast. This show is a quest for ideas, insights and stories from the lives of the most successful hedge fund managers. We learn about their spectacular careers, share their life lessons, and dissect their investment techniques.
0:20
The legend of today is Professor
0:22
Campbell Harvey. He's a professor at Duke University, and with 70,000 citations, he is one of the most well recognised researchers in finance. Professor Harvey spent over 33 years in academia, published 125 papers, and advised some of the largest and most important financial institutions. In this interview he shares stories and lessons learned while studying under several future Nobel laureates, why he claims that over 50% of claimed research findings in financial economics are likely false, what drives him as a scholar, the drastic differences in incentives between industry and academia, and the likely path to recovery from the COVID-19 crisis. Let's get right into it. Hello, Professor Campbell Harvey. Really excited to have you on the show. Welcome.
1:18
It's great to be on the show.
1:20
Let's begin by learning more about your background. Where did you grow up? And when did finance catch your interest?
1:27
Well, I grew up in Canada. I'm a native of Toronto. And it wasn't clear when I was doing my undergrad what I really wanted to do. I did a fairly general sort of study: there was history, English, mathematics, some economics, no business courses whatsoever, like zero. And it really wasn't until I started doing my MBA that I got interested in finance. And there were two things that happened to me. Number one, I started to do research assistantship work, so actually getting into some research that some professors were working on. And I found that really very interesting. Indeed, at some point, I was thinking that, gee, maybe I can even do this, because I'm making suggestions for them that they're actually incorporating into their research paper. But the big thing for me was a summer job that I had, an internship after my first year of the MBA, so I'd taken the usual finance courses. I had some economics, both from undergrad and from my Masters, but I went into a job in the corporate development area of a company that no longer exists. Its name was Falconbridge Nickel, and it was one of the largest, if not the largest, nickel producers in the world. And my job in that corporate development office was to develop a model that would help them forecast what was going to happen in terms of GDP. That's really important for nickel and copper, because we know that they are extremely procyclical, and having any sense of what's going to happen in the future is valuable, because you know when to turn up production, turn down production, when to start new mines or halt production at mines. So this was a really important thing. And I started basically reviewing the academic research, and much of that research was focused on looking at the stock market to predict what would happen in the economy, and the evidence was not that strong. The stock market can go up and down for no apparent reason and give a lot of false signals about economic weakness.
So I started looking at other indicators. I went through the usual list of leading indicators that the US Department of Commerce looked at. But then I had this idea, and it was triggered by a newsletter that crossed my desk, essentially talking about the information in interest rates. And it struck me that interest rates and bonds, of course, are similar to, but different from, stocks. So if you think of a stock, it's essentially a claim on future income. And a bond is basically the same thing. But they're different in many respects. If you think of a stock, you don't actually know what the horizon is: think of the dividend cash flows, you don't know how long the company will last. You don't know what the dividends are actually going to be. You don't know the riskiness of the future cash flows; you might have some idea of the risks of the current cash flows. You put all of those factors together, and it makes for a very unreliable indicator of future activity. But I'm thinking that government bonds are different, because number one, you've got a fixed maturity, so that's not uncertain. Number two, you actually have coupons that are deterministic. And then number three, you actually know what the risk is, and that's essentially the default risk of the US government, which is close to nil, especially for a nominal bond. So I thought, well, maybe we could look at interest rates, and perhaps interest rates of different maturities, to get an indicator of future economic activity, because we all know that an interest rate is just a combination of the expected real rate and expected inflation, and there's also a risk premium for risky bonds. So that expected real portion should be linked to real activity in the economy. That's what I started working on, and it was actually really exciting. I developed this at this company. And then, in the middle of the internship, the company eliminated the whole department; everybody got laid off.
So I was not too pleased about that, being a summer intern, but it gave me time to continue working on this project. When I got back for the second year of my master's, I showed it to some of my professors. They were really excited about this idea. They made it incredibly easy for me: they said, look, you've got three required courses you need to take, let's combine them into one course where you're just going to write a paper. And that's how I got started. And when that paper was ready, and I actually wrote a couple of those papers, they said to me, you need to apply for a PhD, and you need to use this research paper as part of your application. I did that and ended up at the University of Chicago. And I was pretty lucky, because I walked into the University of Chicago with my thesis idea, and most students are not in a position like that; they have to do a couple of years of coursework, or maybe even more, before they come to an idea.
7:47
Sounds like that internship was quite impactful on your life and almost kick-started your career as a scholar and a researcher. When you joined the University of Chicago to pursue a PhD in finance and economics, you had three future Nobel laureates on your committee. What impact did that experience have on your career? And any notable stories you want to share?
8:17
So let's be really clear. It is true that I had three Nobel laureates on my committee, but at the time, none of them had the Nobel Prize. But at Chicago, it was really interesting, because we knew they were going to win. We knew that Eugene Fama was going to win, just a matter of time. We knew Merton Miller was going to win, it was just a matter of time; was he going to share it with Modigliani or somebody else? And also, it was really interesting, there was a younger professor who joined from Carnegie Mellon named Hansen, and he wasn't really that much older than we were, but in taking his course, people realised, oh, he's going to win for sure. So you could actually see this. I took courses from Gary Becker, who later won the Nobel Prize, and Bob Lucas, who later won the Nobel Prize. I'm missing many that won from Chicago, my mistake, but it was just an incredible time to be there. The intellectual horsepower was just so overwhelming. And I learned so much, and maybe the most important thing I learned at Chicago was not to pursue smaller ideas, so to be ambitious. I remember pitching some ideas really early on to my advisor Eugene Fama, and this was not my thesis idea, but some other ideas. And he basically said, you know, these are ideas that you could probably get published, but they're not big ideas. What you need to do is to think about big ideas; you want to change the way that people think. So that was very inspiring for me. I did, of course, pursue this Master's paper, and I was in an awkward position given that I was presenting to the entire finance faculty at least a year before the usual kind of initial presentation. The first time I presented my paper, I felt destroyed in the seminar. It was brutal. It's usually 45 minutes for a PhD student, but nobody else was ready, so I had the whole seminar, and it was really bad, and Merton Miller was particularly critical of what I was doing.
And then, after about an hour, I'd finished, even though there was a half hour left. And I remember my advisor saying, well, you don't like the theoretical foundation of what Cam is doing. Do you have a better model? And it was magical. I just sat back, and there was a discussion, and basically a framework was outlined. I went back to work after getting this feedback and worked pretty hard on the framework that was suggested. And this turned out to be my dissertation, where I showed that inverted yield curves, so when the long-term interest rate goes below the short-term interest rate, are predictive of economic growth and recessions. And I had empirical work that was very intriguing at the time. The sample I presented ran from the 1960s to the 1980s: every single time the yield curve inverted, there was a recession. And some people on my committee and amongst the faculty were sceptical: well, you've got a four out of four kind of fit here within the sample, and four observations is not really a lot, so this could just be a lucky finding. But then others said, well, that's true, any finding could be lucky. However, the economic foundation is pretty solid if you think about the real rate as telling us something about real economic growth. So if you look at a longer-term rate and a shorter-term rate, and the longer-term rate goes below the short-term rate, then that doesn't bode well for economic growth. So the theoretical foundation was good. In addition, the empirical work wasn't just four out of four. My model got the double-dip recession of the early 1980s: it forecasted a short recession, then a recovery, then another recession, and no commercial model at the time was able to get that correct. So people actually were impressed with that. And then I did some further analysis where I collected these commercial forecasts, and at the time, to get a forecast of GDP was very expensive.
So there were these companies that specialised in estimating these large econometric models with hundreds of equations and thousands of variables. You would pay for it, and it could be, you know, $10,000-15,000 to subscribe to get their actual forecast. So I looked at the history of their forecasting record and put it against my model, which was a single variable. There's no hundreds of equations; it's a single equation that simply looks at the slope of the yield curve. So I did that, and my model did really well. It was as good, if not better, than these commercial models. So again, my committee was kind of impressed. And it's kind of a Chicago sort of thing: so you pay $15,000 a year for a forecast, or you could pay 25 cents, the price of the Wall Street Journal at the time, pull out these interest rates and get a forecast of equal quality. So in the end, I graduated, and it's kind of interesting: as usually happens in scientific publishing, my thesis was dated 1986 and was published in 1988, and I did a number of extensions. But usually what happens after you publish is that the effect that you document gets either weaker, or it goes away. And that wasn't the case with my model. For example, in October 1987, when the stock market crashed, there was widespread belief amongst economists that the US was going into recession in 1988, and I think the average surveyed forecast was for negative growth. I looked at my model, and given that I had just graduated, I had to stand by the model, and said, well, the yield curve model says the growth will be 4.2% in 1988. That was just way different from the consensus, way different, and it turns out the growth was more than 4% in 1988. So this model was giving a forecast of a non-recession that was valuable. And then the next three recessions were accurately forecast with the model.
So we went seven out of seven, no false signals out of sample. And then, on June 30th, 2019, actually, we had a full quarter where long-term rates were below short-term rates. Given the track record of seven out of seven, you need to take this seriously: it's saying there's gonna be a recession in 2020. And obviously, we're in a recession in 2020. To be clear, the yield curve didn't forecast the COVID-19 pandemic.
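The single-variable model Prof. Harvey describes can be sketched in a few lines: the slope of the yield curve is just the long-term rate minus the short-term rate, and a negative spread (an inversion) is the recession signal. This is an illustrative sketch only; the rates below are made-up values, not actual data.

```python
def yield_curve_signal(long_rate: float, short_rate: float) -> str:
    """Classify the yield-curve slope. An inversion (long-term rate below
    the short-term rate) is the recession signal in Harvey's model."""
    spread = long_rate - short_rate
    return "inverted: recession signal" if spread < 0 else "normal: no signal"

# Hypothetical quarterly-average rates, in percent
print(yield_curve_signal(long_rate=2.0, short_rate=2.4))  # inverted: recession signal
print(yield_curve_signal(long_rate=3.1, short_rate=2.5))  # normal: no signal
```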
16:53
But nevertheless, I guess we'll never know. I think the counterfactual is what would have happened if there was no pandemic. But at the time, in late 2019, there was widespread belief there was going to be a slowdown in 2020. In our survey of CFOs in the US, over 400 of them, more than 50% believed there would be a recession in 2020, and if you add in the first quarter of 2021, about 80%. So I think we were going into a recession anyway; but nevertheless, we're in a recession right now that is really unprecedented in terms of the speed at which we dropped off the cliff. So that's my story from Chicago, and I went on to Duke and began my research career there.
17:44
Thank you for going into so much detail on this. It indeed sounds like this one-indicator model proved to be rather robust, even for such an infrequent event as a recession. You've already partly touched upon the scientific rigour that was instilled in you while at Chicago. As you say, you graduated in 1986, and then the next 33 years you spent at Duke, while being a visiting professor at a couple more universities, and you published over 120 scientific papers and won many awards for your pioneering work in the field. What do you think motivates your research and drives you as a researcher?
18:30
Most people don't realise that this is a very high-stress job. To get into a good PhD programme is just very difficult. At Duke, there's like 200 applicants and we admit three, and many of the applicants we get are not the applicants that are applying to Chicago or Stanford or Harvard. So it's just really difficult. And then you need to finish, and if you finish, then you need to be placed, and the probability of being placed at a top school is very low. And then once you get to a top school, the probability of getting tenure is very low. So you put all these probabilities together, and when I advise people that are thinking of applying for a PhD, I tell them to do the simple math. It's very, very competitive. But let me talk about the nature of the competition, because it is intriguing to me. Number one, you are essentially working for yourself. Yes, the dean is your boss nominally, but the dean is not telling you what to do. The research choices, the topics, those are completely up to you. So it's a job that gives you a lot of freedom to choose research topics. The competition, actually, I didn't really understand when I initially joined Duke, because I'm thinking, oh, well, I'm competing, like in a company, against my fellow workers at Duke. At a company, you need to shine to get promoted; not everybody is promoted, and one person goes to the top. It's completely different in academia. I quickly realised that the competition was not my colleagues at Duke, but the rest of the world. And that was kind of energising. You are essentially competing for ideas, and your competition is every other academic in the world. So it is very difficult to actually pull it off, and that is very energising. And what's the goal here, for me personally, and really for any academic? We really enjoy what we do. It's really interesting. It's fun to do this competition, and we're hunting for the big idea.
So it's relatively easy to pursue small ideas. But in order to really impact the practice of management, the practice of financial management, you need a big idea. You need something that will change the world, hopefully in a positive way. So that's really interesting, and again energising, that you've got a chance of actually changing the way that people do things, and potentially making markets more effective, putting the economy's growth on a better path, things like that. And it's rare that you can do that in a non-academic job. It's not impossible, just more scarce. So that's kind of what drives me: it's really interesting what I do. And it's long hours. People think that, oh, well, these academics only teach a certain part of the year and then they're out on the golf course. No, no, no. When I was a junior faculty member, the amount of work was just enormous. I thought that it would be less than when I was a PhD student. As a PhD student, we had a discipline of leaving the school at midnight every night, so we worked, you know, more than 12 hours obviously, every day, and we kind of forced ourselves to leave at midnight. And as a junior faculty member in finance, you work Wall Street investment banker hours, there's no difference whatsoever, because number one, you are likely pursuing some interesting ideas you're energised about, but number two, there is this competition, and you never know if your idea is going to get scooped by somebody else. So you need to get it out as fast as possible. That's what really accounts for the long hours that are necessary. But again, I think the goal is really impacting the practice of financial management. And this is something else that I learned after my PhD. I had come to Chicago, and Chicago is very research oriented and not really that friendly to the students. However, I always thought that the quality of instruction was just amazing for me.
So I didn't have a single professor that I would complain about after my PhD. Everybody was really, really good. But I thought that the goal here was just to write research papers, and I learned afterwards that that's not the only goal. You can actually impact the practice of management by your teaching, whether it's the PhD students that you're training, or your masters students that will go out, take leadership positions in firms and do potentially very positive things. So I learned that the teaching function is something that is often undervalued. In my teaching, I always have the philosophy that I want to deliver to my students a vision of the future; I want them to be exposed to the latest research ideas. There's no textbook in my course; anything in a textbook is at least five to ten years old, or even older. Maybe they see some research papers that are not published, that might be in the textbook ten years from now, or they're exposed to my own research ideas where I might not even have a paper written, but I want to give them that vision of the future. I want them to have the edge of knowing the latest information. And my course is not a popularity contest: there's a lot of work, it's technical, it's quantitative, and I believe that is necessary in this world to prepare my students for the future. Everything's online, of course, and everything is available to anybody. All of my materials are available. I'm a big advocate of sharing materials and helping other people with their teaching.
26:20
Very inspirational. It's clear that you lead by example and have full and utmost dedication to what you are preaching: an almost fanatical obsession with the work that you are doing, this mentality of competing with other researchers, and, most importantly, sharing the knowledge and pushing the boundaries of what's out there. And speaking of research, you wrote a fantastic paper which states that most of the claimed research findings in financial economics are likely false. Can you please describe in simple terms why you think that is the case, and what measures can be undertaken to improve the robustness of the research?
27:10
Yes. So let me tell you the genesis of my initial paper that comes to this conclusion. I served as editor of the Journal of Finance from 2006 through 2012, and that was a full-time job. I didn't teach during those years, because I was handling a large number of papers; indeed, I made decisions on 7,000 papers. And these are not easy reading, either. So this was just a huge investment of my time, but it was a worthwhile investment. My research did suffer somewhat, because I didn't have as much time to do my research. After I finished with the Journal of Finance, I had this database of the 7,000 papers, and I noticed that, in my head, there were a number of submissions that had a similar title. The similarity was the words "and the cross-section of expected returns". Basically, these papers were trying to find a factor, something that would lead to a return greater than just the market return. Think of a factor, or sometimes people call them anomalies, as something where you can actually outperform the market by investing. There are many of these factors: value, growth, profitability, momentum... I asked a PhD student at the time, Yan Liu, to go through the database and just do a simple search for these papers, and I had this idea of just looking at and cataloguing all of the things that were tried for explaining the cross-section of expected returns. It wasn't even a paper that I was thinking of publishing. I thought I would just have it as a so-called working paper, put it online, and essentially people would cite the paper, because it's a lot easier to cite my paper than to cite the 40-odd papers; that's what I thought, that about 40 things had been tried. So Yan, and I remember it like it was yesterday, he comes to my office, and I said, well, did you go through the database and collect the names of these papers and the variables? And he said, yes. And then he said, we have a problem.
And I'm thinking, well, I'm not sure what you mean here, but I'm thinking there are maybe 40 of these papers. And then he says, there's 120,
30:08
which is like triple what I thought. And I knew what he was talking about. The problem is the so-called multiple testing problem. So we actually expanded the research at that point and looked not just at submissions to the Journal of Finance, but at submissions to other top journals in finance and economics, and came up with a list of nearly 400 things that have been tried. And this is the intuition: if you try enough things, even if they're all random, even if they are just simulated, something is going to work by chance. If you try 20 variables to explain one variable, one will appear to work purely by chance. So you need to take that into account. And it's true for many, many applications in finance; maybe the most prominent application is fund manager performance. You look at somebody that has beaten the market 10 years in a row, and you think that person is skilled, but that could easily happen purely by chance. Indeed, if there are about 10,000 managers in the world, nine of them will beat the market 10 years in a row even if all 10,000 are just flipping coins, so nobody is skilled. So even in a situation where you hardwire it so that nobody is skilled, you will see performance like that. So my idea, that was going to be an idea that wasn't published, became a paper that has already got a lot of citations. It's called "... and the Cross-Section of Expected Returns", published in the Review of Financial Studies in 2016. And that's exactly where, with Yan Liu, we make the claim that over half of the empirical work in finance is likely false. That's a pretty strong claim, but I'm fairly confident that that's the case. And this is not just about the factor zoo and all of these variables that have been proposed to so-called beat the market.
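The coin-flipping managers arithmetic above can be checked directly: with 10,000 managers each having a 50/50 chance of beating the market in any year, the expected number of 10-year winning streaks under pure luck is about ten, matching the "nine" in the story.

```python
# Expected number of "skilled-looking" managers under pure chance.
# Each manager beats the market in a given year with probability 0.5,
# independently across years; nobody has any skill at all.
n_managers = 10_000
years = 10

p_streak = 0.5 ** years                 # probability of a 10-year streak
expected_lucky = n_managers * p_streak  # expected number of lucky streaks

print(f"P(10-year streak by luck) = {p_streak:.6f}")   # 0.000977
print(f"Expected lucky managers   = {expected_lucky:.1f}")  # 9.8
```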
There are many studies in corporate finance that try to explain, for example, a firm's cash holdings, its capital structure, its payout policy; all of these use dozens and dozens of variables, and when you find something that looks significant, you need to adjust for the possibility that it's just purely by chance. So what we haven't done in finance is a good job of controlling for this multiplicity. We look at some variable, and it's two standard deviations away from zero, therefore it's significant, we've got 95% confidence. No, no, that's totally false. You need to allow for the possibility of this just being purely a random effect, and when you try many things, it's essential to make corrections. So my research stream in the last four years has been focused on this idea that you need to make adjustments to the way that we usually do finance in order to have more confidence in the results. And if you don't do that, then you get this problem of the type one error. The type one error is the false positive: you declare something significant when it isn't. And I think that over half of the research in my field is likely a type one error.
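One standard way to "make corrections", as described above, is the Bonferroni adjustment: divide the significance level by the number of tests, which raises the t-statistic hurdle well above the usual two standard deviations. This is a simplified sketch (it treats the cutoff as a normal quantile and assumes independent tests); Harvey and Liu's papers discuss more refined multiple-testing methods.

```python
from statistics import NormalDist

def bonferroni_cutoff(n_tests: int, alpha: float = 0.05) -> float:
    """Two-sided significance cutoff after a Bonferroni correction:
    each of n_tests tries is only declared significant at alpha/n_tests."""
    per_test_alpha = alpha / n_tests
    return NormalDist().inv_cdf(1 - per_test_alpha / 2)

print(f"1 test:    |t| > {bonferroni_cutoff(1):.2f}")    # the familiar ~1.96
print(f"400 tests: |t| > {bonferroni_cutoff(400):.2f}")  # a much higher hurdle
```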
34:16
I suppose the authors of those papers that you found to be likely false positives were not particularly thrilled about your insight and the claim itself. And continuing the conversation about doing research well, doing it right, what is your take on the different incentives that the industry and academics have? Because essentially, for academics, what's really important is to publish in a top journal and then secure tenure at the university, while in the industry, you are looking to use this research and these insights to bet money with greater success, so you have real skin in the game. What's your take on this interaction and this incentive scheme?
35:13
Sure. It's a great question. But let me just go back to a remark you made about my colleagues not being too pleased about me claiming that half of the research, or over half, is false. It's interesting that I also picked on my own research. I was also making the claim that over half of the things that I published are likely false, so I wasn't special in any different way. And you're right that initially, they said, oh, well, this just can't be true, that's outrageous. Then, a year later, it was, well, maybe there's some merit to what he's saying. And then a year later, well, that's no big deal what Campbell Harvey is saying, we already knew that.
36:07
So it's very interesting the way that this actually progressed. But I've also got a paper, actually my presidential address to the American Finance Association, published in the Journal of Finance in 2017, that addresses the issue you're talking about, and that is the different incentives between academia and business. So let me talk about the complex agency problem that we face in academia, and then how it plays out in business. The first thing is that there's competition amongst the journals. You want to be the best journal, and the way that "best" is often defined is by something known as an impact factor, and that's basically the number of scientific papers that are citing papers published in your journal, let's say the Journal of Finance. So if the Journal of Finance gets a lot of citations from other journals, then that means it must be really good. That's number one. Number two, papers that have positive results, so they support the hypothesis being tested, tend to have more citations. Think of it as: you write a paper, and then you provide some empirical work that shows the effect doesn't work. Well, people are not going to cite that. And this difference between the positive result and the negative result leads to a selection issue. Over 90% of the papers published in finance and economics have results that support the hypothesis, and you would think it should be a lot less; it's really hard to find stuff that actually works. In other fields like astrophysics, that ratio is more like 50%. And finance and economics are not the worst. The field that is the worst in terms of the percentage is psychology, where almost everything supports the hypothesis that's being tested. So given that the vast proportion is a positive result, the researchers, the faculty members, actually figured this out, and they realise that to get a publication, they need to show a result that's significant.
And this causes a lot of problems, because people will make strategic choices and selections in order to get that significant result. I call this p-hacking, and essentially it involves many different things. It could be a sample selection; it might be exclusion of certain periods; it might be certain rules on outliers; it might be an estimation method that's being tried; it might be a series of variables that are looked at where the best one is cherry-picked. There are just so many levers that you can pull to get that significant result. And when you do this, you're just maximising the chance that the result is not going to stand the test of time, that it will be a false result out of sample, the so-called type one error that I mentioned before. So that is an issue. It's a big issue, and it's really, really difficult to deal with. It's a lot easier to deal with in other sciences, where you post, before you do your experiment, exactly what you're going to do: the tests, the data, all of the rules are set beforehand. In finance and economics, the data is available, and people look at the data before they actually do their research. Often the empirical work is done and then the theory is developed after, when you should do it in reverse. So essentially, you find a result and then you concoct a story that you call a theory to explain that result. So there are a lot of issues here, and this plays out negatively in a number of different ways. One way is that practitioners will see the academic research, assume that it's true, and then just use it, wrap a product around it and say, well, this paper published in the Journal of Finance had this idea and we've implemented that. But they don't take into account that many of these ideas are false, because there was some data mining involved, the out-of-sample testing wasn't done properly, and choices were made that make the result look better than it actually is.
So there are incentives like that in academia that work to increase the number of type one errors in the papers. On the practitioner side, you're correct that the incentives are different. You actually do not want to go to market with a product that has been overfit or p-hacked, because if you sell it to your customers, the performance will disappoint, and that hurts your reputation. And it actually hurts your bottom line: you just don't get the funds coming in. So practitioners, and I've learned this from practitioners, need to be really, really careful in terms of what they're doing, because they want to make sure that they deliver a product whose performance, when you look at a backtest, will repeat in the future. And that's really difficult to do. So it's really important to have the right incentives at an investment management firm. Let me just give you an example. Suppose that you've got two researchers of equal quality at a company. They both have master's degrees from good schools, they started at the same time, and they both pitch an idea, different ideas, and management thinks, well, those ideas are worth pursuing, they're good ideas. So researcher one pursues the idea. They do the tests, and it fails once they do the out-of-sample analysis, but the work was excellent in terms of the testing and the care that went into it. And then researcher two is also very careful in the research, does the testing, and it actually works; it goes into production and the company uses it. So it is a huge mistake to reward researcher two and punish researcher one. Both of them had equally good ideas, and they were very careful in what they did. It just turned out that one worked and the other didn't. If you punish researcher number one, then within the firm, people figure out what to do.
I need to get a result that's significant or my bonus is cut, or I'm going to be let go, and that encourages the same sort of behaviour, the same sort of p-hacking, within a company. So companies need to be very aware of this also. And it's not hard to find: in some of my presentations I've shown results from firms that are kind of obviously p-hacked. I remember early in my career, sitting with a very senior person from one of the top investment banks, who was with one of their junior colleagues; I was having lunch with them. They were trying to pick my brain over this really interesting model that they had where they were forecasting one-month-ahead stock returns. The researcher showed me the model and it included a number of variables, but one caught my eye. And I said, oh, explain to me what this variable is: IP t-15. And this was the change in industrial production lagged 15 months. And I asked, well, why would you lag industrial production 15 months? I can understand lagging it two or three months because of reporting delays, but why 15? And the person said, well, I tried everything through 14 and nothing worked. Only the 15th worked. And this is exactly the issue of multiple testing. You try 20 different versions of that variable, one is going to work; the 15th "worked", but it really didn't work, it worked in quotation marks. It's not going to work out of sample. There's no economic justification to put the 15th lag of the change in industrial production in any predictive regression. It just doesn't make any sense. So, again, this stuff does happen. The incentives are different. But I want to emphasise that there are two issues. One I've mentioned already: I think practitioners are often too quick to embrace the academic research, and they don't realise the incentives in academia, so there could be some overfitting and the out-of-sample performance is not going to be as good.
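The multiple-testing trap Prof Harvey describes here is easy to demonstrate with a quick simulation (this is an illustration, not from the episode; all numbers are made up). If you test 20 candidate lags that are pure noise against returns that are also pure noise, the chance that at least one lag clears the usual |t| > 1.96 bar is roughly 1 − 0.95²⁰ ≈ 64%:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_lags, n_trials = 120, 20, 2000  # ~10 years of monthly data, 20 candidate lags

hits = 0
for _ in range(n_trials):
    y = rng.standard_normal(n_obs)            # "returns": pure noise
    X = rng.standard_normal((n_obs, n_lags))  # 20 candidate predictors: also pure noise
    # t-statistic of each univariate regression slope (slope = cov(x, y) / var(x))
    beta = (X * y[:, None]).mean(axis=0) / X.var(axis=0)
    resid = y[:, None] - X * beta
    se = resid.std(axis=0) / (X.std(axis=0) * np.sqrt(n_obs))
    t = beta / se
    if np.abs(t).max() > 1.96:                # cherry-pick the "best" lag
        hits += 1

print(f"Chance at least one of {n_lags} useless lags looks 'significant': {hits / n_trials:.0%}")
```

This is why a 5% significance threshold is meaningless once you've quietly searched over many specifications: the cherry-picked "15th lag" will look significant about two times in three even when nothing is there.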
So I think that needs to be taken into account. Let me also make an important point, and that is that for researchers that are really thinking of long-term impact, and I put
46:51
myself in that category, we're really careful that we want to make sure that the paper we publish is not going to just create a buzz for a year or two; we want it to actually have legs. So, again, it's difficult in academia. Many schools just count publications, and at many schools, just getting one publication in a top journal is good enough for tenure. And it doesn't really matter what the paper says. So again, you've got this incentive to find something that appears to work and get it published. Whereas others are much more careful. They're playing for the long term, they're playing for the big idea. And if it does work, then you will be recognised, you'll have impact for many years. So you're correct that there are different incentives. I think that academics can learn from practitioners, and I certainly have. I also think that practitioners need to be much more careful in understanding the sort of incentives that academics have, so that they can do a better job. And the last thing I'll mention is that some practitioners engage in effectively the same thing as academics do. If you're paid just based upon assets under management, rather than having any performance fee, it might be that it's no big deal: you wrap an ETF around each of 300 different academic ideas, and 300 ETFs are launched, and you fully know that over half of them are false. That's not a great equilibrium. I think that companies need to be much more careful. They've got a fiduciary duty to do the due diligence on these products that are based upon academic research. So you need to go back, recreate, do robustness checks, in order to maximise the chance that it is a real phenomenon and has the maximum chance of not disappointing the person that invests in it.
49:22
Absolutely. I would say it's almost like a weakness of how people evolved: evolution made us fast thinkers about some problems that we should perhaps be spending a bit more time on. Understanding the game-theory aspect of publications, and aligning incentives, is, as you say, an extremely important factor, and getting that one right gives you better chances of longevity for whatever you're pursuing, particularly in the investment world and asset management. And to delve a bit deeper into your particular experience: there have been many examples of academics leaving academia for jobs at hedge funds, or even starting their own. You are a partner and senior advisor at Research Affiliates, a global leader in smart beta and asset allocation, and an advisor to Man Group, the world's largest listed hedge fund. What are some of the experiences you personally picked up along the way while interacting with these two organisations, and more broadly from being on the two sides of the equation, industry and academia?
50:49
Yeah, so I've been very fortunate in my career. I've been involved in many different kinds of practical initiatives. As a relatively junior faculty member, I was an advisor to many firms, and now, as you said, I'm an advisor to two: Research Affiliates and Man Group. I've been very fortunate because I learned so much. It is a really high-integrity atmosphere at both of these companies; they absolutely want to do the right thing for their customers. They understand the reputation issue, and they're playing for the long term. And it just fits so well with what I want to do. The sort of research obviously is more applied when you look at practitioner research, but nevertheless, I've learned a huge amount from both of these companies. Sometimes I actually feel bad: I go and visit them, and I come back sometimes feeling that I've learned more than they've learned from me, yet I'm being paid for it. So it is, for me, a great experience. And let me tell you, there are a couple of dimensions to what I've learned. I've learned that academics often make simplifications that are not really challenged by other academics, or the peer review process, or the editors of the journals, and that it's just much more difficult to actually implement these ideas. So if you think of these factors that are proposed, factors such as the value factor or growth factor, they are based upon long-short portfolios, and the stuff that's published doesn't even have any transaction costs in it. And that's completely unrealistic. And then managers are benchmarked against these academic factors unfairly, because the academic factors don't take into account the transaction costs. And some of these transaction costs are so large as to make the factor infeasible. Imagine the cost of shorting a small or micro-cap stock. It's enormous, yet it is just assumed to be zero in the construction of these factors.
I've also had drilled into me the importance of cross-validation and out-of-sample analysis. I was pleasantly surprised that at both of these firms the idea is proposed first, before you look at the data, and that's really important. So it's not like these firms have researchers just going through and analysing data with no structure. No, you actually come up with an idea first, the idea is vetted, and then it's decided whether the idea is pursued. There's discipline imposed all the way along the research process. So I've learned a lot from that. I've been very fortunate. And I think that people in my profession would greatly benefit from spending a day at a company, learning what they do and the challenges that they face. The other thing that I didn't really expect is that I'm also sometimes on the road, talking to clients. In what I do, I don't talk about a particular product at either of these firms, but I talk about my own research, and people are interested in hearing about that. For example, my paper in the Review of Financial Studies called Detecting Repeatable Performance addresses this issue of, you know, why doesn't past performance predict future performance, and shows that it actually does if you take some steps to reduce the noise in past performance. And I've got a method of doing that in the paper, written with Yan Liu at Purdue University, and we provide something that we think is very useful for asset owners that are thinking of investing in manager one versus manager two versus manager three. So when I go to talk to these investors, I actually learn the problems that they face. This is the other side: one side is doing research to provide a product that somebody will buy, and the other side is the buyer and the issues that they actually face. So I learn from that too, and often they've got their own research teams that I can interact with.
And again, I learn more. And this is not just for my research. I go into the classroom, and I can say, well, I talked to this pension plan, which is one of the largest in the world; I sat with the Chief Investment Officer and the CEO, and they said that these are the top three problems that they deal with. That's really valuable for my students. Or, well, I had lunch with the CEO of a major consultant and the CEO said X.
57:01
And obviously X is really important for this consultant, and likely for other consultants, so that's really valuable for my students. This ability to interact not just with the internal researchers at these two very distinguished firms, but with their clients: wow, that is really, really valuable. And again, as a scientist, you always want to learn, and I just continue to learn, and this experience is very valuable for me.
57:35
It sounds like there is, according to you, and I couldn't agree more, a lot of benefit in this interaction between researchers and the different firms all around the circle that service each other, the whole food chain. And if we had to pick just one, say fund managers: for them, the objective function is so much more complex and intricate than what simple profit-maximisation algorithms are doing. You might have better results over the longer term with a given model that has greater volatility, but what can happen in the meantime, even if you have strong performance, is a wave of redemptions; and if in a period of crisis you are slightly weaker than the industry average, you can suffer drastically as a result. So having insight into some of these non-trivial assumptions that go into the evaluation of models and strategies, and, as you said, costs and so on, is valuable, and I think it should be channeled better between the industry and academia. I would also like to come back to what you previously mentioned about COVID-19 and build on this. It's no surprise, and it's not controversial, that it's a large exogenous shock that caught most market participants and economies off guard. There has been lots of discussion, and lots of people have stepped up with their letter to describe the recovery. Some call it L, some call it W. What's your letter to describe it?
59:38
I've posted a lot of stuff on LinkedIn, and I've got a website on this. Indeed, seven weeks ago, I published one of the first forecasting models for COVID-19 reported cases and deaths. It uses data from all over the world, it's updated every day, and it includes not just country-level data but province- and state-level data. I was frustrated that our policymakers were not showing their models. There was a model from Imperial College, there was a University of Washington model, but who knows what the CDC model is. And these models are not really that complex. I fit a basic model; the idea was to figure out what the inflection point was going to be, the point where the number of new cases starts to decrease, the point where the number of deaths starts to decrease. And I thought those were really important for the psychology of this particular recession. So this recession is different from other recessions in that its cause is biological. And it's, I believe, also different from other recessions in that the solution is also biological. The global financial crisis was a slow-moving train wreck; unemployment actually peaked after the Great Recession was over. Most people don't know that, that it kept on going up. And indeed, we didn't get back to the level of unemployment that we had before the Great Recession for nine years. So that was very long and uncertain. People really didn't know what was happening; it was almost like a lost decade. This one's different. It happened very quickly. The peak of the business cycle was February 2020, and then we crashed, and we crashed really hard: we went to unemployment of 25%, I estimate, within about a month. And 25% is like the level of the Great Depression, but I think that comparison is highly misleading.
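Prof Harvey doesn't spell out his model's specification in the episode, but the inflection-point idea he describes can be sketched as fitting a logistic (S-shaped) curve to cumulative case counts: the curve's midpoint parameter is the inflection point, the day on which daily new cases peak and begin to decline. A minimal illustration on synthetic data (all parameter values here are made up for the example):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Cumulative cases: K = final epidemic size, r = growth rate, t0 = inflection day."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic cumulative case counts (a stand-in for real country/state data)
rng = np.random.default_rng(1)
t = np.arange(80)
true_K, true_r, true_t0 = 50_000, 0.15, 40
cases = logistic(t, true_K, true_r, true_t0) * (1 + 0.02 * rng.standard_normal(t.size))

# Fit the curve; the recovered t0 is the estimated inflection point
(K, r, t0), _ = curve_fit(logistic, t, cases, p0=[cases[-1] * 2, 0.1, t.size / 2])
print(f"Estimated inflection point: day {t0:.1f} (daily new cases peak here)")
```

The point of publishing such a fit daily, as he describes, is that the estimated inflection date is exactly the psychologically important quantity: the first visible evidence that the epidemic curve is bending.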
In the Great Depression, if you lost your job, you might not be able to get a job for 10 years; there was just no opportunity. In the Great Recession, or the global financial crisis, you lost your job at Lehman Brothers, you're not going back there because they are out of business, and it's hard to get a job at a similar firm. And again, you could be unemployed for an extended period of time. In this recession, there is a different word: furlough. Many people have been furloughed. They've been told, well, hopefully you can come back in two months, maybe before, maybe a little later. So there's the expectation that you're going to go back to work. In the global financial crisis, the firms that essentially caused the crisis were banks. They were doing a very poor job at risk management, so they were extremely levered. They were operating like hedge funds; a financial event occurred that wasn't really that consequential, but it put them over the edge and jeopardised the whole economy. And then we had to bail them out. This time around is completely different. The firms that are affected were not offside. They weren't doing anything bad. They were high-quality firms, and they got hit with a natural disaster; you can think of it that way. So there is this possibility that they can essentially come out of stasis and hire back the people that they've furloughed. And this allows for the possibility of a fairly quick recovery. Indeed, if you think about it, it's pretty clear what the biological solutions are. Number one, a pharmacological solution that reduces the fatality rate. It doesn't prevent COVID-19, it just reduces the fatality rate; a number of things are in trials to actually help mitigate the symptoms. And then, of course, number two is the vaccine. And I just don't believe this talk about, oh, it's going to take 18 months for a vaccine, it's really hard to do. Or I hear, well, empirically, only 19% of vaccines that go into stage one trials succeed.
Well, I actually believe the 19%. That's just a fact, based upon the past. But there are over 100 initiatives on the vaccine front, so there are going to be multiple vaccines available. And we're doing things differently: instead of waiting for the stage one clinical trial to finish and then beginning stage two, and then beginning stage three, those are being compressed. So I fully expect that we'll have a viable vaccine available in the fourth quarter and widely deployed in the first quarter, and what that means is that the uncertainty is effectively greatly diminished. Again, in the global financial crisis, the Great Recession, companies didn't want to make capital investments, they didn't want to hire new employees, consumers didn't want to spend, because they just didn't know how long it was going to last. Whereas in this one, there is light at the end of the tunnel; the biological solution is widely expected. It's going to happen, and when we have a vaccine, it's essentially all clear, back to business as usual. And I think that, again, it's a pretty short runway. What I've basically forecast is kind of like a skinny U shape, where the second quarter is very bad, the third quarter is a little better, the fourth quarter sees substantial growth, and there's more substantial growth in the first quarter of '21. We won't get exactly back to where we were, because there is some structural damage. But the key is to minimise that structural damage, and by structural damage, I mean high-quality firms going bankrupt. So we're seeing a lot of bankruptcies, but for these firms, we all knew it was just a matter of time. Hertz is the latest one, but we knew that they had been struggling for years; this crisis has effectively accelerated their problems. So I do think that this is a different type of recession. I actually call it the Great Compression, where you get all the bad news very quickly, but you also get the good news very quickly.
But again, it's interesting in terms of financial economics: a number of people have stepped up to develop models that could inform policymakers about strategically reducing the lockdown measures and opening up the economy. So I think that this is important, obviously, in investment finance. Usually these recessions are financially oriented or have some economic cause; this one is biological. So people need to step up and understand the science behind it. And the conversation has changed, I think, dramatically. Instead of talking about ICU beds and respirators, people are talking about vaccines and pharmacological solutions, and about when to open up rather than lock down. So I do see that things will improve in fairly short order.
1:07:46
I totally agree that the sentiment is gradually changing and there are already some positive developments. Obviously, in many cases they are contrasted with bad news and bad results, but things are still moving in a positive direction compared to even a few months ago.
1:08:03
I would like to wrap up this episode by asking you about three pieces of advice you would give to your younger self.
1:08:14
Well, that's a great question to end on. And maybe I won't do three, but I want to link this to something that I talked about earlier, and that is searching for the big idea. There was a point in my career where I realised that, oh, this publishing stuff is not that difficult. I was able to publish some papers that, when I look back on them, probably shouldn't have been pursued. The ideas were small ideas; they were good enough to get into a top journal, but not really big ideas.
1:09:01
So I think that if I were going back in time, and this is advice that I give to my students, one of the most important things, and I think I've developed this over the years, is not just generating the ideas (you'll have ideas) but treating it as an asset allocation problem. You need to allocate to the ideas that have the highest expected return. And that takes discipline to actually do, because you've got an idea and think, oh, well, I can write this up and maybe publish it in one of the top journals, and you need to basically say no. I remember I had a paper that I thought was pretty good, and I had a positive review at one of the top journals. And then I got access to a new database of emerging market stock returns. It was the first such database ever put together, done by the International Finance Corporation of the World Bank, and I was brought in early, before it was released. And I realised that this was a dream data set, that I could write some very important papers with it. There were outstanding, important economic questions that could be addressed with these data. And I just dropped everything. I dropped that paper that had a good review and could have been published in a top journal, and I focused on these insights that I thought were important, not just for publishing but for emerging markets themselves. I thought this was a good thing for emerging markets in general: it would help them reduce their cost of capital and their risk, and attract new investment. So you need to have this discipline in allocating your time to the highest possible payoff projects. And I think that I've done an okay job at that. But if I knew then what I know today, I would have been more focused. You mentioned 125 publications; well, maybe that should have been 75 publications rather than 125, so I could have been much more focused.
And I think another piece of advice that's really important, and I didn't realise it at the time, is that you become very specialised. This is true in almost all of science: you start working on something, and then you read the papers within that sub-area of your field, and you don't really understand what's going on in other areas of your field. And that leads to a type of myopia. I was really fortunate to be co-editor of the Review of Financial Studies for six years and then editor of the Journal of Finance, where I had to read stuff outside of my area. So I think it's really important to not just focus on your narrow area, but to be informed about the problems outside of your area. You need to be broad within your actual field. And the third thing is, I think, also really important. Remember I said that I didn't take a business course when I was doing my undergrad; I figured I could do that later. I think it's really important for young people to have a broad education. That doesn't mean just doing one thing like psychology; it means getting a broad slice of what's going on. And that could be mathematics, it could be physics, English, history, political science, some economics, but get this broad foundation that could be useful for you later. And the same thing holds in academia. In preparing for my presidential address to the American Finance Association, I took basically two years where I spent a substantial amount of time looking at fields outside of finance and economics and how they did research. The topic of my address, it's called The Scientific Outlook in Financial Economics, was essentially to look at the outlook in other fields, how they did research, and what we could learn from other fields that could be applied in financial economics. So my third piece of advice is to be broad.
So be curious outside of your area; just don't be too narrow, because when you're narrow, you won't be able to generate the same type of big ideas. We can learn a lot from other fields. And yes, it's true, it will take time away, and maybe it's going to cost you a research paper here or there. But it's likely going to cost you a research paper that's a small idea, and what I'm talking about is hunting for that big idea. And often you can be inspired by research outside of your field.
1:14:51
Indeed: constantly striving to evolve, adapt and become better, pursuing big ideas, and being "long-term greedy", to use the quote from the famous Goldman Sachs partner Gus Levy, is the way to go. I would like to conclude this episode of Speaking to Legends, and thank you, Professor Campbell Harvey, for doing this. It's been extremely inspirational and informative. I thoroughly enjoyed it and I hope our listeners also find it useful. Thank you very much.
1:15:26
Thank you for listening to this episode of Speaking to Legends. I hope you found it useful and thought-provoking. If you enjoyed this show, please write a review on iTunes to support us. Stay tuned.
Transcribed by https://otter.ai