Phase Space Invaders (ψ)

Episode 18 - Erik Lindahl: Finding simple and novel ideas, starting an experimental lab, and ligand-gated ion channels

Miłosz Wieczór Season 3 Episode 18


In Episode 18, Erik Lindahl reminds us that despite our dependence on computational power and advanced technology, real breakthroughs are often waiting for those who have the patience to think carefully, come up with eye-opening ideas, and follow their sense of purpose. We discuss the different ways to be smart in science, highlighting the paradoxical need for both complexity and simplicity in thinking, and talk about what kind of questions in biology will keep us all busy for decades to come. Finally, Erik shares the story behind his series of lectures on concepts in molecular biophysics, a great component of the curriculum of every scientist in the field.

Milosz:

Welcome to the Phase Space Invaders podcast. I'm happy you're with us for another episode. It's time for conversation number 18, and my today's guest is Erik Lindahl, professor of biophysics at Stockholm University, KTH Royal Institute of Technology, and SciLifeLab. Many of you will know Erik as the coordinator of the GROMACS project, but this is far from all he has been doing these days. In fact, his group not only develops and applies computational methods to study pentameric ligand-gated ion channels, but also has an experimental branch dedicated to structural biology, which means they can approach biological questions in the field of neuronal signal transmission and neurotransmitter function across many techniques and scales. Accordingly, Erik defies the standard mindset of a computational scientist, reminding us that despite our dependence on computational power and advanced technology, real breakthroughs are often waiting for those who have the patience to think carefully, come up with eye-opening ideas, and follow their sense of purpose. We discussed the different ways to be smart in science, highlighting the paradoxical need for both complexity and simplicity in thinking, and talked about what kind of questions in biology will keep us all busy for decades to come. Finally, Erik shares the story behind his series of lectures on concepts in molecular biophysics, one I absolutely recommend if you haven't watched it yet; they're all available on Erik's YouTube channel. Hope you enjoy our conversation. So Erik Lindahl, welcome to the podcast.

Erik Lindahl:

Good morning, Miłosz, great to be here.

Milosz:

So Erik, I believe by now your name universally evokes an association with the GROMACS software, but you're also deeply involved in regular research, primarily in the field of membrane receptors and channels. With these two sources of practical insight, I wonder how you see the future of the cycle of discovery in molecular biophysics. Will it be mostly driven by new experimental data, or by new software or hardware, or are we still mostly bound by, you know, human imagination?

Erik Lindahl:

Well, that's a good question. First, in defense not just of my team: there are many groups all over the world doing tremendous software work, and I think software work today is a normal part of mainstream biophysics research. But overall, I feel that the older I get, and particularly since some 15 years ago when we started doing our own experimental work, I've realized that we frequently care a lot about techniques and methods and everything, which certainly is important, but the really hard part is focusing on solving fundamental biological problems and using both computers and a number of experimental techniques to get there faster. And I think biophysics as a field, not just the computational part, has made tremendous advances there. We've seen free-electron lasers, cryo-electron microscopy, amazing super-resolution microscopy. So this whole traditional division between experimental work on the one hand and computational work on the other, which existed some 30 years ago when I entered the field, has completely disappeared. Biophysics today is data-focused research, ranging from the original collection of data to very advanced processing and learning from it. Which is fantastic. It's a great field to be right in the middle of, trying to keep up with much younger people.

Milosz:

I see. So would you say we are ready to incorporate all those streams of data, all those experimental techniques into our simulation techniques?

Erik Lindahl:

I'm not sure whether that should be the goal, right? As I mentioned, there are many things that are amazing about all the computational power we have today — you can throw more data at an algorithm than we could even imagine some 20 years ago. But I feel that we frequently forget that the most important part of science is thinking: thinking about a problem that is actually not yet solved, where we literally do not know the answer, and then finding ways to solve that problem. Now, in some cases data-driven methods might be a good way to do that, or simulations, but in a surprising number of cases it might be an experimental approach, or simply sitting down with paper and pen and thinking. I think we spend too little time thinking and too much time occasionally using computers as a brain prosthesis.

Milosz:

So you would say the imagination side is where the effort is going to lie, right?

Erik Lindahl:

I think it's been that way for 40 or 50 years in biophysics, and I think it's going to remain that way for another 50. Having smart ideas is always the bottleneck, which I think is great and refreshing in so many ways, right? Because those great ideas do not require you to have access to, say, a thousand GPUs or a whole lot of the fancy resources that we might have at some fairly rich institutions. No matter where you are in the world, the human capital is fairly evenly distributed, and you can have a great idea even if you don't have access to tremendous and very expensive resources.

Milosz:

Okay, that's an interesting perspective. I hadn't thought about this kind of time-independent versus time-dependent research agenda, right? Like, how many of your ideas could have been thought of at any time in history, versus how much you're actually contingent on the current moment in the history of research. So the word vision — is this something you would propose as something that propels research?

Erik Lindahl:

I'm not sure. Helmut Schmidt, the old German chancellor — there is a saying, probably not true, that he used to say that anybody who has visions should see a psychiatrist. Visions can be good; I think it's good to have an idea of where you're heading. More than grand visions, though, I think it's a good idea to have a personal vision. Why are you doing what you're doing? What is the reason? Are you trying to understand protein structure, or understand the nervous system? There are many problems that are great. Have a reason why you're doing it — the mere fact that something is faster or slightly better is usually not a good reason. Why should it be faster? Why should it be better? How will this help us, the world, and future generations to understand biophysics and biology better in some way? That might be a vision. But then, on the other hand, there are some great method development teams whose vision is to enable others to solve these problems, and I think that's equally valid. If somebody is training a machine learning method and knows they're doing it precisely because they want to allow people to, say, classify cat pictures on the internet better — if that's your goal, more power to you. But be sure why you're doing something, and make sure you're doing something you enjoy.

Milosz:

Okay, so that's a diversity-of-voices approach: everybody finding their own vocation or calling and then following it. So what is yours? What is your vision, then? What keeps you up at night, you know, in terms of problems in biology that you want to solve in the next years?

Erik Lindahl:

Yeah, I'm not sure whether I'm an exception here, but I've always had two mistresses in life — at least in professional life: one of them is computers and the other one is biology. The hardest decision of my life was deciding whether I should go into physics or medical school, and I thought I made that decision in the mid-nineties when I went into physics. The only problem is that biology didn't agree, and it has kept pulling me back for the last quarter of a century. So more or less by mistake I've ended up in this situation where on the one hand I'm working on life science problems, but we're very much doing it with physics, and particularly computer-based solutions. Over the last 20 years or so, I've fallen in love with the ligand-gated ion channels — in particular the ion channels in the nervous system — and with understanding, you know, neuronal transmission, for two reasons. On the one hand, it's a very physical problem: you're literally conducting a signal in our body, right? There are few things that are more physical than that. But on the other hand, it's also an inherently chemical problem. Now, if you're listening to this podcast on a Friday, you might go out and have a glass of wine, and those ethanol molecules are directly modulating the conductance across the so-called synaptic cleft. So we have a multitude of ways in which the environment, or drugs, or anything can fine-tune the signaling. And I simply find it amazing how nature has been able to evolve and optimize this over billions of years to create both very highly selective signaling and, sometimes, ways to protect ourselves from the signaling. You might have heard of these toads that are highly poisonous — indigenous people in South America would use these toads to add poison to their arrows. The whole point is: why doesn't the toad die from this?
Because this is a poison that would block these channels. It turns out that there are a handful of mutations in the binding site that make sure that this particular ligand-gated ion channel in the toad is not sensitive to the poison the toad produces. And of course we all know why this works — it's natural selection, right? But the fact that nature managed to combine physics, structure, and selectivity — I know how it works, but on an emotional level I'm still amazed that it actually does, and that keeps paying back. And I hesitated a lot when we started to do experiments in the team, but in hindsight it's probably the best decision ever, because some 15 years ago I felt that this led to an entire second phase of this love: we can produce our own data and start to study the problems directly, rather than just relying on data produced by others. So I still identify us as a computational group, but we're also very much an experimental group nowadays. We love all these techniques, no matter what they do; the only thing that's relevant to us is whether a technique can help us understand these ligand-gated ion channels. That could be SANS, it could be cryo-EM, X-ray crystallography, you name it. And if a technique doesn't do that, it doesn't really help that it's the world's best technique, if it's not helping me solve my problem.

Milosz:

Right. As a follow-up question: how hard is it for someone with an established computational career to start an experimental lab as a separate branch?

Erik Lindahl:

Well, I had only done experiments as an undergraduate, not as a PhD student. I think anything in life is difficult, but you need to recruit people who can help you do this. And I have a tremendous team here in Stockholm, where Rebecca Howard leads the experimental side; we have fantastic postdocs and PhD students — no one mentioned, no one forgotten. But I don't think that's fundamentally different from computational work. Nobody in the world will be an expert in every single computational method, and you likewise recruit people with expertise in various computational methods into the team. Good teams are usually teams where you allow people to have different expertise. And I think our role as PIs, team leads, whatever you call us, is really to explore this: make sure that we recruit people with diverse experiences and get them working efficiently together as a team.

Milosz:

Right, because I think a lot of people would hesitate to do that, right? With the load of work that we usually have, to accept a bunch of new knowledge to internalize, and new people to work with, in a field that you don't really know that well. It is a big bet that, I guess, pays off in the long term, right? But in the moment it might be a really daunting challenge.

Erik Lindahl:

I know, I've thought that too. In hindsight, I'm not sure we were thinking the right way. Because nowadays my team is big — so, do as I say, not as I do, et cetera. But I think many of us, as we've developed large teams, have become ocean liners that are very difficult to turn around and set in a new direction, if nothing else because you might have five students in a pipeline. If we think of ourselves more as individual scientists, earlier in our careers we were usually pretty good at approaching different problems and learning things at university. There are few students at the master's level who find it difficult to learn new things — that's why we're at university. A new computational method is difficult; cryo-EM is difficult. But of course, every new method, when it appears, is new by definition: nobody is an expert in the technique yet, possibly apart from the person who developed it. And I think we need to get rid of this idea that we should not do things that are new or difficult. Those are exactly the things we should do. Now, the question is how you find the motivation, particularly in those first few years when it's difficult and we don't consider ourselves experts. That's where, for me at least, it helps to focus on the why: why am I using this? We didn't start doing cryo-EM because we were the best team in the world at cryo-EM — pretty much the opposite. But of course we had a need for cryo-EM: it was potentially a revolutionary technique that could help us determine different states of these channels we were interested in, even if we didn't have crystals of them. Now, that's exceptionally valuable, and that value was likely worth spending a few years on, even though we weren't structural biology experts at the time. Well, maybe Rebecca, but not the rest of us. And of course, after a few years, with some techniques you're less lucky, with some techniques you're more lucky, and at some point we all choose to go after the directions where we have more luck, or where the results look better. At least in my team, that's how we've done it with every single new technique we've picked up. There are some techniques we've abandoned, not because they're bad, or because we were bad at them, but because they didn't pay off quite as much as something else. So there is this balance: dare to explore the unknown, but at some point realize when it's time — not necessarily to give up, but to go with your winners. Still, we should do more new things.

Milosz:

Absolutely, I think that's a consistent message that's been sounded on this podcast. And that also bleeds into another passion of yours, famously, which is data, right? You mentioned that we have lots of sources of data, but how much we can trust those sources is a big, big question. So I understand that having your own source, controlling the uncertainties and pipelines, is a big advantage. But do we also have ways of addressing this on a systematic level — working with databases, data, metadata, and so on?

Erik Lindahl:

I think that's critical. First, I wouldn't say that we gain control by producing our own data; it's probably rather that we produce our own uncertainties, and I'm not sure whether those are better than somebody else's uncertainties. But doing experiments yourself certainly gives you another type of respect for experimentalists. I'm not sure it's necessarily given me more respect for experiments themselves, because experiments are just as noisy, complicated, and occasionally ugly as computational methods can be — they're just ugly in different ways. The challenge with data is always quality, right? What data can we trust? I think as a community we are improving, and in general the computational community is mostly ahead of the experimental ones. The exceptions on the experimental side are, historically, the data sets that have been very expensive or difficult to produce, starting with X-ray crystallography and genome sequencing. We're seeing now that we're really good at assembling both the raw images and the actual electron densities, as well as the structures, in the PDB. And if you look at small-angle neutron scattering, they have these fantastic databases where, if we go down to one of these facilities to record a spectrum, it's not just that the spectra are put in databases — if my student and lab manager go down there to record, which is really cool, you get a DOI for the experiment. This DOI includes the student and the lab manager from my team who went down; it includes the beamline scientist. It does not include me. That might sound strange, but I was not physically there to conduct the experiment, and the DOI is specifically for the experimental data. I think it makes a ton of sense, and it gets even cooler when you realize that you can have up to a six-month embargo — but then everything automatically becomes available after six months, no matter whether we have published or not.

Milosz:

The DOI, by the way — can you explain the abbreviation?

Erik Lindahl:

Oh, sorry — a digital object identifier. The idea is that you can find it in a database.

Milosz:

Right.

Erik Lindahl:

This of course counts; it is a publication. You can track the data, and you can assemble this data in many ways. In life science, I don't think we're quite there yet — that we can force all cryo-EM experiments and everything to be public after six months whether you published or not — but we're also taxpayers, right? And as a taxpayer, if we are paying for these facilities, if we are paying for somebody to collect that data, then I'm paying for this data to be collected. I'm not necessarily paying for somebody to limit access to the data to benefit their career. This is of course a cultural tectonic shift. I think it is happening, but it's likely going to take the next generation to drive it even further. Now, if we translate this to, say, molecular dynamics: should we collect molecular dynamics data? Had you asked me 10 years ago, I would have sighed and said no, because there is so much crap data out there. And I think that's still true — there is a lot of crap data out there. But to be honest, there is a lot of crap experimental data out there too. The cool thing is that in experiments we have learned to handle this not by disallowing people from depositing data, but by getting better at quality indicators. Anybody can upload a structure to the PDB, but of course there will be quality assessment reports and everything. So if I go and check these structures, I will see that that particular structure from Erik Lindahl is crap, and then I will avoid relying on it. I think the key for MD, AI, whatever data we imagine, is that we need to get better at developing quality assessment indicators, so that out of these structures I know what I can trust and how much I can trust it. And I hope that could matter in tremendously important ways — not just short term, as a consumer deciding which structures to use, but long term, because we're all also producers, right?
And if I get these quality assessment reports when I upload data, it will hopefully start to drive quality improvements in my team, so that we get better at producing data that meets high quality standards. So what can we do if all this data is available? Well, today the obvious consumer is of course the various machine learning methods, right? In general, I would probably prefer to train on experimental data when I can. The challenge is that producing new experimental structures — and particularly anything with motion resolved — is filthy expensive, while molecular dynamics simulations, even if we for a second trust the experiment slightly more, and even if the simulations or AI models are not quite as accurate, are probably a factor of a thousand cheaper. So, at least where AI is heading today, having access to a factor of a thousand more data is probably, in some cases, more important than having the highest possible quality data.

Milosz:

Right, these are great points. I'm just sometimes wondering: what is the extra information that lies in the dynamics, right? Because a lot of simulations — equilibrium simulations in particular — will not really explore much more than fluctuations around the ground state, around the native structure, right? So how do we even combine things like enhanced sampling and MD simulations to sample multiple conformations, especially when they don't keep track of the underlying physics in a strict way, like with free energies, right? I'm a bit allergic to the —

Erik Lindahl:

— word dynamics per se, right? Because I think you're quite right: the dynamics per se does not add much. If anybody has read chapter one of the GROMACS manual — we say there that we're not doing molecular dynamics with Newton's equations of motion because it's some sort of super fancy dynamics per se; it's just a really efficient way, for all-atom systems in particular, to sample the entire phase space. And of course, if you know the entire energy landscape of a molecule, you know everything you need to know about the molecule, and you can derive the dynamics from there. I don't think there is a lot of inherent value in simulating the exact dynamics during the simulation, because remember that we do not know the exact initial conditions anyway, so the dynamics is just one, or a handful, of examples of potential trajectories. This is potentially also what makes AI methods so interesting: it's not the dynamics per se we're getting from molecular dynamics, but efficient ways of exploring the phase space. And although AI methods might not quite be there yet, at least not for large molecules, I'm quite confident that they will be able to sample phase space quite efficiently in the long term. And if they can do that more efficiently than current methods, they're going to be just as good at determining the entire energy landscape — and then just as good for understanding the dynamics.
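Erik's point that the energy landscape already determines every equilibrium property can be made concrete with a toy sketch. This is purely illustrative (a hypothetical 1-D double-well landscape, not anything from the interview): once the Boltzmann weights of all configurations are known, state populations and averages follow without ever integrating Newton's equations.

```python
import numpy as np

# Toy 1-D "energy landscape": a tilted double well, in units of kT.
x = np.linspace(-2.0, 2.0, 2001)
energy = (x**2 - 1.0) ** 2 + 0.3 * x  # the tilt makes the left well (x ~ -1) deeper

# Boltzmann weights give the equilibrium population of every configuration,
# so any equilibrium observable follows without simulating dynamics at all.
weights = np.exp(-energy)
weights /= weights.sum()

p_left = weights[x < 0].sum()    # population of the left basin
p_right = weights[x >= 0].sum()  # population of the right basin
mean_x = (weights * x).sum()     # equilibrium average of x
```

Because the tilt lowers the left basin, its Boltzmann population dominates and the average of x comes out negative — what matters is sampling phase space well, not which particular trajectory got you there.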

Milosz:

Right, and that brings us back to the question of receptors and channels. Is your perspective that, you know, we understand them in principle, and it's now just a question of engineering and data to collect all the interesting biological facts about them? Or is there something that you would say we still don't see or grasp about how those molecules work?

Erik Lindahl:

Well, first, there are numerous families of receptors, right? And there are a bunch of them that are, well, not completely unknown, but mostly unknown. So there's going to be work remaining here for generations of scientists to come. Simulation groups in particular — or maybe my generation in particular — tended to pick things where the structure was already known. With AlphaFold and other things available today, we don't necessarily have to start with things where we already have a PDB structure; it's much easier today to pick a white spot on the map and start using computational methods to explore something where we don't even know the structure yet. But if we limit ourselves to my receptors, the ligand-gated ion channels: you could argue that on a very fundamental level, we understand what happens. It's a membrane protein, with the channel going through the membrane. There is a domain outside the membrane, an extracellular domain, that binds the ligand — the neurotransmitters — and when these bind, the receptor undergoes some sort of protein earthquake, and the channel pore, some 50 ångströms away, magically opens. That much we know, but the devil is in the details: how does it open? Biologically, all these receptors only open transiently, which makes a lot of biological sense, because producing the ion imbalance — which we do with ATP-driven pumps — is one of the most expensive processes in human cells. That's why bacteria, for instance, don't have a nervous system. But once you've opened the channel to create these signals, the problem is that the ligand is bound, right? And the ligand is going to stay bound — that's the whole definition of bound: it likes to be there. You don't want these channels to stay open for a second and keep leaking ions. It would be very expensive, but it would also mean that we couldn't create new signals in the nervous system.
So these receptors undergo a process called desensitization: in a millisecond or so they collapse and stop conducting. Just in the last two or three years we've started to get to the point where we understand this process, and we still don't understand the next one: how do we go from desensitization back to the resting state? How do we recycle, so that we can go through an entire cycle of these receptors, from resting to transiently activated to desensitized and then back to resting? What's tempting about this is all the various drugs with which we can modulate the process. And even more so because this is difficult to study with the simple model channels in our lab. I didn't tell you that these channels consist of five subunits — they are pentamers. In many cases we, and other groups, particularly on the computational side, work with five identical subunits, but most of the biologically interesting receptors are heteropentamers that consist of at least two, most commonly three, and sometimes four different subunits drawn from 19 genes. This creates a remarkable diversity of cooperativity or independent motions, which of course has very important effects under natural selection — this is likely what gives rise to the entire diversity in signaling in our brains. And we understand almost nothing about it. There are going to be decades of work remaining just for this single family of receptors. And that's what's great about biology: there are hundreds of families remaining after this one. It's not going to end within my lifetime.

Milosz:

I see, that's amazing. I think a lot of the answers on the complexity side have the form of: here's the data that answers your hundred particular questions, right? But how do you even synthesize an insight like this? Like what you say about the heteropentameric channels — is there an abstraction that you can make from the data and hold in mind as an answer to: what is the role of heteromeric ligand-gated channels?

Erik Lindahl:

So, uh,

Milosz:

right there.

Erik Lindahl:

I think computations serve as one important tool there. There are things we can do with computations that are very difficult to do in experiments, in particular accessing these transient states: what's actually happening during the opening and collapse processes, and how are the different subunits behaving? Remember that these are transient states that are very difficult to capture experimentally, and if we can't capture them experimentally, it's very difficult to determine the structure. Now, having said that, simulations certainly have limited accuracy. Many of these channels are remarkably sensitive to the lipid composition of the membranes — the human ones don't really work efficiently without cholesterol, for instance. So, for us, one of the most powerful approaches has been to combine simulations with so-called electrophysiology experiments, which sounds complicated but really isn't. It means that we take DNA for a particular receptor and inject a few nanoliters of it into frog eggs — Xenopus laevis oocytes, the largest egg cells that exist; they are a millimeter across. Then we put them in a fridge for three days — an incubator, as we say in the lab — and a few days later, 99 percent of all the membrane proteins on the surface of this small cell are going to be our particular ligand-gated ion channel. Now I can put two glass pipettes on it and literally measure how much current is going through this membrane when I subject the channel to different conditions — both voltages and external molecules. These types of experiments, to me, are the extreme opposite of simulations, right? I have no atomic detail whatsoever; I can't see anything about what happens structurally. The one thing I do get, though, is absolute answers about what's happening biologically: is the conductance increasing or decreasing?
If I now try this with different subunit compositions, is the process happening slower or faster, more or less? That gives you the ability to use simulations as a Gedankenexperiment: we can have models and ideas, but then, of course, it's imperative that we actually assess them and compare with the experimental results. If I believe this simulation model should result in something, is that actually what I see in an experiment? And I have a great example of how we fooled ourselves many years ago, which we never published — that's why it can be cool to mention it on a podcast. When we got started with the experiments, I had this ambition to show that we could do something in a simulation and then predict the experiment, with fairly simple changes to a channel that should be easy to do in a simulation, right? These channels are fairly hydrophobic on the inside, so we figured that if we make the pore more hydrophilic by replacing some residues — I think it was leucines — with serines or threonines, then the entire pore becomes more hydrophilic. We did that in the simulation, and the pore is definitely more hydrophilic; the pore is larger, you see a much larger radius. Then you go into the lab expecting to confirm this — the channel should open more easily and have higher conductance. And what you see in the lab is the extreme opposite: it has almost no conductance at all, and it is significantly harder to open the channel. Then you start wondering what went wrong. And I realized that with all these serines or threonines, what we had created was effectively an ion binding site. The first ion that goes into the pore, instead of passing through, gets stuck in the pore and binds there. This is actually known experimentally for divalent ions.
And we had just created a similar site for a monovalent ion. The memento mori here is that in a simulation you think you can measure conductance by the radius of the channel, but what you're actually measuring in the experiment is the number of ions per unit of time that go through. And if you instead count the number of ions per unit of time that go through — which you can do with computational electrophysiology — the result matches perfectly. So I think it is important to compare simulations and experiments, but there's this analogy with René Magritte, right? Ceci n'est pas une pipe. The simulation is a model; it's not the reality. Don't confuse your model with the reality.
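The "count ions, not radii" lesson is easy to illustrate. The sketch below is hypothetical (not GROMACS code, and the trajectory values are made up): given per-frame positions of an ion along the pore axis, it counts complete permeation events, which is the quantity an electrophysiology experiment actually reports.

```python
import numpy as np

def count_crossings(z, z_lo=-1.5, z_hi=1.5):
    """Count full permeation events for one ion.

    z: the ion's position (nm) along the pore axis, one value per frame;
    z_lo/z_hi bracket the pore. An event is counted each time the ion
    enters from one side of the membrane and exits on the other.
    """
    events = 0
    side = None  # which side of the pore the ion was last seen on
    for zi in z:
        if zi < z_lo:
            if side == "top":
                events += 1  # entered from above, exited below
            side = "bottom"
        elif zi > z_hi:
            if side == "bottom":
                events += 1  # entered from below, exited above
            side = "top"
    return events

# Hypothetical trajectory: the ion crosses once in each direction.
traj = np.array([2.0, 0.5, -2.0, -0.3, 0.2, 2.0])
n = count_crossings(traj)  # 2 events; current estimate would be n*e/t_sim
```

An ion that merely pokes into the pore and retreats contributes nothing — just like the mutant channel whose wider but sticky pore conducts almost no current.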

Milosz:

Absolutely. And even with that, I think it eventually makes you humble at some point, knowing that you come up with a simple explanation, and then the simulation or the experiment tells you that there's actually a very obvious alternative you didn't consider, right? That there's an evolutionary balance of forces, and you didn't think of that at first, being fixated on a different flavor of the model.

Erik Lindahl:

Yeah, and I think,

Milosz:

Everyone has experiences like that.

Erik Lindahl:

And I think I can say this as a computational person, wearing my computational hat: some of the smartest people I know are computational researchers, and I'm in awe of Klaus Schulten when he was still around, Peter Kollman, Benoit Roux, and others. And yet there's a danger with us being computational. We're smart at the physics, we know the math in and out, we know all the details about the methods; many of us computational people could actually derive the Bloch equations for NMR. But if I look at some of the smartest experimental colleagues I have, or rather the experimental colleagues I most admire, like Gunnar von Heijne here in Stockholm, they frequently go for simplicity, because they are so deeply aware of how insanely complicated the systems they study are. The only way to understand them is to ignore all the details and go down to very simple questions, questions that you can ask in a lab and get yes or no answers. And I learned 25 years ago that this is, of course, not simple at all. The hardest thing is learning to drill complicated phenomena down so that they can be answered with very simple questions. That's probably also an important lesson for computational people like us: "smart" is probably not the right word. We're good at math, we're good at physics; don't confuse that with intelligence. Intelligence is frequently the ability to ignore all the details and come up with one very, very smart experiment that will answer the same question without the need for all those equations.

Milosz:

The bottom line being that simplicity is hard.

Erik Lindahl:

Simplicity is hard, but it's worth striving for.

Milosz:

Great. And you strive for it. I always like to promote good resources for the community, so I just wanted to bring up your lectures. Where are they available? Can you point people to them?

Erik Lindahl:

Oh, that's on YouTube. I don't even know the link, but if you search for my name on YouTube, you will likely find them. I'm actually very happy that they're useful. This was a side effect of the COVID pandemic. For a long time, I used to record my lectures just with a built-in camera or a small camera on a tripod, but it's a horrible experience; it's a B-, C-, or D-class experience compared to actually being in the room, right? In the best case you're seeing the teacher's nostrils, and in the worst case you're seeing the back of the teacher while I'm writing on the board. But during COVID we were all told, with three or four weeks' notice, that we would have to move to completely remote teaching. I had seen Adrian Roitberg give a talk a few months earlier; I think he was using one of these light boards at the University of Florida. And what can I say, I'm a tech geek. For a long time I had thought about doing a proper studio version of these recordings, and I tried to get my local university studio interested, but they didn't have time. So at some point I just bought a bunch of equipment over Christmas, and I probably spent way too many nights, two or three in the morning, here in the lab. The great thing with COVID is that all the rooms here were empty, so I could steal a lecture room here in Solna and set up a studio. I had it working within four weeks, and I recorded 298 of these small talklets: I split all my lectures into three-to-five-minute concepts and went through all of them. This was for my students at KTH, but I figured that since I had done the work anyway, I should share it. Twenty years ago, I would likely have focused on all the imperfections. There are a bunch of mistakes in these lectures, but the important thing is not the mistakes, right?
The important things are all the things that are not mistakes, and hopefully there are fewer mistakes than things that are right in them, at least. At some point I decided to flip the switch and make this available to anybody. And that's the cool thing: I think more students have watched these online in the last three years than I have taught in person during the rest of my career at KTH, which is a bit humbling. But I'm happy that it's useful.

Milosz:

Yeah, I think it's an amazing resource, and it's great to have a repository of questions and problems in the field that go back potentially decades, so people have a reference for what is important and what is not. And I'm always happy to promote, again, unifying pictures of the field that bring knowledge to the younger generation.

Erik Lindahl:

But as a follow-up, I can just say: don't think of yourselves as the younger generation. If you are, say, a PhD student listening to this: the fact that the computational side of our field in particular depends so much on computers, which is difficult for those of us who are older, leads to a remarkably fast pace, so that most things on the computational side that are more than ten years old aren't really that relevant anymore. All of us have to keep learning. And I guess this podcast is another example of sharing things, right? Even if you are fairly junior, sure, you might not have my or somebody else's perspective, but you're likely more on top of, say, novel techniques or the latest thing you're working with. So one way to contribute is to find ways of sharing whatever you're working on right now as a student; you don't have to wait ten or twenty years to do that. I frequently look up resources online, and I couldn't care less about how senior the person I'm learning from is. If they know it better than I do, I'm more than happy to learn.

Milosz:

Absolutely. I think there's also value in knowing the fundamentals of free energy methods and basic statistical mechanics, which maybe some people these days will skip in favor of, I don't know, statistics-heavy machine learning, right? So there will be different accents in different generations, and I absolutely think getting inspired by different perspectives, both ways, is of great value.

Erik Lindahl:

That is very true.

Milosz:

Definitely something to promote. Okay, thank you so much for the conversation and for the insights.

Erik Lindahl:

And thanks. It was great joining you.

Milosz:

Hope you have a great day.

Erik Lindahl:

Good. Ciao.

Thank you for listening. See you in the next episode of Phase Space Invaders.