BJKS Podcast

92. Tom Hardwicke: Meta-research, reproducibility, and post-publication critique

Tom Hardwicke is a Research Fellow at the University of Melbourne. We talk about meta-science, including Tom's work on post-publication critique and registered reports, what his new role as editor at Psychological Science entails, and much more.

BJKS Podcast is a podcast about neuroscience, psychology, and anything vaguely related, hosted by Benjamin James Kuper-Smith.

Support the show: https://geni.us/bjks-patreon

Timestamps
0:00:00: What is meta-science/meta-research?
0:03:15: How Tom got involved in meta-science
0:21:51: Post-publication critique in journals
0:39:30: How Tom's work (registered reports) led to policy changes at journals
0:44:08: Tom is now the STAR (statistics, transparency, and rigor) editor at Psychological Science
0:48:17: How to best share data that can be used by people with different backgrounds
0:54:51: A book or paper more people should read
0:56:36: Something Tom wishes he'd learnt sooner
1:00:13: Jobs in meta-science
1:03:29: Advice for PhD students/postdocs



References & links
Episodes w/ Nosek, Vazire, & Chambers:
https://geni.us/bjks-nosek
https://geni.us/bjks-vazire
https://geni.us/bjks-chambers
Foamhenge: https://en.wikipedia.org/wiki/Foamhenge
METRICS: https://metrics.stanford.edu/
AIMOS: https://www.youtube.com/@aimosinc4164

Chambers & Mellor (2018). Protocol transparency is vital for registered reports. Nature Human Behaviour.
Hardwicke, Jameel, Jones, Walczak & Weinberg (2014). Only human: Scientists, systems, and suspect statistics. Opticon1826.
Hardwicke & Ioannidis (2018). Mapping the universe of registered reports. Nature Human Behaviour.
Hardwicke, Serghiou, Janiaud, Danchev, Crüwell, Goodman & Ioannidis (2020). Calibrating the scientific ecosystem through meta-research. Annual Review of Statistics and Its Application.
Hardwicke, Thibault, Kosie, Tzavella, Bendixen, Handcock, ... & Ioannidis (2022). Post-publication critique at top-ranked journals across scientific disciplines: A cross-sectional assessment of policies and practice. Royal Society Open Science.
Hardwicke & Vazire (2023). Transparency is now the default at Psychological Science. Psychological Science.
Kidwell, Lazarević, Baranski, Hardwicke, Piechowski, Falkenberg, ... & Nosek (2016). Badges to acknowledge open practices: A simple, low-cost, effective method for increasing transparency. PLoS Biology.
Nosek, Hardwicke, Moshontz, Allard, Corker, Dreber, ... & Vazire (2022). Replicability, robustness, and reproducibility in psychological science. Annual Review of Psychology.
Ritchie (2020). Science fictions: Exposing fraud, bias, negligence and hype in science.

[This is an automated transcript that contains many errors]

Benjamin James Kuper-Smith: [00:00:00] You know, one, one kind of funny thing, a funny thought I had while preparing this episode, was that I realized that I keep saying, when I, you know, invite guests and that kind of stuff, that meta-science is one of my main topics of the podcast. But in a way that's not actually true. Um, it was kind of only once I started, like, you know, preparing this episode that I realized that meta-science itself has a very specific meaning, i.e. doing science about science.
 
 

But what I actually always refer to is just the stuff around science, in a way, like, you know, what it's like to be a journal editor and that kind of stuff. So, I'm kind of looking forward to actually having my first meta-science conversation on the podcast. Even though I've been saying I've been having them for, for years now.
 
 

And I thought, yeah, maybe we could actually start like very broadly and kind of traditionally by just defining the term, just so we have like a slightly better grasp on it [00:01:00] for the rest of the conversation. Yeah, how would you define meta-science?
 
 

Tom Hardwicke: Yeah, so meta-science, which is sometimes called meta-research, so I might just say meta-research instead of meta-science occasionally. So meta-science is a field that uses scientific methods to study science itself, and it has several goals: trying to describe science and what scientists do, but also to evaluate science, and also to try and improve science. And to some extent, meta-science is a new field, and to some extent it's a very old field. Um, if you imagine a sort of Venn diagram of different scientific disciplines that all have pretty kind of porous boundaries, and researchers move between them to some extent, meta-research or meta-science is kind of an umbrella that covers many long-standing disciplines like psychology and history and philosophy of science, economics, evidence [00:02:00] based medicine, scientometrics, science and technology studies. These all fall under that kind of umbrella of, of meta-science, but it also occupies an empty space of that Venn diagram, at least the way we've been talking about it in the last decade or so, where I think there's been a dramatic rise in meta-science and the real beginning of the use of that term prominently in discourse. And that empty space, that I think is somewhat novel about the last decade of meta-science, is that it's much more empirically focused and systematic than previous efforts to describe science. And it also has a much greater applied focus or translational focus. So a lot of people who are involved in meta-science, um, currently actually want to change the way that science happens.
 
 

So they're not just sitting back and observing what scientists are doing. They are themselves often actually scientists and want [00:03:00] to improve what they perceive to be problems in the scientific ecosystem. So that's kind of, um, a definition in a nutshell. Um, 
 
 

Benjamin James Kuper-Smith: Mm hmm. Yeah, as I said, I just wanted to use that as a kind of brief intro. I wanted to ask a little bit first how you got into meta-science and that kind of stuff. Funnily enough, I think we might have crossed paths before, uh, because I did UCL in 2015, uh, and I, I just had most of my lectures at, uh, Bedford Way, where I'm assuming you were located.
 
 

So yeah, but maybe we, we actually crossed paths already without realizing it. Um, but I was curious, because I saw that you had, you know, I was trying to figure out like how people get into things and see if I can like figure it out from publications, that kind of stuff. So you did your PhD in 2016, uh, called Persistence and Plasticity in the Human Memory System: An Empirical Investigation of the Overwriting Hypothesis.
 
 

So very much not meta-analysis, from what I can tell. But you also had [00:04:00] in 2014 an article in Opticon, which I think is like a UCL thing, right? Something like that. It's on your Google Scholar, where you attended some sort of symposium in Amsterdam, and you and some, some other graduate students summarized it.
 
 

So maybe, yeah, why did, why did you attend the symposium, um, on improving scientific practice? Yeah.
 
 

Tom Hardwicke: Uh, at UCL, you probably walked past me looking very grumpy in, uh, in the corridor, because, um, I was, uh, I was doing my, uh, my PhD, um, as you say, on, on memory and, and human learning. So within the realms of cognitive psychology, basically. And as I was doing that, I was encountering all kinds of problems that I, I've subsequently realized were, you know, affecting many different scientific disciplines.
 
 

So, a lack of transparency, um, poor use of statistical methods, inadequate research design, poor [00:05:00] incentives, et cetera. The problems that meta-researchers study. I didn't know meta-research was a thing at the time. Um, I didn't really know open science was a thing. I was on Twitter and, uh, back in the day, Twitter was actually a very nice place to be.
 
 

There were all these conversations going on about what people often call the replication crisis and related issues. And I was reading about these issues and I was thinking, this is, this is what I'm seeing. This is what I'm seeing in the lab and in my own work and in my field. And it's, so it's not just me, it's like a broader problem. 
 
 

And to some extent that was comforting, right? Cause you know, it's, oh, it's not just me. It's not just me who can't replicate these studies. Other people are having those problems as well. To another extent, it's horrifying, because you kind of hope that, you know, particularly in domains like medicine, you know, I kind of assumed they knew what they were doing. And then you hear about these horror stories and these same problems that are happening there, and you think, oh my god, this is not good. So, um, I guess, yeah, there was this creeping realization which got, you know, [00:06:00] louder and louder and louder during my PhD, um, that there were some serious problems with the way that we do research, not just in my domain, but in many scientific domains. And, um, a few of my fellow PhD students, I'm not sure how we heard about it, but we heard about this, uh, symposium in, in, uh, in Amsterdam. I think it was called something like human factors in research or something, which is quite interesting because it was,
 
 

Benjamin James Kuper-Smith: with the human factors. I  
 
 

Tom Hardwicke: Uh, one of whom was 
 
 

uh, Eric-Jan Wagenmakers, who I later did a postdoc with. He and his colleagues had organized a symposium to talk about the psychological aspects of this issue, so how we're affected by incentives and various cognitive biases, like confirmation bias, and how can, how that can undermine, uh, the quality of research. So, uh, yeah, me and, uh, I think, uh, four of the PhD students in the department, um, went to the, uh, department admin and we said, you know, is there any funding that could help [00:07:00] us go on this like little trip to Amsterdam and go to this symposium, because, I don't think really any other, I don't think any members of faculty were particularly interested in this issue. My, uh, my supervisor being an exception, uh, David Shanks. But, uh, yeah, the department admin was just like, oh yeah, sure. Uh, we'll give you, uh, give you some money for this. So off we went, uh, I
 
 

Benjamin James Kuper-Smith: to say, it's not a, it's not a huge trip to Amsterdam. 
 
 

Tom Hardwicke: It's not a huge trip, but it felt, it felt quite exciting at the time, because we'd all felt, you know, like we were in this, in this kind of little bubble, where we thought we were kind of the only people worried, because no one really in the UCL faculty seemed to care about these issues.
 
 

Um, so we went over there and, yeah, it was super exciting. Um, we had all these talks and it was really kind of inspiring. And, uh, I think part of the deal with the admin was that we'd, we'd write something afterwards to, uh, you know, justify the cost of the trip. And I think it was going to be a blog post or something, but, um, yeah, UCL had this kind of, uh, sort of internal journal called Opticon something or other, and, uh, we ended up writing this little article [00:08:00] for them, which kind of summarized the meeting that we'd been to, and it's really funny.
 
 

Like reading, reading that back. I think I read it back like a few years ago when I was writing the, um, the more recent review of meta-research, which we might talk about in a minute, Calibrating the Scientific Ecosystem Through Meta-Research. And many of the themes were still, you know, very relevant.
 
 

And the structure, I think, fit quite well with the, uh, the more recent review. So, um, yeah, back in those days, I was starting to get interested in, in meta-research, even though I didn't know it was, it was called that at the time.
 
 

Benjamin James Kuper-Smith: Yeah, what was the, I mean, what was it like going to the symposium? I mean, you mentioned kind of before this the comfort and the terror of realizing that this is, you know, you're not the only one with this, with this problem. Did it, uh, I'm just curious, because, I guess I'm just a few years younger than you, and for me, many of the things around meta-science became almost standard in those just very few years in between.
 
 

So for me, it's kind [00:09:00] of, it's almost like not even necessary anymore to go to a conference like that, it feels like. Um, whereas it seems like for you it was much more like a kind of, what is this? What's gonna happen? Uh, so I'm just curious what it was like then, going to the symposium.
 
 

Tom Hardwicke: Yeah, that's really funny, because I think, for a lot of us who were around in those early days, these were not mainstream discussions, and in fact if you were talking about this stuff you were seen as like, you know, a bit odd and, uh,
 
 

people didn't want to talk about  
 
 

Benjamin James Kuper-Smith: or it's like Ah,  
 
 

Tom Hardwicke: of a nitpick, um, well, you may have heard some of the language that was thrown around in the early days, it was, uh, really vitriolic. 
 
 

People running replication studies were called bullies. If you try to replicate someone's work you're accused of being a bully. It very much felt like you were in a very small minority, and yeah, I guess in a way that's somewhat exciting, you sort of like pull together with a few people [00:10:00] who share your view on things, and now it's kind of funny to think that people are doing research and things like, you know, sharing data and stuff just seems normal to them. I mean that's great, I mean that shows how much progress has been made, but yeah, back in the, back in the day, particularly being a PhD student, I mean imagine, imagine starting your PhD in a particular discipline and at the same time people are saying that, like, the fundamental assumptions and, like, the methods that you're being taught in your methods class and the statistics that are, um, described in the textbooks that you're reading and you're supposed to be learning from are wrong. That's crazy! It's like someone's pulling the rug from under you when you step in the building, and really difficult, I think, psychologically as well to, to deal with. And unfortunately, a lot of, I think a lot of, um, PhD students have left because of that kind of thing, like they, they were struggling to, to replicate, you know, these big papers in their field, these big studies, reported in supposedly the best [00:11:00] journals, and they thought there was something wrong with them, and, you know, other people thought there was something wrong with them because they couldn't get those same results. Turns out, in many cases, it probably wasn't them, it was a problem with those original studies. But yeah, quite a, quite a difficult time. In a way, an exciting time, because, yeah, like that, um, that particular meeting that we went to, it was pretty exciting to see that there were people who were trying to do something about it. That was quite exciting.
 
 

Benjamin James Kuper-Smith: And then, uh, it's always, like, uh, looking at people's, uh, particularly in your case, because you were at UCL, I like looking at the acknowledgements of PhD theses to see if, like, there's any overlap or anything like that, because often you find the most interesting things there. Uh, interestingly, I didn't actually know any of the people you, you mentioned there.
 
 

Uh, but I found another sentence that was kind of interesting. Um, and I'd like to hear you elaborate a little bit on it, which was: thank you to Brian Nosek for allowing me to invite myself over to work with him and his team at the Center for Open Science last year. Um, so how did you, how did you invite yourself [00:12:00] over to work there?
 
 

Tom Hardwicke: Uh, so, um, yeah, so I guess I was, I was learning about this world of open science, and I, I guess I knew, I knew of Brian through Twitter, um, he would not have known me, um, 
 
 

Benjamin James Kuper-Smith: That's great. He's like a social media star. Not a scientist, but like, social media stuff, yeah. 
 
 

Tom Hardwicke: Well, also a very good scientist. Um, the Center for Open Science was, you know, just starting to gain awareness. And, you know, I was aware of it and I'd started sharing data on the Open Science Framework and stuff like that. And I think it was the only place I knew of in the world where, you know, well, there was a group of people who were working towards fixing science in some way. And, uh, as part of my PhD funding, I had, I think it was, I think they gave you like a couple of months or something of funding to go abroad. Um, I think [00:13:00] that their view was that you'd go and work in a research lab somewhere. Um, but I was like, oh, I could use this to go to the Center for Open Science. So I think I just emailed Brian out of the blue and, um, asked him if I could come and visit. And he said, sure. Then we had to figure out visas and stuff, which was a bit more complicated. But long story short, I ended up going out there to Virginia for, um, I think a month or so. Yeah, and it was awesome. I mean, the Center for Open Science was, like, relatively new back then, so there was like a really exciting atmosphere, and it was very different to the university, the environment that I was used to.
 
 

It was more like a start-up kind of environment. It was the first time I lived abroad, so that was very kind of exciting as well. Um, I remember, uh, walking back from the Center for Open Science to this house I was staying in one night in, uh, Charlottesville, which is where they're based. And there were like all these, like, fireflies in the air, and it was just really kind of atmospheric and cool, and, um, I had a really fun time there.
 
 

It was great. Everyone was really [00:14:00] friendly. Um, on my last night, there was a, there was a guy there called, uh, called Billy Hunt. He was like a, a photographer and developer. Super talented guy. He was also a DJ. So, uh, and I'm, I'm a bit of a DJ in my spare time. So me and him put on a DJ set, um, in his photography studio, like round the corner from the Center for Open Science. So we had this kind of party in his, uh, in his photography studio, which was pretty cool. So yeah, um, I had a great time there and I got involved in, I guess, my first meta-research study, um, a study of, um, badges, ironically at the journal Psychological Science, which we might talk about later. And yeah, I learned a huge amount there, even just within a few weeks.
 
 

Yeah, it was a cool time. 
 
 

Benjamin James Kuper-Smith: Did you have a particular plan of what you were going to do there? Like, I'd like to work on this thing, or was it just, Hey, there's this whole problem and I just want to learn more about everything, or? 
 
 

Tom Hardwicke: Yeah, I had, I had no idea what I was going to do there. They had no idea what I was going to do there, [00:15:00] so I just kind of rocked up and they were a bit like, oh, what do we, what do we do with this guy? Um, but I, you know, they were super friendly and I just kind of joined all the conversations that were happening. So much going on there, you know, they were talking about the Open Science Framework and, you know, what that interface should be like and how to encourage researchers to engage with open practices. I got involved with something called the pre-registration challenge, where basically, they'd been given, the Center for Open Science had been given, um, I think it was a million dollars to distribute to people to engage with pre-registration.
 
 

So this is, you know, this is like super early days, like no one in psychology pretty much was pre-registering at all. So they were like, how do we just get people to try this? Because if they try it once, they'll realize it's a good thing. So we were handing out these $1,000 prizes to people if they pre-registered for the first time. So I was helping with that and, um, I think, you know, contributing to a study. And yeah, it all worked out pretty well in the end. Oh, and I also, uh, one person there took me on a [00:16:00] trip to Foamhenge, which is a life-size replica of Stonehenge made of foam, like styrofoam, um, super surreal. Uh, and, you know, I think they, I think they thought I would have been to the real Stonehenge because I'm from the UK.
 
 

And so, you know, obviously I must've seen everything in the UK, but I haven't seen the real one, but this is incredible. It was, uh, yeah. So, you know, that was one of my top experiences as well. 
 
 

Benjamin James Kuper-Smith: Nice, I've never heard of Foamhenge. Now I know. Sounds like a reason to fly to the U.S. just for that.
 
 

Tom Hardwicke: For sure. 
 
 

Check it out.  
 
 

Benjamin James Kuper-Smith: Okay. So you, you finished your PhD, and then was it already clear that, okay, I want to not continue doing kind of standard cognitive psychology, that kind of stuff? Did you apply for, or did you apply for stuff in psychology, standard research, or, I mean, you ended up, I think it was immediately after, then working with John Ioannidis.
 
 

Uh, yeah. Kind of, what was your, kind of, how, how did [00:17:00] you make that decision? I mean, also just how many jobs were there in meta-science at the time and that kind of stuff. How did you manage that as a late PhD student, or
 
 

Tom Hardwicke: PhD. And at that point, I'm pretty sure I still didn't know what meta-research was. And I was pretty depressed, to be honest. I, you know, I'd gone into science thinking it was, you know, all these different scientific ideals like transparency and, to some extent, objectivity, trying to be honest in your work, um, trying to, trying to find out the truth, you know, like, rather than just promote yourself, um, things like that. I, I, you know, I thought that was really what science was like, and I had four years of finding out that really it wasn't. Um, obviously there were, there were good people, um, I was very fortunate to have a great supervisor. But I was, uh, my field in general, I thought, this is crazy. Like, even if I try and do good work in this field, it's not gonna make a difference, because there's so much crap in the literature. And the lack of transparency was just, [00:18:00] like, shocking. So, you had no idea what was out there. Um, all these, like, negative null results that, you know, you just didn't know about. So I got to the end of my PhD, I, I'm not quite sure, I think I, I think I was thinking, you know, I'm probably gonna need to leave academia, even though I kind of still loved it. I was still interested in the questions of my PhD thesis and memory, and I still liked doing experiments, et cetera. And then the way I ended up getting into meta-research was really serendipity.
 
 

There was an organization called BITSS, um, the Berkeley Initiative for Transparency in the Social Sciences, obviously based at Berkeley, the University of California, Berkeley. And they had this, uh, small grants scheme. I think they'd basically been given a pot of money by the, the Arnold Foundation, who also funded the Center for Open Science and also funded METRICS, um, which we'll probably talk about in a minute.
 
 

And, uh, so they had this pot of money to give out these small grants. And, um, [00:19:00] so I was reading about that and I had this idea to build a platform for a community living meta-analysis. So, one thing I was frustrated with is that, you know, you'd read a review of a particular area, and it was already outdated by several years. So my idea was, well, what if we did this online, and every time a new study was published that was relevant to the meta-analysis, you fed that in, um, and it automatically updated all the graphs, etc. And you could also put various, you know, other diagnostic graphs on there, like for publication bias and that kind of thing. So I had this idea for that and, uh, applied for one of these grants and I got an email back saying: your application has been rejected. However, somebody else put in an application that was very similar to yours. Do you want us to put you in touch with this person? So I was like, okay, I guess, like, I don't really know what we're going to do with that, but okay. Uh, this person turned out to be, uh, Mike Frank, who is a developmental psychologist at Stanford University. [00:20:00] And so I exchanged a few emails with him about this idea and that didn't end up really going anywhere. But he was like, oh, have you heard about this postdoc opportunity at Stanford and this group called METRICS? And METRICS is a group that does meta-research, um, at the time possibly the only group, or one of the only groups in the world that was doing meta-research, certainly one of the first ones. So that's the first time I think I came across this term of meta-research, and I read about this postdoc position. It was my birthday, and the deadline was the day after my birthday.
 
 

So I was like, oh, uh, this is not great timing. And also, I've never done any meta-research, so why would they ever employ me? And I had a conversation with my supervisor about it, and he said: well, I'm pretty sure you can kind of, you know, frame what you've done. Like, so you've done some replications. Maybe they'd consider that to be meta-research. Just, you know, just have a shot. I was on the cusp of not applying, um, and I just thought, oh, to hell with it. I'll, I'll, I'll take a shot at this. [00:21:00] And I did, and ended up getting this postdoc position and going over to, uh, Stanford for a couple of years to work in, in meta-research, and that's, everything started there, really, for the meta-research side of things.
 
 

Benjamin James Kuper-Smith: Okay, so you just casually got the only position in meta research available in the world. 
 
 

Tom Hardwicke: I'm not sure if it's the only one, but, 
 
 

Benjamin James Kuper-Smith: But I mean, if, yeah, but if it's one of the main labs doing it, then there can't have been that many, you know. Yeah, I thought we could, we could start kind of talking about the actual meta-research you've done. We could use some examples that you've worked on as an example of, you know, in part, this full, this kind of process of meta-science and how it works and how it kind of fits into the whole ecosystem.
 
 

Um, yeah, so maybe as an example kind of of meta-science and how it, yeah, how the results can directly influence kind of how science is done. You have an article on post [00:22:00] publication critique in journals, something I've also talked about a little bit, I think, with Simine Vazire, I think we talked a little bit about that.
 
 

So just, maybe to begin with, what is post publication critique in journals, and uh, what did you do, or what did you find first in, in that paper? 
 
 

Tom Hardwicke: Well, post-publication critique, generally, it's just the idea that after a scientific paper has been published, is there any kind of critical discourse about that paper in the scientific community? And that, of course, can take place in all kinds of places. It takes place on social media. It takes place in journal clubs, conferences, in the Q&A session.
 
 

It takes place in the corridor when you're chatting with your neighbor about, you know, a recent paper. But most formally, it takes place at scientific journals. Um, so the study we did was looking at post publication critique specifically at scientific journals, and we had a very kind [00:23:00] of wordy operational definition of what we meant by post publication critique in this particular study, but I will save your listeners that and just say, you know, that the prototypical example of what I mean here is a letter to the editor. 
 
 

So, um, if you see a paper published and you say, hey, there's a, there's an error in their analysis, or they, you know, if they tried this alternative analysis, they would've got different results, or I disagree with the interpretation, et cetera. Um, how do you kind of submit that idea to the journal and get it published alongside the original paper so that other people are aware of it? And of course, there are various advantages to doing that. There's advantages in terms of the fact that that could then become part of the scientific record, whereas, by contrast, discussions on social media are transient and kind of ephemeral and aren't part of the scientific record. And there's also some element of there being more incentive to do that kind of critique, because you, you know, potentially get a publication out of it. So, um, there are many [00:24:00] reasons to think, you know, post-publication critique at journals is especially interesting. Uh, yeah, so we decided to do a study about that. There's of course lots of anecdotal evidence of people struggling to get their critiques published in journals. You hear lots on, uh, lots on social media but also, um, on the Retraction Watch blog.
 
 

There's plenty of examples of people who spotted a serious error in a scientific paper, and they've not been able to convince the journal to publish a critique about it. And that paper is essentially just left there without this issue being identified. And, you know, new readers will come to that paper and not be aware of that issue, and that, that seems problematic. Yeah, so I, partly motivated by those anecdotal examples, I wanted to do a study of, you know, what are journals' formal policies about post-publication critique? You know, first and foremost, do they even accept it? Do they have a letter to the editor or similar format that you can actually submit your critiques to the [00:25:00] journal through? Um, and then if they do, do they impose any kind of limits on those critiques? The primary ones being, do they impose length limits? Um, and do they impose time limits? So they might say, we'll only accept post-publication critique within three months of the original article being published, for example. And time limits I think are particularly interesting because I don't see a good justification for them.
 
 

Like, I'm not quite sure why we think, you know, the scientific conversation about this paper would end within three months, or why, if you found an error in a paper from 20 years ago, you know, you shouldn't point that out.
 
 

Benjamin James Kuper-Smith: Yeah, I was gonna ask about that just briefly. There is, is there, I mean, what do the journals provide, has anyone provided a reason for why there should be this time limit? Is it just so, like, you don't get, you know, tons and tons of letters to the editor? Like, if you have an old journal that's been around for like 80 years, you don't just get, like, you know, bombarded with letters about minor errors that happened 80 years ago, or.
 
 

[00:26:00] Uh,  
 
 

Tom Hardwicke: I don't see journals that don't have these time limits being bombarded with, uh, letters to the editor about articles from 80 years ago, so I don't, I think it's like, it's, it's a problem in theory, but not really in practice. I, I think the journals' motivation is that they want to promote what they would call timely discourse, like they, they want to be, you know, have their journal be more about the latest articles. But, um, frankly, I think that in a lot of cases, they just simply don't want errors to be pointed out in the published work, because it damages the journal's brand. So, uh, I do, I do not know that for certain, but I suspect that's, uh, playing into this a lot. Yeah. So, uh, so this was a pretty descriptive study. We also wanted to compare across different scientific disciplines how the journals in those fields, uh, were handling post-publication critique. So we, uh, we basically looked across science, uh, science divided into 22 different scientific disciplines. [00:27:00] And in each of those domains, we took the top 15 journals as ranked by impact factor.
 
 

So, that's not a perfect measure of anything really, but it's a good indicator of journal prominence. Um, so 15 prominent journals, very prominent journals, in each of these disciplines. Disciplines like medicine and psychology and psychiatry and economics. And we went to those journals' websites and we looked to see whether there was a way of submitting post-publication critique, and if so, like, what format was that?
 
 

So sometimes it might be a letter to the editor, which is a, you know, a formal publication, but usually quite a short one. Sometimes it was a commentary, which is a more kind of extensive form of, um, critique. And sometimes it was just kind of like an online comments function, where you could kind of, you know, tap something into a box, a few sentences, and it would be published kind of immediately. Um, and then we also extracted, uh, information about whether there were, uh, length limits, time-to-submit limits, and various other details, like peer review and how that was handled and things like that. [00:28:00] And the sort of headline finding is that, uh, I think, yeah, about just over a third of these top journals did not have any way of submitting post-publication critique to them. And then of the journals that did offer a form of post-publication critique, many imposed limits in terms of time to submit and length. And some of those limits were really strict. Uh, so I think the most restrictive word limit we encountered was 175 words. I don't know if you've ever tried saying anything in 175 words, but it's not easy to say anything substantive or evidence-based or even particularly useful. And then the strictest time limit was two weeks. So imagine seeing an article that's been published and thinking, I have just two weeks to write this critique, and then after that time, this paper is immune to criticism. Uh, that seems crazy to me. So, uh, yeah, so I, I think that study exposed some weaknesses in the way that journals are handling post-publication critique, and we [00:29:00] offer a few suggestions about how they could, um, improve that situation.
 
 

Although I don't think journal policy is the only issue here. I think there are a lot of kind of cultural issues as well, about how scientists critique each other and understand how to accept critique as well. So, that's a deeper issue.
 
 

Benjamin James Kuper-Smith: The most important question I wrote down is about that really long word limit for the critique. I mean, which journal was that? Because that's longer than most of my articles I think I've ever written.
 
 

Tom Hardwicke: Oh yeah, that was um, that was definitely an outlier and I have no idea really why. 
 
 

Benjamin James Kuper-Smith: I don't know, maybe they just didn't want to limit it, but they didn't want to literally make it unlimited. One, one thing I found kind of interesting was the difference between disciplines. Um, so if I remember correctly, there are some disciplines, I think like medicine or something like that, where pretty much a hundred percent of the journals you surveyed offer some form of, uh, post-publication critique.
 
 

[00:30:00] But in other areas, I think, like mathematics, it's very uncommon for a journal to do that. So I'm just curious, I mean, almost just why there are these big differences in journals, uh, in areas, and maybe also whether that's, I'm just curious whether that actually says anything important, or they just have different ways of doing it, so it just, you know, it looks, the kind of findings you have look kind of like bigger differences than they are in practice, if that makes sense.
 
 

Tom Hardwicke: I think there are many different factors that could be at play here, um, in why particular disciplines have more or less, uh, post-publication critique. And our study did not investigate those, so I do not know for sure, um, what those factors are and the extent to which they're involved. But speculating, I would think that in mathematics, for example, you know, it's quite a different kind of paper that's going to get [00:31:00] published in mathematics versus medicine or psychology, for example, and the way that they respond to each other's work will often be like a paper response to a paper, and it won't necessarily need to be a small, a smaller kind of critique of an error in a particular study. And another factor is the extent to which that research has more kind of imminent applied consequences. So in medicine, if there is an error in a study, like a clinical trial, that's informing clinicians about how they treat patients, it is extremely important that that gets flagged as soon as possible and, you know, shown prominently next to that article. You know, it's less important in the, um, the less applied disciplines that such errors are flagged.
 
 

So, there's presumably much more, there's been much more momentum historically in medicine for allowing this kind of thing to happen. You know, having said [00:32:00] that, uh, although we found that, uh, in the medical journals, they were much more likely to allow post-publication critique in some form, they also had the strictest limits on critique. I think that example I gave you before of the two-week time period comes from the Lancet, one of the biggest medical journals. So there's still, uh, you know... But yeah, there are many factors involved, I think, in these different, uh, differences between the disciplines.
 
 

Benjamin James Kuper-Smith: Yeah, it's so weird, this, like, I mean, do you have any idea of why they make the time limits so restrictive? Because, I mean, in part, it just seems to me, if I want to be cynical, it seems like it's a way to pretend you're offering it without actually offering it. I don't know, maybe for the Lancet it's a little bit different, because it's so, I imagine, uh, in medicine it's so prominent that people might actually, you know, read the articles pretty soon after they appear, but two weeks is still, like, I mean, people have stuff to do, right?
 
 

I mean, I guess [00:33:00] it's, uh, oh sorry, was the Lancet only the word limit, not the time limit?
 
 

Tom Hardwicke: The Lancet was the, uh, the time limit, but it also had a tight,  
 
 

uh, word limit, I think, of 200 words. So 
 
 

it's, it's pretty bad here. Yeah. 
 
 

Benjamin James Kuper-Smith: Yeah, but like, do you know why they do the, I mean, it's, yeah,  
 
 

Tom Hardwicke: Uh, part of it, part of it as well might be a bit of a holdover from the print era, uh, where journals were predominantly in print and not online, and they would have wanted to publish any such critiques in the same issue as the original paper, um, so they would have wanted to have, have had them, you know, very quickly. So some of it's maybe a bit of a hangover from that. You know, in general, I think a lot of journal policy that you see is basically stuff that was brought in at some point and everyone's kind of forgotten about, and no one's really actively thought, oh, we need to update that or change that, or this is not optimal. It's often also not entirely clear who is responsible. Um, it's not always the editor in [00:34:00] chief who can change those policies. Sometimes it's, uh, some kind of publication board or the publisher, and often nobody really seems to know who has the power to change those things. Um, so they just sit there unchanged for a long time, even if that's suboptimal. And the hope, the hope of doing this kind of meta-research, is that the people who do have the power to change those things see it and realize it's a problem, and they do actually change it. And there is a bit of a gap here. There's a gap between the research and the policy, because it's not entirely clear how you fill that.
 
 

So, you know, once we published that study, it wasn't entirely clear: how do we communicate that to the people who need to hear about it? And, you know, to some extent you could, you know, maybe we could do a survey of all of these editors and, like, let them know about the study and see what they think about it, etc.
 
 

Maybe, but, you know, many of them are very busy people and they often don't respond to surveys. So, um, there is an important gap there, and sometimes you're lucky and you do get [00:35:00] a good response to your meta-research, but, um, often it's just a kind of silence, and you think, oh, have I just published another paper that no one really listens to? Um,
 
 

Benjamin James Kuper-Smith: The answer is  
 
 

Tom Hardwicke: So that's something I, yeah, that's something I try to think of more, is, like, how do we, when we're planning studies like that, how do we also make a plan for disseminating that to the people who need to hear it most? And there are various ideas about how you can do that more effectively, like, you know, having some kind of data dashboard or something that you can refer people to, so that the results are a bit more kind of accessible, and things like that, but
 
 

Benjamin James Kuper-Smith: I'm curious what you think about where, like, one question I had, I was thinking about post-publication critique, is, like, whether it even makes sense to have it as part of the actual journal, like the original journal that published the original findings. I mean, do you think that that is the best way to do it, or is, [00:36:00] you know, maybe some sort of external and independent place, like PubPeer maybe, or something like that.
 
 

Do you think that's maybe a better approach? Because, you know, the journal is always, as you, you know, they don't want to publish maybe too many negative things, because I think it makes them look bad and that kind of stuff, whereas some sort of independent platform just doesn't suffer from at least some of those problems.
 
 

Tom Hardwicke: So I definitely hesitate to say that journals are the best place for this kind of thing to happen, and I think a, you know, a plethora of different places where this stuff can happen, where post-publication critique can happen, is probably a good thing. There are, you know, advantages and disadvantages to these different things though, so, you know, as I said before, social media: instant access, you know, you can, you can put your critique on there immediately, but it depends on how many followers you have, you know, the extent to which people are going to hear about that, and it's ephemeral, it doesn't, it's not part of the scientific record, so when people read the relevant paper, they don't necessarily, they're not necessarily aware of a criticism. [00:37:00] PubPeer, uh, is, you know, much more structured than that, but again, if people go to the original paper, are they aware of the, um, any relevant comments on PubPeer? I think you can get a plugin for PubPeer, which will alert you to such comments, but, you know, has everyone installed that plugin? No.
 
 

Um,  
 
 

Benjamin James Kuper-Smith: Also doesn't work that  
 
 

Tom Hardwicke: anyone has. 
 
 

I don't,  
 
 

Benjamin James Kuper-Smith: it after I talked to Elisabeth Bik, um, because she recommended it. And it, it seems like whenever a paper is mentioned somewhere on a webpage, it tells you about critiques. So, like, if you have, sometimes I'll have, I'll be like on, once I was, like, on a, on a podcast website, and they had referenced some paper somewhere along the episodes, and it said, like, there's a comment here.
 
 

It's like, yeah, so sometimes, or sometimes it will, uh, have added that it says, like, there's comments for this paper, but there were comments for, like, two other papers that that paper cited, or something like that. It was a bit weird. So yeah, it's, it's not, it's not optimal.
 
 

Tom Hardwicke: Yeah. So I think things like PubPeer are great. Um, I mean, there [00:38:00] was also the, the case of, uh, PubMed Commons, I don't know if you've heard of that. It was an effort by PubMed to introduce something very similar to PubPeer, basically a, um, community commenting platform, and they ultimately decided to shut it down because they weren't getting enough comments. So, I think an important issue here is incentives, like what incentive people have to actually submit a critique, to make that critique known. And there are some people, like Elisabeth Bik, who are doing tremendous work and publishing their critiques on PubPeer, but a lot of researchers don't do that.
 
 

Um, as their kind of, you know, full-time focus. And if anything, they're just going to be disincentivized to, to, to publish a critique, even if they have one, because they potentially, you know, fear repercussions from the original authors. So, they need some kind of carrot, I think, to do it, and the one carrot that journals can offer is a publication in the journal, right?
 
 

Uh, whether we like it or not, [00:39:00] publications are a de facto currency in the kind of scientific ecosystem, so that is potentially one way to leverage journals to increase post-publication critique. I mean, I think this does expose a deeper problem about incentive structures in science, and, you know, we shouldn't need that, we shouldn't need that carrot to do critique of each other's work.
 
 

That should be part of the job. However, you know, being realistic, at least in the short term, I think leveraging journals to encourage post-publication critique is probably a good idea.
 
 

Benjamin James Kuper-Smith: Yeah, I want to talk a little bit about kind of the effect that meta-science has. Um, I mean, I think the post-publication critique paper is pretty recent, if I remember, but you mentioned, uh, before we started recording that, for example, um, some of the work you've done on open data, uh, has actually led to some, to some policy changes at the journal.
 
 

Um, also I think Brian Nosek, when I talked to him, he mentioned that, it's one of yours, maybe it was this one, I can't remember, but [00:40:00] that it had, I think it was for pre-registration or registered reports, I think it was registered reports, that it, like, laid open some problems that could then be addressed on OSF. Uh, yeah, I was just hoping you could maybe comment a little bit on kind of what, what you found, uh, for the open data and kind of how the journal then addressed, um, what you found.
 
 

Tom Hardwicke: Yeah, sure. So those are, um, those are two different studies. Uh, so maybe I'll briefly talk about the registered reports one first. At the time, registered reports were pretty new, and they'd only been adopted by a few journals. And, uh, we thought this was a really good time to do a pretty descriptive study, just looking at, you know, how many registered reports are there out there? I think we were thinking of doing something more extensive, like, you know, looking at the content of those registered reports and examining various aspects of them. But the first stage of that was simply to identify how many registered reports are there, [00:41:00] and how many journals are publishing them. And we ran into all kinds of problems, uh, doing that.
 
 

So the kind of headline finding of that study is that, at the time, most registered reports were not registered. Um, they were not publicly available as the stage one registered report. That isn't actually as bad as it sounds, because I think at the time, you know, there was a pretty small community working on this, and basically what happened was that most registered reports were simply being handled in-house by the journals. So the stage one manuscripts, they were still being assessed by the journals at stage one, and there was a, um, you know, a document at the journal which was the stage one manuscript, however, it wasn't publicly available. So that's not ideal in the long term, because the research community can't verify and check these things, it kind of relies on a small group of people at journals to do that. So we, uh, yeah, so we ended up turning this into a, a study and looked at how often [00:42:00] this was happening. We found problems with the fact that many registered reports didn't even identify themselves properly as a registered report. There was poor metadata, so it's quite difficult to do meta-research on these things. And yeah, I think, I remember being a little bit worried actually just before we published that, because, you know, it was, it was, I was, um, super excited about, and still am super excited about, registered reports.
 
 

I think they're a great idea in theory, and yet we'd found these problems with the implementation. And, you know, I spent a long time thinking about how we deliver that message, which is, you know, somewhat nuanced. Um, you know, we're not bashing the theoretical concept, we're just saying there are some kind of implementation problems here. But it turned out to be just like a really nice thing. So we, you know, we published the paper and very quickly a commentary was published by Chris Chambers, um, who I think you've had on the podcast.
 
 

He's one of the pioneers of the registered reports format. Um, and [00:43:00] David Mellor, who is at the Center for Open Science. And those two are, you know, prominent people in the registered reports kind of world, and they coordinate policy for registered reports. I remember hearing that they'd, uh, you know, written a commentary, and often if someone writes a commentary on your paper, it's often a negative thing, and you're thinking, oh god. But, um, you know, I read their commentary and I was just like, oh, you know, such a relief.
 
 

And they, they were like, oh, this is great. Like, thank you for doing research on this thing. And, you know, it's really cool to hear, uh, not really cool to hear there are problems. No one wants to hear there are problems, but it's great that the research was done and that we now know about these problems. 
 
 

And, um, here's what we're going to do to correct it, and in fact, we've already started, and here are the results of that. So they were reaching out to journal editors and making them aware of the problem. They created a central registry, um, so that journals would use that instead of keeping everything internal. And this all happened really rapidly, um, can't remember exactly, but within a few months or something. So that was a, a very encouraging [00:44:00] example of how meta-research can have, can lead to changes, um, in the scientific ecosystem.
 
 

So I like, I like that example. 
 
 

Benjamin James Kuper-Smith: So one thing that happened between me asking you to come on the podcast and us now actually talking is that, well, first, I mean, Simine Vazire was named, well, now already, the new editor of Psych Science. And secondly, you were named, you have a kind of special kind of role, I feel like, uh, with the title of Senior Editor for Statistics, Transparency and Rigour at, yeah, at Psychological Science, obviously.
 
 

You have kind of this like slightly separate position, uh, in the whole editorial board. Um, so I'm curious, uh, I mean, I guess you've, we're recording this on the 11th of January, so I guess you've been officially working in this, as part of this for 10 days now. Yeah, what, what exactly does that role entail? 
 
 

Uh, yeah, maybe that's just a question.[00:45:00]  
 
 

Tom Hardwicke: Yeah, so, um, super exciting, of course, that Simine's now the Editor in Chief of Psychological Science, and as part of that, she wanted to improve, um, how the journal handles various issues related to statistics, transparency, rigor, and ethics. Um, ethics is not included in my title because then the acronym would have been STARE instead of STAR, which would have been upsetting, I think.
 
 

Benjamin James Kuper-Smith: Here's the staring editor here. Yeah, 
 
 

Tom Hardwicke: yeah, so the, the team that I'll be leading, basically our remit is to provide specialized assistance on anything that fits under that umbrella. And, uh, part of that activity is going to be doing routine checks on mainly transparency. So, um, has the data been shared, et cetera, is the [00:46:00] study preregistered, things like that. And, uh, another aspect of the team is to be available to give ad hoc advice to, um, the editors who are handling papers. So if they encounter, um, you know, a statistical model that they're, they don't know much about, then they can put out the bat signal, if you like, um, to the STAR team. And then someone from our team will help them, and that might just be kind of giving them some informal advice about a particular specific question, or it might be providing a full-blown, specialized review of that particular issue that's a problem. And part of the inspiration for that actually comes from, uh, so one thing I'm pretty keen on is identifying things that are done in some fields that seem to help improve rigor, quality, et cetera, that aren't being used in others.
 
 

And when I was in the METRICS group, most of them work in medicine, not in psychology. And it's actually very common in medicine for journals to have dedicated statistical reviewers or editors. [00:47:00] And we did a survey of that. We did a comparative survey in psychology and found that most psychology journals don't have this. So now we have this opportunity to actually put it into practice. Um, so I'm really excited. We have this, uh, great team put together to, yeah, help, like, address all of these issues. Yeah, that's it in a nutshell.
 
 

Benjamin James Kuper-Smith: I think it's, uh, it's really cool and interesting because I mean, you know, I'm just, well, I just finished my PhD, but I, you know, have fairly limited experience with peer review and that kind of stuff. And one thing I always find slightly weird is that, for example, I have no evidence that anyone ever read any of my pre registrations. 
 
 

Even though I've pre-registered, like, several, you know, for each of the papers, most of them, I had pre-registrations. And, you know, sometimes, because sometimes you have, like, slight deviations from them or something like that, and you think, like, oh, how are they, how are they going to react to that? But no one's ever commented on anything in the pre-registrations.
 
 

In general, it feels like a lot of the kind of open [00:48:00] science things, or like reproducibility, that some of the new tools we use are actually not being addressed during peer review. And it's kind of cool that it seems like Psych Science now is taking a bit of a stand to try and actually do a little bit more about that.
 
 

Yeah. One thing I had a question about, and this is kind of a fairly generic question, but one that, I mean, you might have thought about a bit more than most people, given that this is now your, you're working as part of this, is: what do you do about different data formats and programming languages, these kinds of things? Because I've submitted papers and then people said, like, oh, thanks for the data, but I don't use MATLAB, so I can't use it.
 
 

So now, you know, I always then upload the data as a CSV file also, but obviously all my code is written in MATLAB and that kind of stuff. So I'm just curious kind of what your thoughts are on that, because especially something like Psych Science, you must get submissions with all sorts of data formats and [00:49:00] programming languages and that kind of stuff.
 
 

So I'm just, yeah, is that just kind of a problem of the field, and you just solve it by having expertise in your team of the different approaches that people might have, or kind of, how do you deal with that?
 
 

Tom Hardwicke: Yeah, that's a great question. So, currently, I don't know what variety of different formats we're going to receive, because sharing of analysis code was not previously a requirement at Psychological Science, so we, that's a requirement that we've just brought in, um, so I'm yet to see what we'll get. But, um, as you say, I expect there is going to be, uh, at least some variety there. We're going to see some MATLAB scripts. We're going to see some Python. We're going to see some R. We're going to see some SPSS syntax, uh, JASP modules, et cetera. All kinds of things. And yeah, it's difficult. We, uh, I mean, essentially the way we're going to handle that is to strongly encourage authors to at least use open formats where [00:50:00] possible, um, so sharing data in a CSV file, um, for example. Um, that means that it's maximally interoperable with other software programs. In other words, other software programs can read, um, the CSV file, whereas they wouldn't be able to read a MATLAB file, for example, necessarily. So the, the, uh, the more that researchers can do that, the better. However, I think we do need to be flexible to some extent. And, you know, authors, um, will need to use the tools that, you know, they have access to themselves and that they've got training on, et cetera. We can't just expect everybody to use, you know, the pipeline that I prefer, for example, um, which would make my life easier, but, you know, that's just not feasible.
 
 

the pipeline that I prefer, for example, which would make my life easier, but, you know, that's just not feasible. So yes, essentially we need to have a team with diverse skills and access to different software programs so we can handle that. And in the event that we don't, we'll probably need to call upon the community and recruit ad hoc STAR team members [00:51:00] to come in and help us with those particular cases. 
 
 

Yeah. 
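[Illustrative sketch, not from the conversation: a minimal Python example of the kind of export Tom describes, converting data from a proprietary MATLAB .mat file into an open CSV file. It assumes the .mat file contains a simple two-dimensional numeric array saved in a pre-v7.3 format that scipy can read; the file name, variable name, and column labels are hypothetical.]

    # Export data from a MATLAB .mat file to an open, interoperable CSV file.
    # Assumes the .mat file holds a simple N x 3 numeric array named "results"
    # (file name, variable name, and column labels here are hypothetical).
    from scipy.io import loadmat
    import pandas as pd

    mat = loadmat("experiment1_data.mat")    # dict mapping variable names to arrays
    results = mat["results"]                 # e.g. an N x 3 numeric matrix

    df = pd.DataFrame(results, columns=["participant", "condition", "rt_ms"])
    df.to_csv("experiment1_data.csv", index=False)   # plain text, readable by any tool

[The resulting CSV can then be opened in R, SPSS, Excel, or, if hosted in a public repository, read directly from a raw file URL with pandas.read_csv, which also sidesteps the "I can't download a CSV from GitHub" problem mentioned below.]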
 
 

Benjamin James Kuper-Smith: Yeah, and I'm just curious whether you have any advice, because I guess my problem is a little bit that I work in a quite interdisciplinary area, so with some of the reviews, some people come from, like, social psychology and don't know how to program and have never used GitHub, and some people are, you know, 
 
 

almost hardcore computational neuroscientists. So, I mean, I've literally got review comments where someone said, you know, we had already uploaded a CSV so they could access it, and then they said, oh, I don't know how to download a CSV from GitHub, I couldn't do it. And it feels a little bit like one of those situations where you obviously can't please everyone, but no matter what you do, you're going to get criticized from one direction or the other. 
 
 

Do you have any advice, from experience, on how people in this kind of situation should do it? 
 
 

Tom Hardwicke: Um, I [00:52:00] mean, there are obviously particular tools that you can use, and if you're in a position where you can learn new tools, particularly if you're at the beginning of your career and you still have some kind of choice in the matter, then any kind of open tool is generally going to be better for this kind of thing than a closed tool. So R would be better than MATLAB, for example, because it's much more accessible to many more people. But, in general, to some extent you're probably stuck with what you already have training on if you don't have the time to learn something new. And in that case, I think the best you can do is to provide very good documentation. 
 
 

And currently, at least in the meta-research studies I've done, you find that although people may be willing to share data and code, it's often not very well documented. And often the problem there is that they just don't know what information someone else would need to rerun that code or understand that data, because they've never been in that [00:53:00] person's shoes. They've always been in the shoes of the person writing the code in the first place, not the person who's trying to interpret it afterwards. So one thing I've been really keen on is trying to incorporate good practices for writing reproducible papers into the student curriculum. And we did this a little bit with Mike Frank at Stanford, with one of the graduate courses there. 
 
 

And one of the exercises we'd get students to do is to try and reproduce the analysis in a published paper, and to experience it from that perspective. They then appreciate how difficult it is. And then when they write their own analyses, they can put themselves in those shoes and be like, oh, okay, 
 
 

I need to, you know: this is how to set up the working directory, this is how to provide a good codebook so they can figure out what my data means, et cetera. So a lot of it is about taking on that other perspective. And one thing we're advising through Psychological Science is that before you submit your paper to the journal, have one other [00:54:00] person on your team, or maybe even an independent person, try and reproduce the results. So just give it to a colleague and say, can you reproduce this? And I think in the vast majority of cases, they're going to find all kinds of issues, probably minor issues, 
 
 

and they'll send you this list and say, look, I couldn't figure out what you meant here. You know, it's just like when you get feedback on your written manuscript, and people say, I have no idea what you're talking about in this sentence, and you're like, nope, it's perfectly clear, I wrote it, I know what I'm doing. But an outsider isn't in your head, and so there are all these assumptions that you made that they don't have, and it's the same thing for reproducibility. So if you just have one other person who didn't write the code and didn't create the original data file take a look at it, that will often expose lots of the problems and help you to fix those, yeah. 
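[Illustrative sketch, not from the conversation: a minimal Python example of the kind of check Tom describes, where a colleague runs one script from the project root and confirms that the shared data, the codebook, and a key reported result all line up. File names, column names, and the "reported" value are hypothetical.]

    # A colleague (or future you) should be able to run this single file from the
    # project root and confirm the shared materials are internally consistent.
    # File names, column names, and the reported value below are hypothetical.
    from pathlib import Path
    import pandas as pd

    ROOT = Path(__file__).resolve().parent   # relative paths, no hard-coded directories

    data = pd.read_csv(ROOT / "data" / "experiment1_data.csv")
    codebook = pd.read_csv(ROOT / "data" / "codebook.csv")   # one row per variable: name, description, units

    # Every column in the data file should be documented in the codebook.
    undocumented = set(data.columns) - set(codebook["variable"])
    assert not undocumented, f"Columns missing from codebook: {undocumented}"

    # Re-compute a key statistic and compare it against the value reported in the manuscript.
    reported_mean_rt = 532.4                 # hypothetical value copied from the paper
    recomputed = data["rt_ms"].mean()
    assert abs(recomputed - reported_mean_rt) < 0.05, (
        f"Reported mean RT {reported_mean_rt} does not match recomputed value {recomputed:.1f}")
    print("Reproducibility check passed.")

[Using paths relative to the script, rather than a hard-coded working directory, is one way to avoid the working-directory problem mentioned above.]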
 
 

Benjamin James Kuper-Smith: At the end of each interview, I ask three recurring questions, the same three recurring questions. The first question is: what's a book or paper you think more people should read? [00:55:00] This can be famous or completely unknown, new or old. Just something you think more people should read. 
 
 

Tom Hardwicke: Wait, I had a really nice example, and I told a friend of mine, and she told me it was too pretentious, so I've changed my suggestion. 
 
 

Benjamin James Kuper-Smith: I want to get... too pretentious scientifically, or...? 
 
 

Tom Hardwicke: I don't know, it was quite an old thing. Anyway, I'm 
 
 

gonna give you a better one. So, I'm gonna recommend a book called Science Fictions, which is by Stuart Ritchie, and I think it's probably the best introduction to this topic of meta-research and problems with the quality of scientific research. It's a very accessible book; it's written with a lot of interesting case studies from lots of different disciplines, so some from psychology, some from medicine, some from other places. And it's really well written, it's very engaging, and, yeah, a great [00:56:00] introduction to this topic. I also listened to it as an audiobook, so if you do that, you get the advantage of Stuart's very soothing Scottish accent. So I'd recommend that. Yeah, that's my recommendation for a book. 
 
 

Benjamin James Kuper-Smith: And if you want to hear more of that soothing accent, I did an entire episode with Stuart, in large part about that book. So yeah, it's funny, I keep a brief list of what people have said before, and I'm always proud if I've read something that people recommend. 
 
 

I think this is the second time now it's happened. The second question is: something you wish you'd learned sooner. This can be from your private or work life, whatever you want; just something where, if you'd learned it sooner, your life might have been, you know, a little bit nicer. 
 
 

Tom Hardwicke: So, one thing I really wish I'd learned sooner is that I really, really like podcasts. I didn't realize this until the pandemic [00:57:00] started, and I was, you know, taking long walks to get out of the house and get some exercise, and that was really kind of boring. And then I think I started with audiobooks, actually, and then transitioned into podcasts. But now I listen to them, like, almost all the time. They're just, they're like magic. You'll be doing all these boring chores around the house, or shopping for groceries, or whatever, and then you've got, like, all of this interesting stuff going on in your head at the same time. What an incredible invention. I can't imagine, looking back, what I was doing with my life when I wasn't listening to podcasts all the time. So that, for me, is a huge thing that I wish I'd learned sooner in my life. 
 
 

Benjamin James Kuper-Smith: You can get rid of the terror of being alone with your own thoughts. 
 
 

Tom Hardwicke: Oh yeah. 
 
 

Benjamin James Kuper-Smith: Yeah, I don't know. Yeah, sometimes I wonder whether to actually have you found that, um, sometimes I [00:58:00] feel like if I listen. too much to podcasts. It, you know, you, you need those moments of just like being alone with your thoughts and not thinking about anything and just doing something mundane for half an hour. 
 
 

Uh, I don't know whether you've reached that level of listening to podcasts, but I've actually started listening to podcasts a lot less, because I find sometimes it's just a lot of talking going on. Maybe I'm just not used to that. I don't know. 
 
 

Tom Hardwicke: Heh heh. 
 
 

Benjamin James Kuper-Smith: Um, yeah. 
 
 

Tom Hardwicke: No, I haven't had that problem. I'm obsessed with them. And I don't need to listen to my own internal monologue; I much prefer listening to other people saying interesting things than to myself. 
 
 

Benjamin James Kuper-Smith: Maybe that's why I started a podcast, so I can kind of have podcasts, but also, I mean, I don't listen to my own ones, that would be slightly psychopathic. Um, do you have any recommendations? Any you've been listening to a lot recently that you think people should check out? 
 
 

Tom Hardwicke: Well, I hope you don't take [00:59:00] it personally, but I don't listen to a lot of science podcasts.  
 
 

Benjamin James Kuper-Smith: Not at all. 
 
 

Tom Hardwicke: You know, I'm doing that every day, so I think I need a bit of a break from it. I listen to lots of news podcasts, but that's probably quite boring. I have a favourite comedy podcast called Three Bean Salad, which, you know, probably just depends on what kind of humour you're into, but it's basically just three blokes having a chat. And I really like this podcast called Song Exploder, where they take a particular song and invite the artist who created it to come on the show and break down exactly what their thought process was as they were creating that song. And they often interleave it with different parts of the song, so they'll start off with, like, you know, just playing the bass line, and then they'll talk you through why they decided to do it that way, and then they'll add in the drums, and slowly, over the episode, the song builds up and is created into its final version, which you hear right at the end. 
 
 

[01:00:00] Um, so that's one of my favourite podcasts. 
 
 

I love it. 
 
 

Benjamin James Kuper-Smith: Okay, I might check them out then. Although, as I said, I don't listen to that many anymore. I guess I do enough of them right now; maybe it's like you with science. Um, anyway, the final question is advice for PhD students and postdocs. And actually, I thought I'd just slip in another kind of related question here briefly. 
 
 

Uh, because a few weeks ago I was talking to someone whose girlfriend, I think, was just finishing a PhD and wanted to do something like meta-science. And he said she was really struggling to find positions she could apply for, and that she'd talked to postdocs in the field and they were also struggling to find positions. 
 
 

Um, I was really surprised by that, because I always thought this is a really booming field where there must be lots of opportunities. So I'm just curious, given that we're talking about, you know, PhD students and postdocs: are there many positions out there? And if so, do you know how to find them, or is it actually still a very small field? 
 
 

Tom Hardwicke: It is a very [01:01:00] small field. There are not a lot of job opportunities, and there are especially not a lot of job opportunities after the postdoc stage. So that's something, you know, I'm facing right now: having done several postdocs in meta-research, I'm now looking around and, oh, there aren't any faculty positions in meta-research. 
 
 

So where do you go? So if you do want to do a postdoc in meta-research, I think you need to bear that in mind. And if anyone's generally interested in careers in meta-research, there are a couple of panel discussions, which you can probably find on YouTube, by AIMOS, the Association for Interdisciplinary Meta-Research and Open Science. And I was on one of those panel discussions, so you'll get to hear from people who aren't me as well, which is definitely a good thing, and hear all of these different perspectives on careers in meta-research. [01:02:00] That's a really good place to check out. In terms of just finding individual opportunities, I don't know of any kind of central place for that. 
 
 

Um, I often hear about them on social media, Twitter or Bluesky, and if I see them, I usually retweet them. So you can probably find some in my feed. I think METRICS is actually currently advertising a postdoc fellowship, which would be the same thing I did many years ago. Although the deadline is very soon. 
 
 

So maybe, possibly, it will already be past by the time the podcast comes out; we need to rush this podcast out so people can hear about it. But they normally come out once a year, so keep your eye out for those. Um, yeah, I think it's a great thing to do. It's such an exciting area. It's important to be aware of how limited the opportunities are, and there are various strategies you can adopt. Like, you could try to keep doing [01:03:00] some research in whatever, you know, other domain you come from. So for me, for example, I probably should have kept doing a lot of research in memory as well as my meta-research work, and then I would have had a much more balanced CV, if you like. 
 
 

Um, I think that's probably a good strategy to adopt at the moment if you're particularly, you know, concerned about that, which most people are, right? So, um, just something to be aware of, yeah. 
 
 

Benjamin James Kuper-Smith: Okay. Uh, so yeah, then to my kind of standard question, uh, you know, any advice for, I don't know, maybe that was your advice, but any advice for PhD students or postdocs, or people kind of at that transition? 
 
 

Tom Hardwicke: Yeah, so, I generally find it quite difficult to give advice without knowing, you know, the individual situation, 
 
 

um, of the person I'm speaking to. But I did kind of reflect on this question and think about what is one thing I've learned over that transition from PhD to postdoc that at least I've found useful, and, you know, [01:04:00] people can take what they want from this. And it's that a big part of moving from being a PhD student to a postdoc is that there's this big emphasis on becoming more independent. Having said that, I think we all need advice and support throughout our careers, and definitely still at that stage. There are so many unwritten rules of academia to learn about, and I think it just makes you a better scientist if you have good critical feedback on your work. But the more independent you become, the more difficult it can actually be to get those things, and to some extent the more difficult it feels to ask other people for those things, because you kind of think, oh, maybe I should probably know that already, I don't want to ask questions about this stuff. So my advice is that, as you're making that transition to greater independence, perhaps ironically, you need to start becoming much more proactive about finding support, because you can't just [01:05:00] rely on your PhD supervisor anymore, and your postdoctoral advisor might be expecting you to be quite independent. So find other mentors that can give you advice: career advice, but also feedback on your work, your research, your ideas. Build a network of people and get many different perspectives, because everyone has different experiences and it's good to learn different things from different people. And also rethink mentorship a bit. The traditional idea of mentorship is that it's top-down and unidirectional: you ask someone more experienced than you to provide you with advice. But actually, I have some really good mentoring relationships where it's a two-way discussion, and they're not necessarily more experienced than me; it might be another postdoc 
 
 

who I exchange draft papers with, and, you know, they maybe don't even work in my area, but they can provide advice on the quality of the writing, et cetera. And also from students: so [01:06:00] the traditional idea is that if you're a postdoc you would be mentoring students, but students can mentor you as well, if you listen to them and think about how they are experiencing your supervision, or the quality of your writing, your ideas, et cetera. 
 
 

You can have really good discussions with them as well. So think about mentorship in a kind of multi-directional way, and seek out those people who will provide you honest, critical feedback on your work. Because the more formally independent you become in your career, I think the more difficult it is for that kind of thing to happen spontaneously, so you have to be proactive about it. That's my advice. 
 
 

Benjamin James Kuper-Smith: Okay. Well, uh, I think that's all for me. Unless you have anything else you want to add, I'll just say: thank you very much. 
 
 

Tom Hardwicke: Yeah, this was awesome. Thanks so much.