Brian Nosek
Podcast Appearances
We can't say with any confidence where it's most prominent. We can only say that the incentives for doing it are everywhere. And some of them gain more attention because, for example, Francesca's findings are interesting. They're interesting to everyone. So of course, they're going to get some attention to that. Whereas the anesthesiologist's findings are not interesting. They put people to sleep.
Until they kill you. Well, yeah, I guess they put you to sleep, and then they kill you.
Yeah, I would say that there is a lot of attention to social psychology for two reasons. One is that it has public engagement interest value. That's a way of saying that people care about your findings, at least in the sense of, oh, that's interesting to learn, right? But the other reason is that social psychology has bothered to look.
And I think social psychology became a hotbed for this because the actual challenges in the social system of science that need to be addressed are social psychological problems. What do you mean by that? I mean like the reward system, how it is that people might rationalize and use motivated reasoning to come to findings that are less credible.
A lot of these problems are ones that social psychologists spend every day thinking about.
The case that people make to say that this is a bigger problem now is that the competition for attention, jobs, advancement is very high, perhaps greater than it's ever been.
Yeah. So there are many more people and many fewer positions, which is an obvious challenge for a competitive marketplace. And there are now pathways for public attention that have even bigger impact. Academics, by and large, didn't think about ways to get rich. They looked for ways to have time to think about the problems that they want to think about. But now they have pathways to get rich.
Fraud has existed since science has existed. And that's primarily because humans are doing the science. And people come with ideas, beliefs, motivations, reasons that they're doing the research that they do. And in some cases, people are so motivated to advance an idea or themselves that they are willing to change the evidence fraudulently to advance that idea or themselves.
Our funders include NIH, NSF, NASA, and DARPA as federal sources, and then a variety of private sources such as the John Templeton Foundation, Arnold Foundation, and many others. And that diverse group of funders – and it's quite diverse – I think share the recognition that the substantive things that they are trying to solve won't be solved very effectively if the work itself is not done credibly.
There are specific cases where a finding gets translated into public policy or into some type of activity that then ends up actually damaging people, lives, treatments, solutions. One of the most prominent examples is the Wakefield scandal relating to the development of autism and the notion that vaccines might contribute to it.
And that has had an incredibly corrosive impact on public health, on people's beliefs about the sources of autism, the impacts of vaccines, et cetera. And that is very costly for the world. There's also a local cost in academic research, which is just a ton of waste.
So even if it doesn't have public downstream consequences, if a false idea is in the literature and other people are trying to build on it, it's just waste, waste, waste, waste.
Yeah, I have always had an interest in how to do good science as a principled matter. And in doing that, we in the lab would work on developing tools and resources to be more transparent with our work, to try to be more rigorous with our work, to try to do higher-powered research, more sensitive research designs.
And so I wrote grant applications to say, can we make a repository where people can share their data? This is like 2007. And they would get polarized reviews where some reviewers would say, this would change everything. It'd be so useful to be more transparent with our work. And others saying, but researchers don't like sharing their data. Why would we do that?
And why would researchers not want to share their data? Yeah, it's based on the academic reward system. Publication is the currency of advancement. I need publications to have a career, to advance my career, to get promoted. And so for the work that I do that leads to publication, I have a very strong sense of, oh, my gosh, if others now have control of this, my ideas, my data, my designs, my solutions, then I will disadvantage my career.
Yeah. And here's the irony: almost every academic would say, of course, science is supposed to be transparent. Of course, we're doing research for the public good. Of course, this is all to be shared. But come on, Pollyanna, we live in a world, right? The reality here is that there is a reward system and I have to have a career in order to do that research.
And so, yes, we can talk all about those ideals of transparency and sharing and rigor, reproducibility. But if they're not part of the reward system, you're asking me to either behave by my ideals and not have a career or have a career and sacrifice some of those ideals.
And at the end of that, in 2015, we published the findings, which was a 270-co-author paper of 100 replications of findings from three different journals in psychology. We got a little less than half of the findings successfully replicated.
Little less than half of the findings successfully replicated.
A year and a half ago, we published the results of the reproducibility project in cancer biology doing the same kind of process and found very similar results. Less than half of the findings in preclinical cancer research replicated successfully when we tried to do so.
That doesn't mean that the original finding is necessarily wrong. We could have screwed something up in the replication. Successfully replicating doesn't mean the interpretation is right. It could be that both the findings have a confound in them, but we just are able to repeat the confound.
Fraud is the ultimate corrosive element of the system of science because, as much as transparency provides some replacement for trust, you can't be transparent about everything. So the ideal model in scholarship is that you can see how it is they generated their evidence, how they interpreted their evidence, what the evidence actually is, and then independent people can interrogate that.
And so to the extent that fraud intrudes and the evidence isn't actual evidence, then the whole edifice of that scholarly debate and tangling with ideas falls apart, because you're actually tangling with ideas that aren't based on anything. How familiar are you with the Joachim Boldt situation? I'm not recalling that name, but I may know the case if you describe it.
We feel like, when we're in cultures, there is no way for any of us to change the culture. It's a culture. My God, how could we change it? But we also recognize that cultures are created by the people that comprise them. And the notion that we collectively can actually do something to shift the research culture, I think, has spread.
And that spreading has actually accelerated the change of the research culture for the better.
Yeah, it's based on the academic reward system. Publication is the currency of advancement. I need publications to have a career, to advance my career, to get promoted. And so for the work that I do that leads to publication, I have a very strong sense of, oh my gosh, if others now have control of this, my ideas, my data, my designs, my solutions, then I will disadvantage my career.
I asked Nosek how he thinks this culture can be changed. So, for example, we have to make it easy for researchers to be more transparent with their work. If it's really hard to share your data, then adding on that extra work is going to slow down my progress. We have to make it normative. People have to be able to see that others in their community are doing this. They're being more transparent.
They're being more rigorous so that instead of us saying, oh, that's great ideals and nobody does it, you say, oh, there's somebody over there that's doing it. Oh, maybe I could do it too. We have to deal with the incentives. Is it actually relevant for my advancement in my career to be transparent, to be rigorous, to be reproducible? And then we have to address the policy framework.
If it's not embedded in how it is that funders decide who to fund, institutions decide who to hire, journals decide what to publish, then it's not going to be internally and completely embedded in the system.
Yeah, so the idea is you register your designs and you've made that commitment in advance. And then as you're carrying out the research, if things change along the way, which happens all the time, you can update that registration. You can say, here's what's changing. We didn't anticipate that going into this community was going to be so hard and here's how we had to adapt. That's fine.
You should be able to change. You just have to be transparent about those changes so that the reader can evaluate. And then those data are timestamped together? Exactly. Yeah. You put your data and your materials. If you did a survey, you add the surveys. If you did behavioral tasks, you can add those.
So all of that stuff can be attached then to the registration so that you have a more comprehensive record of what it is you did.
It makes fraud more inconvenient. And that's actually a reasonable intervention. I don't think any intervention that we could design could prevent fraud in a way that doesn't stifle actual legitimate research. We just want to make visible all the things that legitimate researchers are doing so that someone that doesn't want to do that extra work has a harder time.
And eventually, if everything is exposed, then the person who would be motivated to do fraud might say, well, it's just as easy to do the research the real way. So I guess I'll do that.
So in the standard publishing model, I do all of my research. I get my findings. I write it up in a paper and I send it to the journal. In that model, the reward system is about the findings. I need to get those findings to be as positive, novel, and tidy as I can so that you, the reviewer, say, OK, OK, you can publish it.
That's dysfunctional and it leads to all of those practices that might lead the claims to be more exaggerated than the evidence.
The registered report model says to the journal, you are going to submit, Brian, the methodology that you're thinking about doing and why you're asking that question, and the background research supporting that question being important and that methodology being an effective one. We'll review that. We don't know what the results are. You don't know what the results are.
But we're going to review based on, do you have an important question?
Exactly. And the key part is that the reward, me getting that publication, is based on you agreeing that I'm asking an important question and I've designed an effective method to test it. It's no longer about the results. None of us know what the results are.
Yeah, so the commitment that the journal makes is we're going to publish it regardless of outcome, and the authors are making that commitment too. We're going to carry this out as we said we would, and we'll report what happens.
Now, an interesting thing happens in the change of the culture here in evaluating research, because you said, well, if it's an uninteresting finding, do we still have to publish it? It turns out that when you have to make a decision of whether to publish or not before knowing what the results are, the orientation that the reviewers bring, that the authors bring, is: do we need to know the answer to this? Regardless of what happens, do we need to know the answer? Is the question important, in other words? Exactly. Is the question important enough that we need evidence, regardless of what the evidence is? And it dramatically shifts what ends up being published.
So in the early evidence with registered reports, more than half of the hypotheses that are proposed end up not being supported in the final paper. In the standard literature, in comparable types of domains, more than 95% of the hypotheses are supported in the paper. You wonder, in the standard literature, if we're always right, why do we bother doing the research, right?
Our hypotheses are always right. And of course, it's laughable because we know that's not what's actually happening. We know that all that failed stuff is getting left out and we're not seeing it. And the published literature is an exaggeration of what the real evidence is.
I think there is broad buy-in on the need to change, and it has already hit the mainstream of many of the changes that we promote – sharing data, materials, code, pre-registering research, reporting all outcomes. So we're in the scaling phase for those activities, and what I am optimistic about is that we –
There is this meta-science community that is interrogating whether these solutions are actually having the desired impact. And so the most exciting part of the movement, as I'm looking to the future, is this dialogue between activism and reform. We can do these things. Let's make these changes. And meta-science and evaluation: is this working? Did it do what you said it was going to do?
And et cetera. And I hope that loop will stay tight, because that, I think, will make for a very healthy discipline that is constantly skeptical of itself and constantly looking to do better.
There really is accelerating movement in the sense that some of the base principles – we need to be more transparent, we need to improve data sharing, we need to facilitate the processes of self-correction – are not just head nods, yeah, that's an important thing, but have really moved into, yeah, how are we going to do that?
And so I guess that's been the theme of 2024 is how can we help people do it well?
One of the more exciting things that we've been working on is a new initiative that we're calling Lifecycle Journal. And the basic idea is to reimagine scholarly publishing without the original constraints of paper. A lot of how the peer review process and publishing occurs today was done because of the limits of paper.
But in a world where we can actually communicate digitally, there's no reason that we need to wait till the research is done to provide some evaluation. There's no reason to consider it final when it could be easily revised and updated. There's no reason to think of review as a single set of activities done by three people who judge the entire thing.
And so we will have a full marketplace of evaluation services that are each evaluating the research in different ways. It'll happen across the research lifecycle from planning through completion. And researchers will always be able to update and revise when errors or corrections are needed.
My whole life is about trying to promote transparent research practices, greater openness, trying to improve rigor and reproducibility. I am just as vulnerable to error as anybody else. And so one of the real lessons, I think, is that without transparency, these errors will go unexposed.
It would have been very hard for the critics to identify that we had screwed this up without being able to access the portions of the materials that we were able to make public. And as people are engaged with critique and pursuing transparency – and transparency is becoming more normal – we might, for a while, see an ironic effect, which is that transparency seems to be associated with poorer research because more errors are identified. And that ought to happen, because errors are occurring. Without transparency, you can't possibly catch them. But what might emerge over time, as our verification processes improve, as we have a sense of accountability to our transparency, is that transparency may decrease error over time, but not the need to check. And that's the key.
This is a real challenge that we wrestle with and have wrestled with since the origins of the center: how do we promote this culture of critique and self-criticism about our field and simultaneously have that be understood as the strength of research rather than its weakness?
One of the phrases that I've liked to use in this is that the reason to trust science is because it doesn't trust itself. Part of what makes science great as a social system is its constant self-scrutiny and willingness to try to find and expose its errors, so that the evidence that comes out at the end is the most robust, reliable, valid evidence it can be.
And that continuous process is the best process in the world that we've ever invented for knowledge production. We can do better. I think our mistake in some prior efforts of promoting science is to appeal to authority, saying you should trust science because scientists know what they're doing. I don't think that's the way to gain trust in science because anyone can make that claim.
Appeals to authority are very weak arguments. I think our opportunity as a field to address the skepticism of institutions generally and science specifically is to show our work: by being transparent, by allowing the criticism to occur, by in fact encouraging and promoting critical engagement with our evidence.
That is the playing field I'd much rather be on with people who are the so-called enemies of science than in competing appeals to authority. Because if they need to wrestle with the evidence and an observer says, wow, one group is totally avoiding the evidence and the other group is actually showing their work, I think people will know who to trust. That's easy to say. It's very hard to do.
These are hard problems.
We are a carrot-based organization because we don't have sticks. I mean, would you like me to loan you a stick just once in a while? Yeah, that would be fun.