IRBs

When should you enforce standards by requiring permission, and when should you enforce them by punishment?

I’m glad aircraft have to be pre-approved by the FAA before I can fly in them, rather than letting Boeing release whatever planes they want and recalling any that end up not working. And on the other hand, we don’t expect the FDA to pre-approve every new culinary delight the food industry graces us with.

But of all society’s pre-approval vs. post-monitoring decisions, the most consequential to science was the creation of the Institutional Review Board.

An Institutional what?

An IRB is a group of people that decides whether a research project is ethical. In the U.S. and most other countries, it’s practically impossible to do research involving humans unless an IRB (or equivalent body) pre-approves your research plan.

“Involving humans” really is as general as it sounds: from surgeries to surveys, if any information about any human is being collected, an IRB has to approve it first.

IRBs are a close 2nd to paywalled journals in the rankings of things scientists complain about most. This isn’t because scientists want to do unethical research. It’s because IRBs are slow and frustrating.

How bad can it be?

Anecdotes abound. Scott Alexander’s IRB Nightmare is the best known, in which our hero tries for over a year and eventually fails to get permission to ask patients questions they’re already being asked. (Follow-up discussion here.) Andrew Gelman has some good ones, and you can find endless others on Reddit. The moral monsters at GiveDirectly had to wait a year when they once tried to give people free money.

A professor I know spent 3 years (years!) getting approval to try to wake a coma patient by applying ultrasound at already-FDA-approved intensities to the patient’s head. It worked, by the way, though the real miracle may be that the patient survived the 3-year wait.

There’s also no shortage of legal and moral critiques of IRBs.

But the worst stories always get the most attention. And every regulation has haters. How bad are IRBs for science really, overall?

Quite bad, is the answer.

In the most comprehensive survey of 11,167 researchers with active federal grants in 2018, ~66% of researchers reported that petitioning IRBs was a “substantial” part of their workload. Unfortunately the study didn’t ask for specific time estimates. But researchers also reported spending ~44% of their research time on administrative tasks rather than active research, including petitioning IRBs. And a substantial amount of 66% of 44% is still substantial.
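To put that in rough perspective, here is my own back-of-envelope arithmetic, not a figure from the survey: the survey doesn’t say what share of that administrative time goes to the IRB specifically, so the IRB share below is a made-up illustrative parameter.

```python
# Back-of-envelope: how much total research time might go to IRB paperwork.
# The first two numbers come from the 2018 survey cited above; the third is
# a pure assumption for illustration.
frac_reporting_substantial = 0.66  # researchers calling IRB petitions a "substantial" burden
frac_time_on_admin = 0.44          # share of research time spent on admin, not active research
assumed_irb_share_of_admin = 0.25  # ASSUMPTION: IRB work is a quarter of that admin load

frac_total_time_on_irb = (frac_reporting_substantial
                          * frac_time_on_admin
                          * assumed_irb_share_of_admin)
print(f"~{frac_total_time_on_irb:.0%} of all research time")  # ~7% under these assumptions
```

Even with that conservative guess for the IRB share, it works out to something like 7% of all research time among these grantees, which, multiplied across thousands of labs, is a lot of science not getting done.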

More directly, ~33% of researchers said improving IRBs was a high priority for reducing unnecessary administrative burden.

One can see why. At Berkeley, just getting permission to email out a social science survey takes at least 3 to 4 weeks, and “expedited” review takes 6 to 8 weeks. The University of Illinois at Chicago reports a median of 70 days for full IRB review and 33 days for expedited review. The University of Arkansas for Medical Sciences claims to be quicker, reporting an impressive median of 30 days for full review and 5 days to confirm “exempt” studies. Less impressively, a VA hospital reports a median of 131 days for full review and 82 days for exempt review. Weirdly, a regional IRB in the UK reports taking 30 days to do expedited reviews but only 25 days to do full reviews.

The financial costs of IRBs aren’t as outrageous, though of course they’re not free. A 2003 estimate put the cost at between ~$70,000/yr and ~$770,000/yr depending on institution size; a 2007 estimate put the range at ~$100k/yr to ~$4.5M/yr.

More worrying is how wildly inconsistent IRB rulings are between institutions. This inconsistency means researchers at institutions with more permissive IRBs often get to publish first, an unfairness that has not gone unnoticed nor unexploited by researchers. IRBs also often mandate changes that drastically increase the costs of studies, like requiring researchers to hire a doctor to be present for trivial medical procedures. And IRBs make it challenging for scientists at different institutions to collaborate, since for medical device studies, for example, if any one institution’s IRB does not approve a study, the study has to go all the way to the FDA for approval.
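A toy calculation shows why multi-site studies get hit hardest. Suppose (purely for illustration; this number is invented) each site’s IRB approves the protocol as written 95% of the time. The chance that at least one site balks, and drags the whole study into the escalation process, grows quickly with the number of sites:

```python
# Toy model: a multi-site study needs sign-off from every participating site's IRB.
# p_approve is an invented illustrative probability, not an empirical estimate.
p_approve = 0.95  # chance any single IRB approves the protocol as written

for n_sites in (2, 5, 10, 20):
    p_any_rejection = 1 - p_approve ** n_sites
    print(f"{n_sites:>2} sites: {p_any_rejection:.0%} chance at least one IRB says no")
# 2 sites: ~10%, 5 sites: ~23%, 10 sites: ~40%, 20 sites: ~64%
```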

Sadly, comprehensive data on IRB performance isn’t collected, so we don’t have good estimates for the two most important quantities: how many worthwhile studies are killed by IRBs, and how many unethical studies IRBs have stopped or fixed. We know the first quantity is nonzero. The second quantity may well be zero by the usual prohibition logic: people who want to do unethical research just do it anyway, clandestinely or overseas where it’s legal.

Of course scientists don’t just sit idle while waiting for IRB approval. They can do other things while they wait. And the data suggest that patient recruitment and fundraising consume more researcher time than interacting with the IRB.

But the fundamental problem with IRBs is not the time they take. It’s that the more novel an experiment, the less likely an IRB is to allow it, which is precisely the opposite of what we want to incentivize in science. Novelty is not inherently unethical. It’s only unethical if the benefits don’t outweigh the risks. And when it comes to trying truly new, high-risk-high-reward technologies in the clinic, IRBs are by far the biggest bottleneck.

How did we get here?

There’s no question that the creation of IRBs was well-meaning.

For a complete history you can read this book. But in brief: the 20th century was full of awful, unethical science. The worst crimes were (and probably still are) committed by totalitarian regimes like the Nazis, but the U.S. government was far from innocent.

In particular, in 1932 the U.S. Public Health Service (PHS) started the infamous Tuskegee Syphilis Study (the CDC took it over decades later). The Tuskegee study wasn’t remotely as bad as other studies the PHS ran, with which it is often conflated. But it was certainly unethical, and it continued for 40 years until PHS employee Peter Buxtun blew the whistle on it in 1972.

National outrage at Tuskegee led Congress to pass the National Research Act in 1974, which said, in effect, “Make sure nothing like Tuskegee happens again.” There followed a series of commissions and reports to figure out how to accomplish this. And the reports’ main inspiration was the NIH.

Why the NIH? Flashback to the 1950s when, unrelated to Tuskegee, doctors at the NIH had started trying a newfangled idea: doing research on healthy people instead of just sick people. This was great for science, but the doctors doing the research were nervous about bad PR and lawsuits, since they were only used to treating patients and not “Normals,” as they called them. So in 1953 the NIH assembled a few senior doctors into a “Clinical Research Committee” to review (or rubber-stamp) all research on Normals. This was the first proto-IRB.

By 1962 the NIH had also started requiring signed consent forms from all research participants.1 NIH doctors hated this rule and mostly ignored it, since they thought the whole point of having a professional code of ethics was to avoid that stuff. But NIH lawyers loved it because it helped them defend lawsuits.

Then the NIH started getting sued not just for their own research but for research they funded at other institutions. So in 1966 the NIH mandated that their grantees all use proto-IRBs and consent forms too. This especially bummed out psychologists, since it limited the use of deception in research, which at least for me would have been the main perk of being a psychologist. (The Milgram experiment was done in 1961, if you’re wondering.)

So, returning to the Tuskegee side of the story: in 1974 when the government was looking for ways to comply with the National Research Act, they saw that the NIH had been doing ethical-looking things for decades, and more-or-less copy-pasted the NIH’s system into official regulations with the memorable name 45 CFR part 46. These regulations were so good that by 1991, 15 other agencies2 had copy-pasted HHS’s regulations (HHS being the NIH’s parent department) into their own regulations. Ever since, everyone has called these regulations the Common Rule, since they’re common to nearly every government agency. And the centerpiece of the Common Rule is the requirement that all research be approved by an IRB.3

Thus the modern research regulatory environment was born. (At least in the U.S., though most of the world has copied the system too.)


Reminder: How the U.S. government decides what you’re allowed to do

  • Congress makes laws, a.k.a. statutes, a.k.a. acts, a.k.a. legislation.
  • Government agencies (FDA, DEA, EPA, etc.) make regulations, a.k.a. rules.
  • Regulations aren’t laws, but they have the “force of law.”
  • The courts get to decide whether laws and/or regulations contradict each other.
    • The Constitution takes precedence over laws, and laws take precedence over regulations.
  • There are also state and local versions of all the above.

Does this affect my research?

The Common Rule says you have to use an IRB if your research is supported in any way by HHS or another Common Rule agency. The Office for Human Research Protections in HHS is the group that actually enforces all this.

“Well then,” you might say, “I’ll just get private funding for all my research, and then I don’t have to use an IRB.” That’s absolutely true, and a lot of research is done that way, like political polls (though not the U.S. Census, because that’s funded by the Department of Commerce, a Common Rule agency).

Except that IRBs are also required by basically any agency if you ever want to submit the results of your research to them. For example, if you’re a biotech company, and you’d like to get a drug approved by the FDA, which you have to do in order to market it, all the research you do to prove the drug is safe and effective has to have been IRB-approved.

And even if you don’t care about the FDA, if you want to publish your research in any serious journal you’ll find they require IRB approval, too. Oh, and did you want money to do your research? Because many grantmakers have IRBs — apparently just for PR reasons, since I can’t find a regulation requiring this.

And even if you just want to throw your research on your blog, most institutions mandate that all work done with their money or equipment go through their IRB. Don’t need their equipment? Most federally funded datasets require you to have an IRB to use them, too.

Oh, and in case you were hoping you could at least do some self-experimentation without IRB approval, you can’t at most institutions, even if it wins you a Nobel Prize. You’ll have to do it in your free time.

Can we fix this?

Obviously the IRB mandate could be removed by regulation or legislation, but I’m not holding my breath.

Could we make a new IRB that works better and convince researchers to use it instead? That’s already been done. Private, for-profit IRBs exist, costing a few thousand dollars for review with a week-or-so turnaround. These private IRBs are how startups and other small organizations get the required IRB clearance described in the previous section. But most universities don’t allow researchers to use private IRBs in lieu of the university’s IRB, so private IRBs can’t save us.

What about going overseas? Most of the rest of the world uses an IRB-like system, but not everywhere. There are parts of the world where you can do basically whatever research you want, for better or worse. This isn’t a full solution for the same reason private IRBs aren’t. But some researchers have developed a clever strategy that could be scaled up: they do preliminary, IRB-overhead-free studies overseas and then re-run the successful ones in the U.S. There may be an outsourcing solution here.

Could we at least help researchers navigate the IRB system better? Yes. Experienced researchers learn lots of IRB tricks that aren’t taught to students. For example, you can get a very generic IRB approval – called an “umbrella” approval – and smuggle as much research under its remit as possible. Cover yourself with enough umbrellas, and you never have to go to the IRB again. Training scientists to navigate a broken system is a depressing solution, but it’s practical.

None of these are great solutions though. If you have a better idea, please get in touch.


Have feedback? Find a mistake? Please let me know!


  1. This was mainly the FDA’s influence. The FDA also caused a massive increase in the use of prisoners in research by requiring increased safety testing in Normals before approving treatments. Prisons were the only place the NIH could find enough bodies to keep up with the FDA’s demands. ↩︎

  2. The CIA didn’t want to join in, probably because they were having fun doing stuff like MKUltra, but eventually Reagan forced them to in 1981, along with telling them to stop assassinating people. ↩︎

  3. There’s more to the Common Rule than just IRBs. For instance, it also mandates that research participants give informed consent. And there’s more to IRBs than just the Common Rule, since agencies can have their own additional regulations. The FDA, for example, has lots of extra rules about IRBs for clinical trial research. ↩︎