If we knew what we were doing, it wouldn’t be called research, would it? — Albert Einstein
One of the main reasons research oversight is necessary is that it is not always possible for everyone involved in research to anticipate or appreciate the risks and harms that can, and do, occur as a consequence of that research. In most cases investigators have good intentions and great hearts, but let's face it: it's not easy to anticipate every potential risk. I don't have enough fingers to count the times I thought I had a great idea, only to find out later that it wasn't such a great idea.
When consulting with researchers, I like to ask them: "If you were the participant, what questions would you ask about the research before signing up? Would you be comfortable becoming a research participant yourself, or having someone in your family become one, with the study exactly as it stands at this point?"
The need for human subject research protections came to light over time, but a few studies from the past really stand out: the Nazi medical experiments (which led to the Nuremberg Trials, after someone thought it would be a great idea to try out a bunch of medical experiments on unwilling concentration camp prisoners), and the Tuskegee Study (where someone thought it was perfectly reasonable to lie to participants about treating their syphilis, thereby preventing them from actually getting treatment, all "for greater research purposes").
We could assume the worst, or we could assume the best of intentions; either way, people still got hurt. After too many experiments in which people were harmed or killed (intentionally or not), we finally caught on and thought it best to establish some basic research ethics.
The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research was created and developed a code of research ethics, issuing the "Belmont Report," which is what most of our regulations are based on today and what the U.S. Department of Health and Human Services (HHS) used to develop its Code of Federal Regulations (45 CFR 46). Since then, 16 federal agencies have also adopted these regulations, which is why we call them the "Common Rule."
The Belmont Report contains three basic ethical principles:
- Respect for Persons
- Beneficence
- Justice
Respect for Persons, in a nutshell, says that all people have the right to make decisions for themselves (i.e., they're autonomous), and that those who are unable to make decisions for themselves (kids, for example) must have extra protections. It also means people must have enough information to make those decisions. Without enough information, any of us might end up signing up for something we otherwise wouldn't have.
In other words, participants should understand what the study is about, feel no pressure to say yes or no, and, if they do join, feel free to quit at any point without consequence. This is where the potential for undue influence becomes a big deal. Let's say a boss wants to do a study on her subordinates. They may be afraid to say "no" for fear of getting on the boss's bad side.
Beneficence: This one I always confuse with "benefits," which it isn't. In research, beneficence means (1) do no harm, and (2) maximize possible benefits while minimizing possible harms. In social-behavioral research this can be hard, because it's extremely rare for an individual to get any personal benefit out of participating. This is why it's important to anticipate the various risks: so that you can outweigh them with benefits or, at minimum, do your due diligence and protect participants from harm, such as the potential for their data to be lost.
Justice: Another confusing term. Justice in the Belmont Report builds on the first two principles by looking at how we select participants; that is, who ultimately benefits from the research? The Tuskegee Study is a great example of injustice. The participants were all Black men. We call this "inequitable selection," which basically means you can't target certain groups when the benefits will be generalized to other groups. Because the researchers limited their study to Black men, it was only those men who shouldered the burdens and risks of the study while the larger population benefited. The Nazi experiments serve as another great example. Would the researchers have been willing to serve as participants in their own studies, under the same conditions as the people they selected? Had the tables been turned, it is highly likely that the researchers would not have wanted to take part in either of those studies.
In summary, research protections are not bureaucratic nonsense or stupid rules made up to keep important research from getting done. Protections are in place because, throughout history, we've grown. We've learned a lot, and in the course of making mistakes, we have found ways to avoid repeating them, with basic guiding principles.
Fortunately, not every study falls into the same bucket. Research is not black and white, and the regulations provide flexibility. By working closely with your IRB, from planning through implementation, you can be confident that you're getting what you need done in the most practical, efficient, ethical, and compliant way possible. And if that's not a big enough incentive, it also keeps you funded.