Artificial Intelligence Human Subjects Research (AI HSR) IRB Reviewer Checklist (with AI HSR and Exempt Decision Tree)

IRBs tread lightly when it comes to the oversight of AI human subject research (AI HSR). This may be due to an insufficient understanding of when AI research involves human subjects. It may also stem from a fear of committing scope creep (whose role is it to ensure responsible and ethical AI in human subjects research?). Admirably, in response, some have proposed the establishment of commercial AI Ethics Committees, while others try to fit AI ethics review into an ancillary review process. Ancillary AI ethics committees either take on the look and feel of a scientific review committee or treat the process like an IBC or SCRO committee. I argue that IRBs can (and should) fit AI HSR within their current IRB framework in many significant and meaningful ways without committing scope creep.

Admittedly, the current framework has limitations, regardless of whether the research is AI HSR or any other type. However, moving AI HSR oversight to an ancillary committee is not an efficient solution for researchers, who will still have to navigate their way through the IRB for their projects in addition to these extra bureaucratic hoops. Ancillary AI HSR committees only delay the path to approval and disincentivize compliance. Rather than build a new AI HSR IRB or ancillary review committee, we need to provide and require AI HSR education/training for IRB administration and remind the IRB of its duty to ensure that relevant experts sit on the Board when reviewing specific research.

While it may be ideal for institutions with no IRB to outsource their reviews, for institutions with a home IRB there are multiple downsides to outsourcing AI HSR oversight. Below are a few that come to mind:

1)    Cost: The study team may need to plan for additional funding if the review isn’t free (i.e., when it isn’t done in-house). Additional reviews for modifications or annual renewals may be required, which would add to that cost.

2)    Duplication of Effort: An AI Research Review Committee (AIRC) typically acts as an ancillary review to IRB review. However, many if not all of the issues reviewed would parallel IRB review, duplicating effort, time, and money.

3)    No binding regulatory power: If an AIRC (or any AI ancillary review) recommends changes to the protocol, the committee likely won’t have any regulatory “teeth”. This means that the researchers will not be required, or inclined, to comply with its “suggestions”. Additionally, these suggestions may or may not make their way to the IRB unless infrastructure is established that keeps the two committees “talking to each other”.

4)    Sustainability: A sustainable administrative process would need to be developed and maintained for the committee.

The key to AI HSR ethical review and research compliance oversight is the need to focus on the data. AI/ML depends on the model, but even more so on the data. Therefore, the IRB’s focus should be weighted more heavily toward the data used to train the model than toward the algorithm/model itself. IRBs are better suited to address data concerns than technology concerns (though the technology may require additional risk assessment by the IT department). These issues can be addressed using a quality AI HSR checklist, adequate board member training, and adding an AI and data expert to the review board. Ancillary and commercial AI HSR IRB committees are innovative and helpful in their own unique ways, but none of them address the rudimentary issue at the forefront of AI HSR oversight: we have the tools and protections in place already. We simply need to better understand and utilize them.

We have a lot of work to do! I’ve created an Artificial Intelligence Human Subjects Research (AI HSR) IRB Reviewer Checklist to get this dialogue started.

You can find this in the Creative Commons under an Attribution-NonCommercial-ShareAlike license. Please feel free to distribute, remix, adapt, and build upon the material for noncommercial purposes only (modified material must be shared under identical terms).

Artificial Intelligence Human Subjects Research IRB Reviewer Checklist (with AI HSR and Exempt Decision Tree) © 2021 by Tamiko Eto is licensed under CC BY-NC-SA 4.0. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/

What is Artificial Intelligence Human Subject Research (AI HSR)? Defining “human subject” and “generalizable knowledge” in AI HSR Projects

The central regulatory challenge IRBs face when reviewing novel technologies, specifically AI, is identifying when the use of AI in research constitutes human subject research. Taking AI as we understand it and the federal definitions of “human subject” and “research” feels like handling a shape-sorting toy, where instead of putting a square block into a square hole, we’re trying to shove a misshapen block into a toy that doesn’t even have the shape we’re holding. Before we jump to the conclusion that AI doesn’t fit the current regulatory framework, however, let’s take a look at how it does.

Defining Human Subject: First and foremost, when we think of AI, we might be thinking “complicated technology” or “algorithms”, but what we need to be thinking is simply “data”. Next, we must understand the difference between human-focused datasets and non-human-focused datasets. Identifying these differences at the beginning of a project should help IRBs and researchers identify which projects fall under their oversight, and which do not.

Human-focused datasets are just what they say: they are datasets used or created to understand humans, human behavior, or the human condition. Non-human-focused datasets, on the other hand, might still involve human data. The difference is that this type of AI research is not meant to help us understand humans, human behavior, or human conditions, and would not generally be considered AI HSR, as it usually focuses more on products and processes. This is in general alignment with the current framework but differs in that the line isn’t always clear. The reason is that, oftentimes, the datasets are intended to serve both purposes. In that case, the project should still be considered human-focused.

Take, for example, datasets collected on social media compared with patient healthcare datasets. Either could technically fall under human-focused or non-human-focused depending on the intended purpose of the data and/or the AI’s role (i.e., what the AI is intended to accomplish). If the AI is used to help us understand human behavior or health conditions, then we would call it a human-focused dataset. If the focus or role of the AI is solely to improve a platform, product, or service, then the project is likely not human subject research.

Using the current definition provided in the Revised Common Rule, we then need to identify if the project meets the federal definition of “human subjects”. In other words, does the research involve a living individual about whom the investigator obtains information through intervention or interaction, and uses, studies, or analyzes that information?

Often, IRBs are presented with applications that claim the study does not involve human subjects, or that the data is collected from humans but is not “about” them. Rather than take that claim at face value, we need to start with two questions:

1) Is it human-focused data?

2) Is the study intended to contribute generalizable knowledge?

As the vast majority of AI studies are intended to learn and model human behavior, putting these questions at the forefront is key. If the AI is intended to model human behavior, the study generally meets the first part of the federal definition of human subject.

Once we get that squared away, we want to remember the second part of the definition of human subject. As recently introduced through the Revised Common Rule, to be human subjects research, the PI must either conduct an intervention or interaction with the participant, or obtain, use, study, analyze, or generate identifiable private information. One might argue that if the data is neither private nor identifiable, it does not involve human subjects. But what we are seeing now in many AI studies is that AI depends on large datasets and on linking datasets to other datasets (both private and public), which opens up the possibility of “generating” identifiable information. We also see the extensive use of biometric data such as recorded face or voice prints, ocular scans, and even gait, which are all considered identifiable information. Taking these things into consideration will help IRBs make HSR determinations.

The next question we must ask is whether the project meets the federal definition of research. We define research as:

“a systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge.”

What most IRBs are challenged with these days is fitting algorithm development, validation, and evaluation, and their role in the larger study, within this definition. Here lies the most challenging aspect of making Human Subject Research determinations: it requires a common understanding of what constitutes “generalizable knowledge”.

For now, we, as IRB professionals, understand generalizable knowledge to be:

“information where the intended use of the research findings can be applied to situations and populations beyond the current project.”

With this definition, IRBs can determine, based on the study aims and the role of the AI in achieving those aims, whether the project is “research” per the federal definition. However, there is currently no federal definition of “generalizable knowledge”, so the determination is made inconsistently and subjectively as a result.
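To make the screening logic above concrete, here is a minimal sketch, in Python, of how the decision tree might be encoded. The function name, inputs, and messages are my own illustration, not regulatory text, and any real determination belongs with the IRB.

```python
# A minimal sketch of the AI HSR screening logic described above.
# The function name, inputs, and messages are illustrative only; the
# authoritative definitions live in 45 CFR 46, not in this code.

def screen_ai_project(human_focused_data: bool,
                      generalizable_knowledge: bool,
                      intervention_or_interaction: bool,
                      identifiable_private_info: bool) -> str:
    """Rough first-pass screen: does an AI project look like HSR?"""
    if not generalizable_knowledge:
        return "Likely not 'research' per the federal definition; document it"
    if not human_focused_data:
        return "Likely not human-focused; document the determination"
    # Second part of the 'human subject' definition: intervention/interaction,
    # OR obtaining/using/generating identifiable private information.
    if intervention_or_interaction or identifiable_private_info:
        return "Likely AI HSR; route to full IRB screening"
    # Caution: linking large datasets can *generate* identifiable information,
    # so 'not identifiable today' is not a permanent answer.
    return "Gray area; assess re-identification risk before deciding"

# Example: a published model of patient behavior trained on voice recordings
# (voice prints are biometric identifiers):
print(screen_ai_project(human_focused_data=True,
                        generalizable_knowledge=True,
                        intervention_or_interaction=False,
                        identifiable_private_info=True))
```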

So Now What?
In contrast to the current Common Rule guidelines, the FDA and other regulatory bodies have published quite a bit of guidance around where algorithms fit within their larger framework of Software as a Medical Device and have numerous resources available for IRBs, sponsor investigators, and manufacturers.

So, until a definitive policy or guidance is set for AI HSR under the Common Rule, institutions may want to incorporate into their review processes some of the FDA considerations available now, even if the projects aren’t always FDA regulated. This encourages review consistency across projects and helps ensure that requirements such as the General Data Protection Regulation (GDPR) or 21 CFR Part 11, if applicable, are met. Note: flexibility is encouraged, depending on the project, as the protocol may not call for some, or any, of these additional protections.

Summary:
The current regulatory framework, including guidance from the FDA, which IRBs use in the oversight of human subject research, has been in place for decades and is updated regularly as society and research evolve. Most recently, for example, the Revised Common Rule brought about several changes intended to streamline processes and reduce regulatory burdens. While AI as a technology is not new, its use in human subject research is expanding at a rate at which we, as oversight bodies, can no longer take a “wait and see” approach. We are called to take action and are challenged to keep up with this rapidly changing technology as study designs begin to implement it for investigational and non-investigational purposes. Just as we’ve always done, we are being called to look at what we have and how to improve upon it to meet the changing field. I argue that if we start with shared definitions of “human subject” and “generalizable knowledge”, our mission will be much less challenging.

Blockchain: Are We Ready for it in Human Subject Research (HSR)?


There is a growing interest in the potential use of blockchain in regulated* Human Subject Research (HSR). Who would have thought this young (roughly 12 years old) distributed ledger technology could ever be used in human subject research? The technology is supposed to be good for promoting data integrity (e.g., transparency and security). Certainly, that kind of technology could attract both researchers and regulatory bodies because of its claims to provide a streamlined and safe approach to creating databases, obtaining electronic signatures, and sharing data securely.

One of the challenges IRBs face with any rapidly evolving technology introduced in HSR is trying to fit the tech into outdated human subject research regulations. To be fair, this challenge is not new. Researchers and IRBs alike still struggle with applying simple and straightforward regulations to studies that don’t even involve technology. For example, what constitutes “human subject” research? At what point does secondary data require IRB oversight, and when does it not? Even today, there are multiple interpretations of these seemingly straightforward regulations, and nationwide, determinations vary significantly depending on which IRB you talk to. You would think that the Revised Common Rule, in attempting to harmonize various regulations, would have simplified this process, but it didn’t. This, in and of itself, is going to pose a large challenge for broad and compliant utilization of blockchain technology in HSR.

That aside, in my assessment of blockchain use with HSR, there exist two main regulatory concerns:

  • Can it provide adequate protections for data storage and management?
  • Can the participants still be “fully informed”, and does the technology still allow their wishes to be honored in regard to how their data is (or is NOT) used in the proposed study now AND into the future?

In a webinar I hosted for CITIprogram, as well as in subsequent training modules we developed for regulatory bodies and researchers using Artificial Intelligence and Machine Learning (ML), my colleagues and I discussed the regulatory and ethical challenges of rapidly evolving technology such as AI/ML.

Clearly, AI/ML technology is not blockchain, and when we discussed this, our primary focus was on investigational devices and software, but the takeaway is the same: technology is advancing so rapidly that researchers will depend more and more on their Ethics Committees (ECs) and Institutional Review Boards (IRBs) to fit this constantly changing tech into outdated HSR regulations and meet established ethical principles and protections in human subject research.

Blockchain Isn’t an Investigational Device, so Why is Blockchain a Concern?
Since clinical trials and investigational medical devices (including but not limited to Software as a Medical Device (SaMD) and mobile medical devices) clearly fit within the HHS Common Rule (45 CFR 46) and FDA (21 CFR 56) regulations, not a lot of people debate their applicability to the regs. Blockchain, on the other hand, isn’t a medical device but rather technology used for data collection (including electronic signatures) and storage, and that is where the regs, especially 21 CFR Part 11, come in (see my other post on that here).

There are different opinions on what makes a product “Part 11 compliant”. In one article, Your Software and Devices Are Not HIPAA Compliant (Reinhardt, 2019), for example, the author claims that “there is no such thing as compliant software or a compliant device”. Charles et al. (2019) add, “blockchain-based technologies cannot be used or adopted for regulated clinical research unless they can demonstrate compliance with the applicable regulations.” So where does that leave blockchain for HSR, or for that matter, any software or device?

Learning About the Technology:
Reinhardt has an interesting point, but I don’t think compliance is “impossible” for just those reasons. It’s a legitimate concern, for sure, but the larger trouble researchers will unknowingly fall into if they use blockchain (or any advancing technology, for that matter) is that most IRBs are unaware of their role in ensuring compliance with the technological aspects of HSR. Most IRBs misunderstand that role to be solely that of their IT department, leaving significant gaps in regulatory oversight.

That gap means regulatory bodies are less able to mitigate privacy risks (principle of justice) and inform participants of what they are getting into (principle of respect for persons), because they just don’t know (or aren’t including themselves in those aspects of oversight). A simple solution is to embed those issues into the IRB application.

Another solution is to educate IRBs, IT support teams, and research personnel on how the technology works and how to apply the regulatory protections, long before technology is approved or implemented in a proposed research study. Should blockchain eventually make its way into regulated HSR, the ethical principles that form the foundation of all HSR regulations will certainly come into question.

For example, when addressing the principle of beneficence, how will researchers balance the risks and benefits of research when there are so many unknowns, such as who can access the data? Will it be identifiable? Will that data be added to other databases, making it identifiable, more sensitive, or more “risky”? Will there be limitations on future research, and how will that be controlled? When addressing the principle of Respect for Persons, how does one explain all of this in the context of complicated tech language? Can we expect the lay person to understand how their data will be used in blockchain technology and what control they have, or don’t have, over that data?

Some technology experts versed in the regs have suggested employing something similar to broad consent (where, once a consent form is signed, there are fewer limitations on who can use the data and how they use it). However, not all institutions have opted for broad consent because of the challenge of keeping track of all the people who said “No” to certain aspects, and of honoring their wishes across the life of that data. Some experts opportunistically and optimistically argue that blockchain is the answer to those problems. This could be worth looking into for institutions that opted into broad consent, but it is not a viable option for those that abstain. I, personally, have my reservations.
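As a thought experiment on why that bookkeeping is hard, here is a minimal sketch of what a consent-preference check might involve. Every identifier and field below is hypothetical; this describes no real system.

```python
# Illustrative only: the record keeping that makes broad consent hard.
# Every identifier and field below is hypothetical.

consent_preferences = {
    "participant_001": {"future_research": True,  "data_sharing": False},
    "participant_002": {"future_research": False, "data_sharing": False},
}

def may_use(participant_id: str, purpose: str) -> bool:
    """Check a participant's recorded choices before any data use."""
    prefs = consent_preferences.get(participant_id)
    # Default to 'no' when preferences are missing: the conservative choice.
    return bool(prefs and prefs.get(purpose, False))

# Before adding a record to a future-research dataset:
if may_use("participant_002", "future_research"):
    print("include record")
else:
    print("exclude record")  # this participant said 'No'
```

The hard part is not the lookup itself but keeping these preferences accurate, auditable, and enforced across every downstream copy of the data for its entire life, which is precisely where blockchain proponents claim to help.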

The Connection Between Ethics and Regulations:
Most of what I’ve seen published on blockchain and human subject research (HSR) narrowly focuses on how to get clinical trials compliant with the Common Rule and FDA regulations. What I’m not seeing a lot of discussion around is how this technology can or will meet the ethical principles, and how other (non-clinical) types of research (e.g., Social, Behavioral, and Educational Research) may be affected. In other words, how could blockchain serve those that aren’t bound to Part 11 but are obligated to abide by the Common Rule?

The good news is, most of the established regulations for HSR are built on ethical principles (like justice, beneficence, and respect for persons). So, in essence, the regulations currently in place are the “skeleton” through which established ethical principles are applied to federally regulated HSR. Because of that, applying ethical principles in advancing technology-related research (such as blockchain or Artificial Intelligence Human Subject Research (AI HSR)) can partially be resolved by ensuring the standard HSR protections governed by HHS, the FDA, and other institutional policies are met.

Understanding the Ethical Principles in HSR and How to Apply that in Blockchain
In HSR, the main principles are:

  • Justice
  • Beneficence
  • Respect for Persons

Those terms are very broad. I can’t emphasize enough how broad those terms are. The principle of Justice deals with a fair distribution of the benefits and burdens of research; how participants are selected (targeted for participation in that research); and an assessment of the research benefits in comparison to the risks. The principle of Beneficence deals with not causing harm (or at least minimizing harm); maximizing the possible benefits; and making an adequate risk assessment. The principle of Respect for Persons deals with respecting the autonomy of the participants; protecting those with diminished autonomy; and providing fully informed consent (or, at minimum, qualifying for a waiver of consent). The problem is a significant misunderstanding of the tech. If you don’t understand the tech, can you make a fair assessment of any of these?

How we interpret those very broad terms is based on the regulatory bodies’ subjective “logical reasoning”, and as you can imagine, that can vary significantly depending on who is reviewing the study and how they interpret or misinterpret the regs. Without technology and privacy experts, the determinations will be inconsistent at best. The type of reasoning involved here requires familiarity with the technology, which will guide how we apply the main principles in the protection of human subjects. In other words, regulatory bodies will need to enforce what we logically believe is required to adequately protect the data/biospecimens in the ways we’ve committed to (via the IRB-approved protocol application and other guiding factors, such as grant requirements), and we must limit the use of those data/biospecimens accordingly, using knowledge about that technology. Without understanding how blockchain works, can we, as regulatory bodies, be assured that blockchain, or whatever technology we are using, can achieve that aim?

Thinking outside the box

Sometimes advancing technology requires a different oversight approach. Take, for example, AI/ML: the technology advances so rapidly that submitting a modification for review and approval by the FDA and IRB could take months, causing significant delays in research and funding opportunities. To accommodate that technology, the FDA recently introduced an “Algorithm Change Protocol” (also known as a “predetermined change control plan”) that requires the manufacturer to commit to transparency by updating the FDA on what changes were made within the protocol, so that they don’t have to keep going back to the FDA for review every time the algorithm or methods change:

“This plan would include the types of anticipated modifications and the associated methodology being used to implement those changes in a controlled manner that manages risks to patients.”
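Purely as an illustration of what “types of anticipated modifications and the associated methodology” could look like when written down in a structured form, here is a hypothetical sketch. The fields and values are my own invention; consult the FDA’s actual guidance for what a real plan must contain.

```python
# Hypothetical structure for a predetermined change control plan entry.
# Field names and values are invented for illustration; consult the FDA's
# actual guidance for what a real plan must contain.

algorithm_change_protocol = {
    "anticipated_modifications": [
        {
            "type": "retraining on new data",
            "methodology": "retrain quarterly on newly labeled records, "
                           "holding the model architecture fixed",
            "risk_controls": ["performance must not drop below baseline",
                              "rollback plan if monitoring flags drift"],
        },
        {
            "type": "alert threshold adjustment",
            "methodology": "tune the threshold on a held-out validation set",
            "risk_controls": ["clinical review of the changed alert rates"],
        },
    ],
}

for mod in algorithm_change_protocol["anticipated_modifications"]:
    print(f"{mod['type']}: {mod['methodology']}")
```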

While this specific approach might not be applicable to blockchain, it invites creativity in new solutions for other technologies introduced into HSR. Are there alternative approaches to being “flexibly” compliant with blockchain? For example, could our main ethical principles, the building blocks of HSR federal regulations, be met via a “checklist” of technical reviews and IRB process and procedure checks that support the product in meeting compliance requirements?

There are hundreds of sets of principles for the ethical use of AI, published all around the world by numerous institutions, including IEEE’s Ethically Aligned Design, and all make considerable contributions to how manufacturers, governments, researchers, and IRBs should approach this advancing technology. These don’t really focus on how to make HSR more “ethical”, but rather on how to keep the tech “ethical”. As a result, I lean less on “picking and choosing” from the broad sets of principles available and more on how we can even simply meet the current set of HSR ethical principles (listed above). That, in itself, is a task, as I learned with AI/ML.

Piloting Blockchain:
Introducing new technology to sensitive research data is not something to take lightly, and some experts have suggested piloting HSR using blockchain technology, but even pilot studies carry risk. While I like the pilot approach, I recommend starting with non-clinical-trial (CT) studies and then moving on to CTs once all limitations are identified and remedied. This pilot could start with an innocuous single-site, secondary-data, de-identified research pilot that utilizes waivers of consent with adults; then advance to consent procedures with adults, then take on semi-sensitive data, and eventually move on to collaborative studies.

Non-Covered Entities: Are They Exempt from Worrying About All This?
So long as the research is funded (directly or indirectly) by federal sources, the regulations and ethical principles apply. In fact, even if an institution is not bound to HSR regulations, there may be institutional policies, state laws, and other requirements that apply (e.g., CCPA, GDPR, etc.). Additionally, software or hosting companies that store PHI may require a BAA if they are not a study collaborator. Study collaborators, whether covered or non-covered entities, would likely need to establish and adhere to a Data Sharing/Data Use Agreement (DSA/DUA) as well.

Regulatory Considerations for Creating a Research Database:
A Unified Definition of Terms

Identifiable or Not?

Blockchain has the ability to create and maintain research databases. However, there are several types of “databases”, and each is treated differently in regulated HSR, depending on the intent of that database. For example, a medical database strictly used for clinical purposes versus a research database made for a specific research protocol will have different requirements. Registries and repositories established for broader uses outside the original research protocol will also require additional processes and protections. When designing a database using blockchain, I recommend requiring some sort of “smart” flowchart that strictly adheres to established definitions of what is considered “identifiable” and flags or requires the relevant regulations and ethical principles accordingly.

What makes something identifiable or not is key. HSR regulations largely depend on that specific definition, so a logic-based approach to building that database would help ensure the database truly meets the criteria of being de-identified or not. We cannot rely on the researchers to make these determinations, and most research institutions, as a result, embed policies in their Human Research Protection Program (HRPP) that prohibit researchers from “self-determining” what qualifies as regulated research. The reason is that researchers do not share the same definitions as their regulatory bodies and oftentimes claim their data is de-identified when it doesn’t actually meet the criteria. For example, biometric data and audio/visual recordings are considered “identifiers”, yet even today many researchers believe that if a database excludes a first name, it automatically becomes “de-identified”, even with a multitude of other direct and indirect identifiers.
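Here is a minimal sketch of what that logic-based check could look like. The identifier list below is deliberately abbreviated and illustrative; a real implementation would encode the full HIPAA Safe Harbor list and your institution’s own definitions.

```python
# Minimal sketch of a logic-based "identifiable or not?" check.
# The identifier list is deliberately abbreviated; a real check would encode
# the full set of direct AND indirect identifiers your institution uses.

KNOWN_IDENTIFIERS = {
    "name", "email", "phone", "medical_record_number",
    "voice_recording", "face_image", "gait_data",   # biometrics count!
    "full_date_of_birth", "street_address",
}

def flag_identifiers(dataset_fields):
    """Return the fields that keep a dataset from being de-identified."""
    return set(dataset_fields) & KNOWN_IDENTIFIERS

# Removing first names does NOT automatically de-identify a dataset:
fields = {"age", "diagnosis", "voice_recording", "street_address"}
found = flag_identifiers(fields)
if found:
    print("Not de-identified; identifiers present:", sorted(found))
```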

Intent and Aims of Research:
Intent and aims of research also play an important role in the definition of HSR. When databases (even public ones) that may individually be considered not human subject research are merged, reidentification becomes more likely. In healthcare-related research and big data research, these risks are amplified, as such datasets are a common target of hacking. Researchers must make their long-term intentions for the use of the data clear, and submit modification requests to the IRB if those intentions change.

Blockchain Limitations:
The following are areas that I am still trying to familiarize myself with but nevertheless question. I will update this blog as I become more familiar:

Can authorized users be restricted from accessing the data if their institution stops being a collaborating site mid-study?

When a study is complete, how would one go about destroying the data or identifiers? If data was collected out of compliance and an EC/IRB asked them to destroy it, or if a PI wanted to move all data to another institution and remove the old institution, could that be done?

Some studies are focused primarily on “watching” prospectively collected data “live” (as it is collected via EHR, etc.) and training algorithms to identify triggers that could signal the need for a life-saving intervention. Are there delays in communicating study results to the research team if a study relies primarily on blockchain systems to collect, store, and distribute data for research purposes? If so, could that risk outweigh the benefit?

With or without blockchain technology, data entry errors pose a problem for data integrity, and that problem is exacerbated with immutable blockchains. What then happens to the erroneous data? Would users be redirected to the “fixed” data, rather than the errors being replaced, via a linked chain of uniform resource locators?
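To picture what being “redirected to the fixed data” could mean in practice, here is a toy append-only log in which corrections supersede, rather than replace, erroneous records. This is a simplification of my own, not how any particular blockchain actually works.

```python
# Toy append-only log: erroneous entries are never deleted, only superseded.
# A simplification of the "linked correction" idea, not a real chain.

ledger = []  # each entry: {"key": ..., "value": ..., "supersedes": index|None}

def append(key, value, supersedes=None):
    ledger.append({"key": key, "value": value, "supersedes": supersedes})
    return len(ledger) - 1

def latest(key):
    """Resolve reads to the most recent non-superseded entry for a key."""
    superseded = {e["supersedes"] for e in ledger if e["supersedes"] is not None}
    for i in range(len(ledger) - 1, -1, -1):
        if ledger[i]["key"] == key and i not in superseded:
            return ledger[i]["value"]
    return None

bad = append("subject_42_bp", "180/1100")           # data entry error
append("subject_42_bp", "180/110", supersedes=bad)  # correction; error stays visible
print(latest("subject_42_bp"))  # -> 180/110; the audit trail keeps both entries
```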

If a breach were to occur, could the investigation be conducted appropriately? Additionally, limits on combining secondary de-identified datasets with other datasets (unless an ethics committee and a technology expert review and approve the safety of the combined dataset) should be required for studies involving sensitive data (alcoholism and drug studies, HIV studies, sexual orientation, etc.).

We need to keep in mind group-level risks and protections (for specific ethnic groups, etc.) and not focus narrowly on individual risks.

Who will be responsible for paying for this technology, and who would be responsible for maintaining and validating it regularly? I’d imagine that the computational power and the people needed to run the technology would cost quite a bit, especially in the long run.

Researchers have little desire to maintain research records for longer than sponsor and federal retention policies require. How would immutable technology appeal to researchers with short-term projects?

*Research subject to 45 CFR 46 or 21 CFR 50/56 regulations.

RESOURCES:

https://www.frontiersin.org/articles/10.3389/fbloc.2019.00018/full
https://www.leewayhertz.com/cost-of-blockchain-implementation/
https://azati.ai/how-much-does-it-cost-to-blockchain/
https://www.tameyourpractice.com/blog/your-software-and-devices-are-not-hipaa-compliant/

What is Part 11 and How Do I Comply?

This is my interpretation of:

• When Part 11 is required
• What Part 11 is asking for
• How PIs, IRBs, and their IT support team can work together to meet Part 11 requirements

The goal of this piece is to provide my interpretation and analysis of Part 11 and how IRBs, PIs, and the IT support team can work together in applying it to HHS-regulated Human Subject Research (HSR). You can also skip the text below and simply view the PowerPoint presentation:

Getting Your Platform Part 11 Compliant
Background Information

21 CFR Part 11 is not new to research. It actually became effective way back in 1997. However, with rapidly evolving technology and the sudden shift to virtual research due to COVID-19, researchers are scrambling to move in-person interventions and informed consent online. Prior to that, in 2019, HHS revised the Common Rule (45 CFR 46) in many ways, one of which was how signatures could be obtained. The “Revised Common Rule” now allows researchers, in some (not all) cases, to treat electronic signatures the same as wet signatures. Unfortunately, some of the Revised Common Rule changes were not carried over into the similar regulations held by the FDA (21 CFR 56). That means that if a study is not FDA-regulated, researchers only need to focus on one set of regulations. It also means that if an HHS-regulated study is also FDA-regulated (as is often the case in clinical trials), two different requirements for electronic signatures (and electronic records) apply. Part 11 is the FDA regulation, with accompanying guidance, on how to meet these requirements.

Part 11 Involves a Lot of Technical Stuff. Does the IRB Have to Get Involved?
(What Needs to Be Done and Who Does What?)
Getting compliant requires the collaborative effort of the study team and Sponsor, the IT support team, and the IRB. However, because the requirements largely involve technical documentation, validation, and testing, there is a lot of confusion as to who does what, and how they go about doing it, in order to get the platforms and software used in research compliant with Part 11. Some IRBs may believe that getting Part 11 compliant is a job entirely for the institution’s IT office. Similarly, some IT support teams attempt to meet all the technical requirements but assume the rest (processes and procedures) are up to the IRB and study team, leaving gaps in human subject protections. Sponsors may even push all the responsibilities onto the study team. This makes for a frustrating and oftentimes overly complicated, time-consuming activity that prevents research from starting in a timely manner.

The following table describes (my interpretation of) what must be addressed and documented in the research procedures when preparing the platform/software for Part 11 compliance, as well as who takes on what role to make that all happen:

| # | Documentation Item | Description | PI / Study Team / Sponsor | IRB | IT Support |
|---|---|---|---|---|---|
| 1 | Validation | Documenting procedures for internal and external auditors to show how the system can be trusted (accurate, reliable, consistently performing as intended, and able to discern invalid or altered records) | (i) Follow IT guidance; (ii) document on IRB application and in study files how this will be done | Ensure description is on the IRB application | Provide guidance to PI on how this can be done |
| 2 | Record Maintenance and Retention | Making sure all electronic records can be provided in language and format that humans (not just computers) understand | (i) Follow IT guidance; (ii) document in study files how this will be done | N/A | Provide guidance to PI on how this can be done |
| 3 | System Access | Ensuring only authorized individuals have access to the system | (i) Follow IT guidance; (ii) document on IRB application and in study files how this will be done | Ensure description is on the IRB application | Provide guidance to PI on how this can be done |
| 4 | Record Retention (Storage and Maintenance) | Protecting documentation and making it readily available if needed for auditing or other reasons, as well as storing it for the required duration | (i) Follow IT guidance; (ii) document on IRB application and in study files how this will be done | Ensure storage and maintenance duration and procedures are on the IRB application | Provide guidance to PI on how this can be done |
| 5 | Study Personnel Roles and Accountability | Holding individuals accountable for their actions related to electronic records and signatures | Document in study files how this will be done | Review that study personnel qualifications and training, and a description of study team roles, are on the IRB application | N/A |
| 6 | Workflows | Making sure (via operational system checks) electronic workflows function correctly and as expected | (i) Follow IT guidance; (ii) document in study files how this will be done | N/A | Provide guidance to PI on how this can be done |
| 7 | Authority Checks | Limiting user access (at both system and record level) and verifying each user only performs their authorized function(s) | (i) Follow IT guidance; (ii) document on IRB application and in study files how this will be done | Review that (i) the protocol application, (ii) study personnel qualifications and training, and (iii) a description of study team roles are on the IRB application | Provide guidance to PI on how this can be done |
| 8 | Study Team Qualifications | Confirming study team qualifications and training are complete and relevant | (i) Follow IT guidance; (ii) document on IRB application and in study files how this will be done | Review that (i) study personnel qualifications and training, and (ii) a description of study team roles are on the IRB application | N/A |
| 9 | Device Checks | Verifying the validity of the source of data input and proper operational functions | (i) Follow IT guidance; (ii) document in study files how this will be done | N/A | Provide guidance to PI on how this can be done |
| 10 | Document Control | Controlling documents for system operation and maintenance, including preserving a complete history of changes made to these documents | (i) Follow IT guidance; (ii) document in study files how this will be done | N/A | Provide guidance to PI on how this can be done |
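As a toy illustration of items 3 and 7 above (system access and authority checks), the core idea is role-based gating plus an audit trail of every attempt. Everything here is a hypothetical sketch of my own, not a validated Part 11 system.

```python
# Toy sketch of "system access" + "authority checks" (table items 3 and 7).
# Hypothetical roles and actions; a real Part 11 system needs validation,
# secure authentication, and tamper-evident audit trails.

from datetime import datetime, timezone

AUTHORIZED_ACTIONS = {
    "coordinator": {"enter_data", "view_records"},
    "pi": {"enter_data", "view_records", "sign_records"},
}

audit_trail = []

def perform(user: str, role: str, action: str) -> bool:
    """Allow only authorized actions, and log every attempt."""
    allowed = action in AUTHORIZED_ACTIONS.get(role, set())
    audit_trail.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user, "role": role, "action": action, "allowed": allowed,
    })
    return allowed

perform("jdoe", "coordinator", "sign_records")   # denied, but still logged
perform("asmith", "pi", "sign_records")          # allowed
for entry in audit_trail:
    print(entry)
```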

Some hints, tips, and resources:

  • For more information on getting various platforms compliant, I found this video really helpful.
  • There is no such thing as an “out of the box” compliant software, so start talking to your Sponsor, IRB, and IT support team early in the research planning phase (don’t wait until the last minute or you may end up delaying the start of research).
  • UCSF has great guidance on getting compliant on their website.

Applying Ethics in The Review of Research


To what extent do research ethical concerns fall under the purview of an IRB?

I was recently asked this question at a round-table. It is a broad question, which requires a broad answer.

In many ways, ethical concerns fall heavily under the purview of the IRB, and in some ways they do not. We have several ethical principles and frameworks that are currently applied in our review processes (e.g., the Nuremberg Code, Declaration of Helsinki, Belmont Report, 21st Century Cures Act, the 2017 changes to the Common Rule, etc.).

This guidance and these regulations governing research with human subjects have evolved in sophistication and complexity and will continue to evolve frequently as we evolve technologically.

The core ethical values of the Nuremberg Code and the Belmont principles (respect for persons, beneficence, and justice) guide the design and review of ethical research. “How” we apply these principles, however, is decided primarily on a study-by-study basis. Naturally, invasive procedures and drug studies are likely to receive more scrutiny than a minimal risk study.

The combination of these principles with federal regulations (DHHS/OHRP/Common Rule and FDA, as well as other federal agencies’ individual regulations) creates a structure that accommodates our desire as a society to expand our scientific knowledge while ensuring that we protect and show compassion for the people who volunteer for our research studies.

It’s important to acknowledge that regulations on their own do not guarantee that a study will be ethical. However, these regulations do guide us (ethics committees, IRBs, as well as the investigators themselves) to rely on human judgment in the design of research, how IRBs/ECs make decisions, and how investigators interact with their human subjects. Together, we (PIs, IRBs, etc.) drive research forward.

An IRB member may be strongly opposed to the research on personal grounds, and those grounds may be valid and important for the Board to consider. They should not, however, by themselves stand in the way of the review and approval of the research. This is one reason why we have a diverse Board. If every single member of that Board shared the same concern, then the issue is clearly one to be examined carefully, and it may result in delays in approval.

While it would be inappropriate to disapprove a study on the grounds of one person thinking it was unethical in some way, it is expected and appropriate for reviewers/board members to voice any ethical concerns they may have (or any concerns, for that matter).

To clarify the application of ethics in research review: IRBs aren’t here to “pass judgment” but rather to apply the ethical principles and ensure the research complies with applicable laws and regulations while protecting the rights and welfare of human subjects.