IRBs tread lightly when it comes to oversight of AI human subjects research (AI HSR). This may stem from an insufficient understanding of when AI research involves human subjects, or from a fear of committing scope creep (whose role is it to ensure responsible and ethical AI in human subjects research?). Admirably, in response, some have proposed establishing commercial AI ethics committees, while others try to fit AI ethics review into an ancillary review process. Ancillary AI ethics committees either take on the look and feel of a scientific review committee or treat the process like an Institutional Biosafety Committee (IBC) or Stem Cell Research Oversight (SCRO) committee. I argue that IRBs can (and should) fit AI HSR within their current framework in many significant and meaningful ways without committing scope creep.
Admittedly, the current framework has limitations, whether the research is AI HSR or any other type. However, moving AI HSR oversight to an ancillary committee is not an efficient solution for researchers, who will still have to navigate the IRB for their projects in addition to these extra bureaucratic hoops. Ancillary AI HSR committees only delay approval and disincentivize compliance. Rather than build a new AI HSR IRB or ancillary review committee, we need to provide and require AI HSR education and training for IRB administration, and remind the IRB of its duty to ensure that relevant experts sit on the Board when reviewing specific research.
While it may be ideal for institutions with no IRB to outsource their reviews, for institutions with a home IRB there are multiple downsides to outsourcing AI HSR oversight. Below are a few that come to mind:
1) Cost: The study team may need to plan for additional funding if the review isn’t free (i.e., when it isn’t done in-house). Additional reviews for modifications or annual renewals may also be required, adding to that cost.
2) Duplication of effort: An AI Research Review Committee (AIRC) typically acts as an ancillary review alongside IRB review. However, many if not all of the issues it reviews would parallel the IRB’s review, duplicating effort, time, and money.
3) No binding regulatory power: If an AIRC (or any AI ancillary review) recommends changes to a protocol, the committee likely won’t have any regulatory “teeth,” meaning researchers will not be required (or inclined) to comply with its “suggestions.” Additionally, those suggestions may or may not make their way to the IRB unless infrastructure is established that keeps the two committees “talking to each other.”
4) Sustainability: The institution would need to develop and maintain a sustainable administrative process for the committee.
The key to AI HSR ethical review and research compliance oversight is a focus on the data. AI/ML performance depends in large part on the model, but even more so on the data. Therefore, the IRB’s attention should be weighted more heavily toward the data used to train the model than toward the algorithm or model itself. IRBs are better suited to address data concerns than technology concerns (though the technology may require additional risk assessment by the IT department). These issues can be addressed with a quality AI HSR checklist, adequate board member training, and the addition of an AI and data expert to the review board. Ancillary and commercial AI HSR committees are innovative and helpful in their own ways, but none of them addresses the rudimentary issue at the forefront of AI HSR oversight: we already have the tools and protections in place. We simply need to better understand and utilize them.
We have a lot of work to do! I’ve created an Artificial Intelligence Human Subjects Research (AI HSR) IRB Reviewer Checklist to get this dialogue started.
You can find it in the Creative Commons under an Attribution-NonCommercial-ShareAlike license. Please feel free to distribute, remix, adapt, and build upon the material for noncommercial purposes only (modified material must be distributed under identical terms).