There is growing interest in the potential use of blockchain in regulated* Human Subject Research (HSR). Who would have thought this young (roughly 12-year-old) distributed ledger technology could ever be used in human subject research? The technology is touted as good for promoting data integrity (e.g., transparency and security). Certainly, that kind of technology could attract both researchers and regulatory bodies because of its claims to provide a streamlined, safe approach to creating databases, obtaining electronic signatures, and sharing data.
One of the challenges IRBs face with any rapidly evolving technology introduced in HSR is trying to fit the tech into outdated human subject research regulations. To be fair, this challenge is not new. Researchers and IRBs alike still struggle with applying simple and straightforward regulations to studies that don’t even involve technology. For example, what constitutes “human subject” research? At what point does secondary data require IRB oversight, and when does it not? Even today, there are multiple interpretations of these seemingly straightforward regulations, and nationwide, determinations vary significantly depending on which IRB you talk to. You would think that the Revised Common Rule, which attempted to harmonize various regulations, would simplify this process, but it didn’t. This, in and of itself, is going to pose a large challenge for broad and compliant utilization of blockchain technology in HSR.
That aside, in my assessment of blockchain use with HSR, there exist two main regulatory concerns:
- Can it provide adequate protections for data storage and management?
- Can the participants still be “fully informed,” and can the technology still allow their wishes to be honored in regard to how their data is (or is NOT) used in the proposed study now AND into the future?
In a webinar I hosted for CITIprogram, as well as in subsequent training modules we developed for regulatory bodies and researchers using Artificial Intelligence (AI) and Machine Learning (ML), my colleagues and I discussed the regulatory and ethical challenges of rapidly evolving technology such as AI/ML.
Clearly, AI/ML technology is not blockchain, and when we discussed this, our primary focus was on investigational devices and software, but the takeaway is the same: technology is advancing so rapidly that researchers will depend more and more on their Ethics Committees (ECs) and Institutional Review Boards (IRBs) to fit this constantly changing tech into outdated HSR regulations and meet established “ethical” principles and protections in human subject research.
Blockchain Isn’t an Investigational Device, so Why is Blockchain a Concern?
Since clinical trials and investigational medical devices (including but not limited to Software as a Medical Device (SaMD) and mobile medical devices) clearly fit within the HHS Common Rule (45 CFR 46) and FDA (21 CFR 56) regulations, not many people debate their applicability to the regs. Blockchain, on the other hand, isn’t a medical device but rather technology used for data collection (including electronic signatures) and storage, and that is where the regs, especially 21 CFR Part 11, come in (see my other post on that here).
There are different opinions on what makes a product “Part 11 compliant.” In one article, Your Software and Devices Are Not HIPAA Compliant (Reinhardt, 2019), for example, the author claims that “there is no such thing as compliant software or a compliant device.” Charles et al. (2019) add, “blockchain-based technologies cannot be used or adopted for regulated clinical research unless they can demonstrate compliance with the applicable regulations.” So where does that leave blockchain for HSR, or for that matter, any software or device?
Learning About the Technology:
Reinhardt has an interesting point, but I don’t think compliance is “impossible” for just those reasons. It’s a legitimate concern, for sure, but the larger trouble researchers will unknowingly fall into if they use blockchain (or any advancing technology, for that matter) is that most IRBs are unaware of their role in ensuring compliance with the technological aspects of HSR. Most IRBs assume that role belongs solely to their IT department, leaving significant gaps in regulatory oversight.
That gap means regulatory bodies are less able to mitigate privacy risks (the principle of justice) and inform participants of what they are getting into (the principle of respect for persons), because they just don’t know, or aren’t including themselves in those aspects of oversight. A simple solution is embedding those issues directly into the IRB application.
Another solution: long before approving or implementing technology in a proposed research study, educate IRBs, IT support teams, and research personnel on how the technology works and how to apply the regulatory protections. Should blockchain eventually make its way into regulated HSR, the ethical principles that are the foundation of all HSR regulations will certainly come into question.
For example, when addressing the principle of beneficence, how will researchers balance the risks and benefits of research when there are so many unknowns, such as who can access the data? Will it be identifiable? Will that data be added to other databases, making it identifiable, more sensitive, or more “risky”? Will there be limitations on future research, and how will that be controlled? When addressing the principle of respect for persons, how does one explain all of this amid complicated tech language? Can we expect the lay person to understand how their data will be used in blockchain technology and what control they have, or don’t have, over that data?
Some technology experts versed in the regs have suggested employing something similar to broad consent (where, once a consent form is signed, there are fewer limitations on who can use the data and how), but not all institutions have opted for broad consent because of the challenge of tracking everyone who said “No” to certain aspects and honoring those wishes across the life of the data. Some experts optimistically argue that blockchain is the answer to those problems. This could be worth looking into for institutions that have opted into broad consent, but it is not a viable option for those that have not. I, personally, have my reservations.
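The bookkeeping burden described above can be made concrete: under broad consent, every participant’s refusals must be checked against each proposed secondary use for the life of the data. Here is a minimal sketch of that check; the record structure and the use categories are hypothetical, not taken from any regulation or platform:

```python
from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    """One participant's broad-consent choices (hypothetical categories)."""
    participant_id: str
    refused_uses: set = field(default_factory=set)  # e.g., {"genetic", "commercial"}


def permitted(record: ConsentRecord, proposed_use: str) -> bool:
    """A proposed secondary use is allowed only if the participant never refused it."""
    return proposed_use not in record.refused_uses


# A participant who said "No" to commercial use of their data:
rec = ConsentRecord("P-001", refused_uses={"commercial"})
assert permitted(rec, "genetic")          # allowed
assert not permitted(rec, "commercial")   # refusal must be honored indefinitely
```

The hard part, of course, is not this lookup but guaranteeing it is actually consulted by every downstream system that touches the data, for as long as the data exists, which is exactly the tracking problem institutions cite when declining broad consent.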
The Connection Between Ethics and Regulations:
Most of what I’ve seen published on blockchain and human subject research (HSR) narrowly focuses on how to get clinical trials compliant with the Common Rule and FDA regulations. What I’m not seeing much discussion around is how this technology can or will meet the ethical principles, and how other (non-clinical) types of research (e.g., Social, Behavioral, and Educational Research) may be affected. In other words, how could blockchain serve those that aren’t bound to Part 11 but are obligated to abide by the Common Rule?
The good news is that most of the established regulations for HSR are built on ethical principles (like justice, beneficence, and respect for persons). In essence, the regulations currently in place are the “skeleton” through which established ethical principles are applied to federally regulated HSR. Because of that, applying ethical principles in advancing technology-related research (such as blockchain or Artificial Intelligence Human Subject Research (AI HSR)) can partially be resolved by ensuring the standard HSR protections governed by HHS, FDA, and other institutional policies are met.
Understanding the Ethical Principles in HSR and How to Apply that in Blockchain
In HSR, the main principles are:
- Justice
- Beneficence
- Respect for Persons
Those terms are very broad; I can’t emphasize enough how broad. The principle of justice deals with a fair distribution of the benefits and burdens of research; how participants are selected (targeted for participation in that research); and an assessment of the research benefits in comparison to the risks. The principle of beneficence deals with not causing harm (or at least minimizing harm); maximizing the possible benefits; and making an adequate risk assessment. The principle of respect for persons deals with respecting the autonomy of participants; protecting those with diminished autonomy; and providing fully informed consent (or, at minimum, qualifying for a waiver of consent). The problem is a significant misunderstanding of the tech. If you don’t understand the tech, can you make a fair assessment of any of these?
How we interpret those very broad terms is based on the regulatory bodies’ subjective “logical reasoning,” and as you can imagine, that can vary significantly depending on who is reviewing the study and how they interpret (or misinterpret) the regs. Without technology and privacy experts, the determinations will be inconsistent at best. This type of reasoning requires familiarity with the technology, and that familiarity will guide how we apply the main principles in the protection of HSR. In other words, regulatory bodies will need to enforce what we logically believe is required to adequately protect the data/biospecimens in the ways we’ve committed to (via the IRB-approved protocol application and other guiding factors such as grant requirements), and we must limit the use of those data/biospecimens accordingly, using knowledge about that technology. Without understanding how blockchain works, can we, as regulatory bodies, be assured that blockchain, or whatever technology we are using, can achieve that aim?
Thinking outside the box
Sometimes advancing technology requires a different oversight approach. Take AI/ML, for example: the technology advances so rapidly that submitting a modification for review and approval by the FDA and IRB could take months, causing significant delays in research and funding opportunities. To accommodate that technology, the FDA recently introduced an “Algorithm Change Protocol” (also known as a “predetermined change control plan”) that requires the manufacturer to commit to transparency by updating the FDA on what changes were made within the protocol, so they don’t have to keep going back to the FDA for review every time the algorithm or methods change:
“This plan would include the types of anticipated modifications and the associated methodology being used to implement those changes in a controlled manner that manages risks to patients.”
While this specific approach might not be applicable to blockchain, it invites creative new solutions for other technologies introduced into HSR. Are there alternative approaches to being “flexibly” compliant with blockchain? For example, could our main ethical principles, the building blocks of HSR federal regulations, be met via a “checklist” of technical reviews and IRB process and procedure checks that support the product in meeting compliance requirements?
There are hundreds of sets of principles for the ethical use of AI, published around the world by numerous institutions, including IEEE’s Ethically Aligned Design, and all have made considerable contributions to how manufacturers, governments, researchers, and IRBs should approach this advancing technology. These don’t really focus on how to make HSR more “ethical,” though, but rather on how to keep the tech “ethical.” As a result, I lean less on “picking and choosing” from the broad sets of principles available and more on how we can simply meet the current set of HSR ethical principles (listed above). That, in itself, is a task, as I learned with AI/ML.
Piloting Blockchain:
Introducing new technology to sensitive research data is not something to take lightly, and some experts have suggested piloting HSR using blockchain technology, but even pilot studies carry risk. While I like the pilot approach, I recommend starting with non-clinical-trial (CT) studies and moving on to CTs once all limitations are identified and remedied. Such a pilot could start with an innocuous single-site study of de-identified secondary data that utilizes consent waivers with adults; then advance to consent procedures with adults; then take on semi-sensitive data; and eventually move on to collaborative studies.
Non-Covered Entities: Are They Exempt from Worrying About All This?
So long as the research is funded (directly or indirectly) by federal sources, the regulations and ethical principles apply. In fact, even if an institution is not bound by HSR regulations, there may be institutional policies, state laws, and other rules that apply (e.g., CCPA, GDPR). Additionally, software or hosting companies that store PHI may require a Business Associate Agreement (BAA) if they are not a study collaborator. Study collaborators, as covered or non-covered entities, would likely also need to establish and adhere to a Data Sharing/Data Use Agreement (DSA/DUA).
Regulatory Considerations for Creating a Research Database:
A Unified Definition of Terms
Identifiable or Not?
Blockchain has the ability to create and maintain research databases. However, there are several types of “databases,” and each is treated differently in regulated HSR depending on the intent of that database. For example, a medical database strictly used for clinical purposes and a research database made for a specific research protocol will have different requirements. Registries and repositories established for broader uses outside the original research protocol will also require additional processes and protections. When designing a database using blockchain, I recommend requiring some sort of “smart” flowchart that strictly adheres to established definitions of what is considered “identifiable” and flags or requires the relevant regulations and ethical principles accordingly.
What makes something identifiable or not is key. HSR regulations largely depend on that specific definition, so a logic-based approach to building the database would help ensure it truly meets the criteria for being de-identified. We cannot rely on the researchers to make these determinations, and most research institutions, as a result, embed policies in their Human Research Protection Program (HRPP) that disallow their researchers from “self-determining” what qualifies as regulated research. The reason is that researchers do not share the same definitions as their regulatory bodies and often claim their data is de-identified when it actually doesn’t meet the criteria. For example, biometric data and audio/visual recordings are considered “identifiers,” yet even today many researchers believe that if a database excludes a first name, it automatically becomes “de-identified,” even with a multitude of other direct and indirect identifiers.
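The logic-based approach above can be sketched very simply: a dataset is flagged “identifiable” whenever any of its fields falls into a configured list of identifier categories, rather than relying on a researcher’s self-assessment. The category list below is an abbreviated, illustrative subset inspired by the HIPAA Safe Harbor identifiers; a real implementation would encode the institution’s full regulatory definition, and the column names are hypothetical:

```python
# Abbreviated, hypothetical subset of identifier categories (a real system
# would encode the institution's complete regulatory definition, e.g. all
# HIPAA Safe Harbor categories).
IDENTIFIER_CATEGORIES = {
    "name", "address", "date_of_birth", "phone", "email", "ssn",
    "medical_record_number", "biometric_data", "audio_recording",
    "video_recording", "full_face_photo", "ip_address",
}


def is_deidentified(columns: list[str]) -> bool:
    """A dataset is de-identified only if NO column matches an identifier category."""
    return not any(col in IDENTIFIER_CATEGORIES for col in columns)


# Dropping "name" alone does not de-identify a dataset that keeps recordings:
assert not is_deidentified(["age_range", "diagnosis", "audio_recording"])
assert is_deidentified(["age_range", "diagnosis"])
```

The point of encoding the definition as rules is exactly the one made above: the determination comes from the regulatory body’s definition, applied mechanically, not from the researcher’s intuition about what “de-identified” means.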
Intent and Aims of Research:
The intent and aims of research also play an important role in the definition of HSR. When databases (even public ones) that may individually be considered not-human-subject research are merged, re-identification becomes more likely. In healthcare-related and big-data research, these risks are amplified, as such datasets are common targets of hacking. Researchers must make their long-term intentions for the use of the data clear and submit modification requests to the IRB if those intentions change.
Blockchain Limitations:
The following are areas that I am still trying to familiarize myself with but nevertheless question. I will update this blog as I become more familiar:
Can authorized users be restricted from accessing the data if they stop being a collaboration site mid-study?
When a study is complete, how would one go about destroying the data or identifiers? If data was collected out of compliance and an EC/IRB asked that it be destroyed, or if a PI wanted to move all data to another institution and remove it from the old one, could that be done?
Some studies are focused primarily on “watching” prospectively collected data “live” (as it is collected via EHR, etc.) and training algorithms to identify triggers that could be a sign of requiring a life-saving intervention. Are there delays in communicating study results to the research team, if a study relies primarily on blockchain systems used to collect, store, and distribute data for research purposes? If so, could that risk outweigh the benefit?
With or without blockchain technology, data entry errors pose a problem for data integrity, and that problem is exacerbated with immutable blockchains. What then happens to the erroneous data? Would users be redirected to the “fixed” data instead of the original being replaced, via the “linked chain by uniform resource locators”?
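One commonly described pattern for handling errors on an append-only ledger is the one hinted at above: the erroneous entry is never altered, but a correction is appended that points back to it, and readers resolve each record to its latest version. The following is a sketch of that idea in plain Python, not a claim about how any particular blockchain platform implements corrections; all names here are illustrative:

```python
# Sketch of an append-only log with "supersedes" pointers: the erroneous
# entry stays in place, but queries resolve to the most recent correction.
ledger = []  # append-only list of entries


def append(record_id: str, value: str, supersedes=None) -> int:
    """Add an entry; optionally mark which earlier entry index it corrects."""
    ledger.append({"id": record_id, "value": value, "supersedes": supersedes})
    return len(ledger) - 1  # index of the new entry


def resolve(record_id: str):
    """Return the latest value for a record; earlier versions remain visible."""
    latest = None
    for entry in ledger:
        if entry["id"] == record_id:
            latest = entry["value"]
    return latest


i = append("subj-42-bp", "180/110")           # erroneous entry, never deleted
append("subj-42-bp", "130/80", supersedes=i)  # correction appended later
assert resolve("subj-42-bp") == "130/80"      # readers see the corrected value
```

Note that this only answers the technical half of the question: the bad value is still permanently readable on the chain, which is precisely why error handling on immutable systems remains a regulatory concern rather than a solved problem.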
If a breach were to occur, could the investigation be conducted appropriately? Separately, for studies involving sensitive data (alcoholism and drug studies, HIV studies, sexual orientation, etc.), limits on combining secondary de-identified datasets with other sets could be required unless an ethics committee and a technology expert review and approve the safety of the combined dataset.
We need to keep in mind group-level risks and protections (e.g., for specific ethnic groups) and not narrowly focus on individual risks.
Who will be responsible for paying for this technology, and who would be responsible for maintaining and validating it regularly? I’d imagine that the computational power it requires and the people needed to run it would cost quite a bit, especially in the long run.
Researchers have little desire to maintain research records longer than sponsor and federal retention policies require. How would immutable technology appeal to researchers with short-term projects?
*Research subject to 45 CFR 46 or FDA (21 CFR 50/56) regulations.
RESOURCES:
https://www.frontiersin.org/articles/10.3389/fbloc.2019.00018/full
https://www.leewayhertz.com/cost-of-blockchain-implementation/
https://azati.ai/how-much-does-it-cost-to-blockchain/
https://www.tameyourpractice.com/blog/your-software-and-devices-are-not-hipaa-compliant/