Those involved in the development, testing and deployment of technologies used in the digital health research sector include technology developers or ‘tool makers’, funders, researchers, research participants and journal editors. As the ‘Wild West’ of digital health research moves forward, it is important to recognize who is involved, and to identify how each party can and should take responsibility to advance the ethical practices of this work.
Who is involved?
In the twentieth century, research was carried out by scientists and engineers affiliated with academic institutions in tightly controlled environments. Today, biomedical and behavioral research is still carried out by trained academic researchers; however, they are now joined by technology giants, startup companies, non-profit organizations, and everyday citizens (e.g. the do-it-yourself and quantified-self movements). The biomedical research sector now looks very different, and the lines have blurred because the kind of product research carried out by the technology industry has, historically, not been required to follow the same rules that protect research participants. As a result, there is potential for elevated risk of harm. Moreover, whether and how research is carried out to assess a product’s effectiveness varies widely in standards and methods, and, when the technology has health implications, those standards become critically important. In addition, not all persons who initiate research are regulated or professionally trained to design studies. With respect to regulation, academic research environments require the involvement of an ethics board (known as an institutional review board [IRB] in the USA, and a research ethics committee [REC] in the UK and European Union). IRB review is a federal mandate for entities that receive US federal funding to conduct health research. The ethics review is a peer review process that evaluates proposed research and identifies and reduces potential risks to research participants. Having an objective peer review process is not a requirement for technology giants, startup companies or for those who identify with the citizen science community [10, 21]; however, we have a societal responsibility to get this right.
What questions should be asked?
When using digital health technologies, a first step is to ask whether the tools, be they apps, sensors or AI applied to large data sets, have demonstrated value with respect to outcomes. Are they clinically effective? Do they measure what they purport to measure (validity) consistently (reliability)? For example, a recent review of the predictive validity of models for suicide attempts and death found that the positive predictive value of most models is currently less than 1%, a level at which they are not yet deemed clinically viable [22]. Will these innovations also improve access for those at highest risk of health disparities? To answer these questions, it is critical that all involved in the digital health ecosystem do their part to ensure that the technologies are designed and scientifically tested in keeping with accepted ethical principles, are considerate of privacy, effectiveness, accessibility and utility, and follow sound data management practices. However, government agencies, professional associations, technology developers, academic researchers, technology startups, public organizations and municipalities may be unaware of what questions to ask, including how to evaluate new technologies. In addition, not all tools being used in the digital health ecosystem undergo rigorous testing, which places the public at risk of being exposed to untested and potentially flawed technologies.
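To see why predictive values this low can arise, the sketch below works through the underlying base-rate arithmetic. It is a minimal illustration only, not an analysis from the cited review [22]; the prevalence, sensitivity and specificity figures are assumed values chosen to show how a model that appears accurate can still yield a positive predictive value below 1% when the outcome is rare.

```python
# Illustrative sketch only: why predictive models for rare outcomes can have a
# positive predictive value (PPV) below 1%. The prevalence, sensitivity and
# specificity figures are assumptions for illustration, not values from [22].

def positive_predictive_value(prevalence: float, sensitivity: float, specificity: float) -> float:
    """PPV via Bayes' rule: the probability of the outcome given a positive prediction."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Suppose the outcome occurs in 0.1% of the screened population in a given period,
# and the model correctly flags 80% of true cases while clearing 90% of non-cases.
ppv = positive_predictive_value(prevalence=0.001, sensitivity=0.80, specificity=0.90)
print(f"Positive predictive value: {ppv:.2%}")  # roughly 0.79%, i.e. below 1%
```

Even with these seemingly strong operating characteristics, the rarity of the outcome means that false positives overwhelm true positives, which is why demonstrating clinical value requires more than reporting accuracy in isolation.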
Demonstrating value must be a precursor to the use of any technologies that claim to improve clinical treatment or population health. Value is based on the product being valid and reliable, which means that scientific research is needed before a product is deployed within the health sector [12]. We should also not move ahead assuming that privacy and the technology revolution are mutually exclusive. We are in a precarious position in which, without standards to shape acceptable and ethical practices, we collectively run the risk of harming those who stand to benefit most from digital health tools.
Decision-making framework
While there are ongoing discussions about the need for regulations and laws, and incremental progress is being made on that front, it is essential that, until some consensus is reached, stakeholders recognize their obligation to promote the integrity of digital health research [23]. The digital health decision-making domains framework (Fig. 1) was developed to help researchers make sound decisions when selecting digital technologies for use in health research [24, 25]. While originally developed for researchers, this framework is applicable to the various stakeholders who evaluate and select digital technologies for use in health research and healthcare. The framework comprises five domains: 1, Participant Privacy; 2, Risks and Benefits; 3, Access and Usability; 4, Data Management; and 5, Ethical Principles. These five domains are presented as intersecting relationships.
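As a purely illustrative sketch, the five domains can be thought of as a simple structure that a stakeholder works through when evaluating a candidate technology. The domain names below come from the framework, but the example questions and the code itself are hypothetical placeholders rather than the published checklist [25].

```python
# Hypothetical sketch: the five domain names come from the framework, but the
# example questions are illustrative placeholders, not the published checklist [25].

from dataclasses import dataclass, field


@dataclass
class Domain:
    name: str
    questions: list[str] = field(default_factory=list)


DECISION_MAKING_DOMAINS = [
    Domain("Participant Privacy", ["Who can access the data the technology collects?"]),
    Domain("Risks and Benefits", ["Has the technology been tested with the intended population?"]),
    Domain("Access and Usability", ["Can participants with limited digital literacy use it?"]),
    Domain("Data Management", ["Where are the data stored, and for how long?"]),
    Domain("Ethical Principles", ["Does the study design respect autonomy, beneficence and justice?"]),
]


def review(domains: list[Domain]) -> None:
    """Print each domain's questions so a reviewer can consider them in turn."""
    for domain in domains:
        print(domain.name)
        for question in domain.questions:
            print(f"  - {question}")


review(DECISION_MAKING_DOMAINS)
```

The point of the sketch is only that each domain translates into concrete questions that a stakeholder answers before adopting a technology; in practice, the published checklist itself should be used.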
The domains in this framework were developed into a checklist tool to further facilitate decision-making. The checklist was informed by developmental research involving a focus group discussion and a design exercise with behavioral scientists [25]. To demonstrate how the decision-making domains can be put into practice, we present a use case that illustrates the complexities and nuances that are important for stakeholders to consider.