Overcoming Obstacles and Reaping Benefits While Protecting Patients
While the stated objective (improving the quality and efficiency of care) was noble, the response to the partnership was clouded by mistrust of Google’s announced commitment to keep these data private and separate from the company’s other data, and by questions regarding the propriety of health care organizations sharing patient data with technology companies. After the story became public and federal regulators announced their intent to investigate the arrangement, leaders at both organizations issued statements defending the deal.
This was not the first time that data-sharing arrangements between health care delivery organizations and digital companies have generated public controversy. A partnership announced in 2016 between Google subsidiary DeepMind and England’s Royal Free NHS Trust raised concerns, as did a Google collaboration with the University of California, San Francisco; Stanford University; and the University of Chicago, announced in 2017. The latter case led to a lawsuit against the University of Chicago and Google, recently discussed by Cohen and Mello. They noted that these data-sharing arrangements illustrate the need to modernize the Health Insurance Portability and Accountability Act (HIPAA), which they suggested is “a 20th-century statute ill equipped to address 21st-century data practices.”
Although the arrangement between Google and Ascension appears to comply with HIPAA and other relevant regulations, the concerns illustrate how fraught such data-sharing arrangements between health care delivery organizations and large digital companies can be and highlight the need for new standards that allow society to leverage the opportunities that technology offers patients while ensuring privacy.
The challenges are a function of medicine’s relatively recent digitization. Remarkably, a decade ago fewer than 1 in 10 US hospitals collected and stored patient data in digital form. Today, virtually all US hospitals do, largely in electronic health records (EHRs). While EHRs have garnered criticism, the digitization of the medical record has, on balance, been a positive development. Clinicians’ notes are now legible, and most prescriptions can be sent to pharmacies electronically. Many EHRs have basic alerting functions (such as for medication allergies), some are beginning to integrate more sophisticated clinical decision support (eg, for sepsis alerts), and most connect (with varying degrees of ease) to patient-facing portals and to tools and apps built by third parties.
While patient data are collected locally in EHRs, virtually all hospitals and clinics enter into what are commonly known as business associate agreements to share certain data with outside vendors, who agree to keep the data secure as they analyze them for various purposes, ranging from submitting a bill to studying care patterns to identify opportunities for improvement. This form of data sharing is entirely legal. However, new developments are changing the context for these agreements, creating the need for updated rules and ethical standards that will permit appropriate data sharing while ensuring the level of security and transparency that patients and regulators rightfully expect.
First, the ever-present threat of hacking requires that data collected by health care systems be stored in industrial-strength off-site clouds rather than in on-site servers. This is not unique to health care. Financial data are stored using such methods, as are highly sensitive data collected by government intelligence agencies and the military. Most data of this type are stored in clouds built and managed by 1 of 3 digital giants (Amazon, Google, or Microsoft)—generating concerns among some even when the arrangements are purported to be solely about data storage.
Second, advances in machine learning and artificial intelligence that have transformed other industries are rapidly coming to medicine. Medical artificial intelligence is now poised to suggest appropriate diagnoses, cue the physician about changes in a patient’s status, or choose the most effective cancer treatment based on the patient’s genetic makeup and tumor characteristics. While it is currently unknown whether these technologies will ultimately improve patient outcomes or lower costs, the demand for them is certain to increase. The problem is that few hospitals and clinics can afford the equipment and experts needed to apply these techniques to their clinical data, and the digital companies that have these capabilities need data drawn from actual patients.
These factors are leading to more partnerships between health care delivery organizations and digital companies. Yet trust in the digital giants, particularly consumer-facing companies such as Google and Facebook, is low and rapidly declining. A combination of thoughtful regulation and clear ethical principles is needed to help strengthen public confidence in data-sharing arrangements that have the potential to improve health care outcomes.
Transparency is crucial. The response to and perception of the arrangement between Ascension and Google might have been different had information about the arrangement come to light through a public announcement explaining the potential benefits and addressing potential concerns rather than via a journalist’s exposé of a codeword-protected arrangement. For instance, the September 2019 public announcement of a partnership between Google and the Mayo Clinic generated relatively little controversy.
Moreover, patients have a right to participate in discussions about how their data will be used. Here too, the announcement might have been received differently had the arrangement between Ascension and Google been vetted and meaningfully shaped by a visible patient-community advisory group.
Further, the issue of identified vs deidentified data is central. Health care professionals need to be able to securely store data that include relevant patient information in the clouds of digital companies. But for arrangements in which such companies are analyzing the data for research purposes or to create new products, the information they receive should be stripped of all elements that could identify an individual patient (eg, name, birth date, address). The science of deidentifying patient data is advancing, but so is the ability of sophisticated computer systems to reidentify patients from deidentified information. There is some hope that new techniques to construct “synthetic patients,” which capture all the patterns and associations of actual patients but are immune to reidentification, will replace deidentification as a way to safely share data.
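To make the distinction concrete, the minimal sketch below (in Python, with hypothetical field names and a deliberately abbreviated identifier list) illustrates rule-based stripping of direct identifiers and coarsening of quasi-identifiers in the spirit of HIPAA’s Safe Harbor standard. It is an illustration of the idea, not a complete or validated deidentification pipeline, and, as noted above, rule-based stripping alone cannot guarantee protection against reidentification.

    # Minimal sketch of Safe Harbor-style deidentification of a patient record.
    # Field names and the identifier list are illustrative, not a complete
    # implementation of the HIPAA Safe Harbor standard.

    from datetime import date

    # Direct identifiers to drop entirely (illustrative subset).
    DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn", "mrn"}

    def deidentify(record: dict) -> dict:
        """Return a copy of `record` with direct identifiers removed,
        dates coarsened to year, and ZIP codes truncated to 3 digits."""
        out = {}
        for key, value in record.items():
            if key in DIRECT_IDENTIFIERS:
                continue  # drop direct identifiers outright
            if isinstance(value, date):
                out[key] = value.year  # keep only the year of any date
            elif key == "zip":
                out[key] = str(value)[:3] + "00"  # coarsen ZIP to 3 digits
            else:
                out[key] = value
        return out

    if __name__ == "__main__":
        record = {
            "name": "Jane Doe",
            "mrn": "12345678",
            "birth_date": date(1954, 3, 9),
            "zip": "94143",
            "diagnosis_code": "E11.9",
        }
        print(deidentify(record))
        # {'birth_date': 1954, 'zip': '94100', 'diagnosis_code': 'E11.9'}

The retained fields (year of birth, truncated ZIP, diagnosis code) show why reidentification remains a risk: even coarsened quasi-identifiers, combined across data sets, can sometimes single out an individual, which is the motivation for the synthetic-patient approaches described above.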
In addition, the issue of individual patient consent always arises in discussions about data sharing. Although individual consent is the ideal, operationalizing consent processes would be challenging, particularly because it is not clear how to provide care for a patient who declined data sharing when a health care system routinely works with business partners on issues such as billing and quality improvement. For some patients (eg, those who have dementia), it may be impossible to obtain consent as traditionally defined, yet it would be important to analyze their data. Notwithstanding these challenges, feasible models for patient consent should be explored for projects that involve research rather than day-to-day operations. Cohen and Mello favor the use of institutionally based, broadly representative data access committees that include patients specifically trained for their oversight roles. This is a complex issue, and it would be worth bringing together interested groups (including patients and patient advocates) to consider such models.
Other issues that need discussion revolve around the economics of these partnerships. Should the financial arrangements between health care organizations and digital companies be disclosed? Should patients benefit financially from sharing their data or from any income generated by their data? Should individual employees of health care institutions benefit financially from data sharing? This last question was at the core of a controversy involving a pathology artificial intelligence company created by leaders at Memorial Sloan Kettering.
An important concern about the public reaction to health care data sharing, as illustrated by the Ascension-Google case, is that such arrangements will be made illegal, or that the political fallout will cause risk-averse health care organizations to eschew them. While some might see this as a positive outcome, ultimately it could prevent patients and clinicians from realizing improvements in quality, safety, convenience, and efficiency that will be facilitated by intelligently created and ethically delivered artificial intelligence and machine learning. Health care needs the help of digital experts and technologies but also needs clear regulations and ethical standards to ensure that inappropriate data sharing is precluded, that useful data sharing is facilitated, and that patients’ personal health information is protected.
Corresponding Author: Robert M. Wachter, MD, Department of Medicine, University of California, San Francisco, 505 Parnassus Ave, Room M994, San Francisco, CA 94143 (robert.wachter@ucsf.edu).
Published Online: January 16, 2020. doi:10.1001/jama.2019.21215