By Beth Pitman and Shannon Britton Hartsfield
Hardly a day seems to pass that we don't hear about new advances in artificial intelligence (AI). The healthcare industry has seen its share of technological developments related to the use of AI to improve the quality and efficiency of patient care. A recent high-profile lawsuit, Dinerstein v. Google LLC et al., serves to remind us that new technology is still governed by long-standing laws and regulations, including the Health Insurance Portability and Accountability Act (HIPAA).
Back in 2017, Google teamed up with a hospital to develop and test AI technology to study how electronic medical records could be used to improve healthcare quality. The collaboration focused on the use of machine-learning techniques to predict hospitalizations and identify instances of declining patient health. To build the algorithm, Google and the hospital exchanged a "limited data set" of patient information. "Limited data set" is a term defined in HIPAA: it is protected health information that excludes certain direct identifiers, such as names, addresses, Social Security numbers and medical record numbers. It is not completely de-identified, but it can include a limited amount of information such as a ZIP code, date of service and similar items. As required by HIPAA, the parties entered into a "data use agreement." In exchange for the hospital providing the data, the data use agreement provided that the hospital would receive a "non-exclusive, perpetual license" to use "Trained Models and Predictions" created by Google. This was considered a form of research.
In 2019, Google and the hospital were sued in federal court in a potential class-action lawsuit accusing the hospital of sharing hundreds of thousands of patient records that contained identifiable date stamps and doctors' notes. The plaintiff alleged a variety of claims, including breach of the duty of medical confidentiality, breach of contract arising through the Notice of Privacy Practices and HIPAA authorizations, invasion of privacy and contract injury based on an alleged "sale" of protected health information (PHI) and a right to royalty payments for use of the PHI. The district court found insufficient allegations of injury arising from the claims, as well as a lack of standing to assert certain claims, and granted the defendants' motion to dismiss. The plaintiff appealed, and a federal appeals panel recently again dismissed the claims based on a lack of standing and absence of injury.
The court ruled that merely possessing data, or allegedly having the capability to re-identify the data, did not support the plaintiff's claims. For a violation to occur, there must be bad intent or an actual bad act. This is significant when considering the way AI uses data for its learning process and the potential cumulative impact of all data sources on the partially de-identified data being accessed and used by the AI.
As healthcare providers and tech companies continue to work together, privacy officers and providers must understand the specific details of how the AI interacts with patient information from all sources, including de-identified data from the providers. They must then analyze the facts to determine how HIPAA and other privacy rights apply and what specifically is required for compliance. Additionally, the Dinerstein decision underscores the importance of having a HIPAA-compliant Notice of Privacy Practices and, when required, authorization forms. Providers must give patients adequate explanations of what information is collected, how the information is processed and how it might be used.
All too often, covered entities either ignore the HIPAA obligation to provide Notices of Privacy Practices or simply copy the government's sample notice or some other organization's notice without really considering what it says. The government's model notice says that the covered entity will never sell information unless the patient gives written permission. The lower court in Dinerstein v. Google LLC et al. found that similar language was more stringent than HIPAA and potentially prevented some disclosures permitted by HIPAA, such as an exchange of PHI for a reasonable cost-based fee when the PHI is to be used for research. Although it is not feasible for a notice to contain complete discussions of each and every HIPAA exception, as attorneys practicing in this area, we include a caveat that written authorization will be obtained for a sale of PHI unless the sale is otherwise permitted by HIPAA.
Business associate agreements routinely permit de-identification of PHI. Once de-identified, data uses are no longer regulated by HIPAA. In Dinerstein, the plaintiff alleged that the information was re-identified through Google's independent collection of data and, for that reason, was not de-identified data. The courts did not accept this argument, but it raises questions regarding when PHI is actually de-identified in accordance with HIPAA. Consideration should be given to a business associate's request to de-identify data; to the de-identification process and its confirmation, including certification that the data may not be re-identified; and to the purpose for which de-identification is being performed.
The case raises interesting HIPAA regulatory issues that will need to be considered for any artificial intelligence project, including whether a particular endeavor triggers HIPAA's "research" provisions, whether data is properly de-identified, whether the right agreements are in place, whether a data transfer is a prohibited "sale" and what promises were made to patients about their data through the Notice of Privacy Practices or otherwise.
Beth Pitman is a partner in Holland & Knight's Birmingham, Alabama, office. Shannon Britton Hartsfield is a partner in the firm's Tallahassee, Florida, office.