Safe Harbor Versus The Statistical Method
To leverage protected health information (PHI) for secondary purposes, organizations need to understand the available de-identification mechanisms. Under the U.S. Health Insurance Portability and Accountability Act (HIPAA), there are two methods of de-identification: Safe Harbor and the Statistical Method (also known as Expert Determination). While both fall under the HIPAA Privacy Rule, they are not the same.
Safe Harbor offers a rules-based approach with a prescriptive list of patient identifiers that must be masked for the data to be considered compliant. The Statistical Method requires that an expert, typically a statistician familiar with the properties of the data, apply techniques to transform identifiers so that the risk of re-identification is very small.
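To make the contrast concrete, the sketch below shows what a rules-based, Safe Harbor-style scrub of a single record could look like. It is a minimal illustration only: the field names, the identifier list, and the masking rules are hypothetical simplifications, not the full set of eighteen Safe Harbor identifiers and not any specific product's API.

```python
# Illustrative only: a toy, rules-based scrub in the spirit of Safe Harbor.
# The columns and rules here are hypothetical examples, not the complete
# list of HIPAA Safe Harbor identifiers.

from datetime import date

# Hypothetical record for a single patient
record = {
    "name": "Jane Doe",
    "zip_code": "10027",
    "birth_date": date(1948, 3, 14),
    "diagnosis_code": "E11.9",   # clinical values are retained
}

# Fields treated as direct identifiers under this toy rule set
DIRECT_IDENTIFIERS = {"name"}

def safe_harbor_like_scrub(rec: dict) -> dict:
    """Apply simple, prescriptive masking rules to one record."""
    out = dict(rec)
    for field in DIRECT_IDENTIFIERS:
        out[field] = "REDACTED"
    # Keep only the first three ZIP digits (a common Safe Harbor-style rule)
    out["zip_code"] = rec["zip_code"][:3] + "00"
    # Generalize dates to year only; ages over 89 would need further grouping
    out["birth_date"] = rec["birth_date"].year
    return out

print(safe_harbor_like_scrub(record))
```

Under the Statistical Method, by contrast, there is no fixed checklist: the expert selects and tunes transformations based on a measured assessment of re-identification risk in the specific dataset.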
This white paper discusses what each method entails, how each protects your organization, and how both can enable better data for analytics, research, or monetization.