Strengthening Secondary Use
Data registries are an invaluable tool for evidence-based healthcare research, but they depend on the secondary use of patient data. Concern for patient confidentiality has always been a factor in data sharing. Today, the growing prevalence of data linking between repositories of patient information is heightening privacy risks and driving that concern to a new level.
To provide comprehensive, detailed data on specific patient populations, disease registries link patient information from electronic medical records (EMRs) with claims data and other administrative files. While these linkages provide a more complete picture of the patient experience, they also associate a greater number of direct and indirect identifiers with an individual patient record. The more data that is available on an individual, the greater the chance that the individual can be re-identified.
Anonymizing patient data to remove protected health information (PHI) is essential before information from a disease registry can be disclosed. However, when data is shared for research and analysis, it must also retain its analytic utility. While other methods exist to remove PHI, statistical de-identification is the optimal approach to anonymizing patient data, particularly when it is to be shared for secondary use: it maintains data quality while ensuring that the data is truly anonymized.
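To make the idea concrete, the following is a minimal sketch of one common statistical de-identification technique, generalization measured against k-anonymity. It is an illustration only, not the method described in this white paper: direct identifiers are dropped, quasi-identifiers (age, ZIP code) are generalized into broader buckets, and the smallest group of indistinguishable records determines k. The field names and bucketing rules are assumptions chosen for the example.

```python
from collections import Counter

def generalize(record):
    """Drop direct identifiers and generalize quasi-identifiers:
    bucket age into 10-year ranges, truncate ZIP to its first 3 digits."""
    decade = record["age"] // 10 * 10
    return {
        "age_range": f"{decade}-{decade + 9}",
        "zip3": record["zip"][:3],
        "diagnosis": record["diagnosis"],  # retained for analytic utility
    }

def k_anonymity(records):
    """Size of the smallest equivalence class over the quasi-identifiers;
    the dataset is k-anonymous for this value of k."""
    classes = Counter((r["age_range"], r["zip3"]) for r in records)
    return min(classes.values())

# Hypothetical registry extract (names and values are invented).
patients = [
    {"name": "A", "age": 34, "zip": "90210", "diagnosis": "asthma"},
    {"name": "B", "age": 37, "zip": "90212", "diagnosis": "diabetes"},
    {"name": "C", "age": 31, "zip": "90214", "diagnosis": "asthma"},
]
anonymized = [generalize(p) for p in patients]
print(k_anonymity(anonymized))  # all three records share one class -> 3
```

A real de-identification program would weigh re-identification risk statistically across many more quasi-identifiers and choose generalization levels that preserve as much analytic detail as possible; this sketch only shows the trade-off in miniature.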
In discussing how registries can responsibly share their data with researchers, this white paper reviews the option of patient consent versus de-identification and explains how statistical de-identification can provide researchers with the highest-quality data for their needs.