Uppsala Reports editor
Case narratives can be a valuable source of insight about adverse drug reactions, potentially adding details that are less readily available – or not available at all – from the more structured, standardised data fields of individual case safety reports. But sharing case narratives raises legal and ethical duties to protect the confidentiality of patients' private details.
Currently, removing personal identifiers from case narratives is a manual task that can be both time-consuming and tedious. However, new work from UMC suggests there may be a better way.
At the Data Innovation Summit 2019, held in Stockholm in March, UMC data scientist Eva-Lisa Meldau presented research into automatically de-identifying case narratives to protect patient privacy.
The UMC researchers trained neural networks on more than 500 medical records, developing a deep learning algorithm that “read” the narratives in multiple ways to predict identifying data, assisted by standard natural language processing tools and manually constructed rules and dictionary queries.
“We developed the system to be conservative in a way that it only retains words in the text if it is highly confident that they are not personal identifiers,” said Meldau.
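The conservative principle Meldau describes – keep a word only when the system is highly confident it is not a personal identifier – can be sketched in a few lines. This is purely a toy illustration, not UMC's actual system: the scorer, threshold, and placeholder token below are all invented for the example, and a real system would use a trained model rather than hand-written rules.

```python
# Toy sketch of conservative de-identification (not UMC's implementation).
# We assume a per-token scorer returning the confidence that a token is
# NOT a personal identifier; the threshold and rules are invented.

SAFE_THRESHOLD = 0.95  # assumed cutoff: retain a token only above this confidence


def toy_not_identifier_score(token):
    """Stand-in for a trained model's confidence that a token is safe to keep.

    Faked here with crude rules: capitalised words and tokens containing
    digits score low (they could be names or dates); everything else scores
    high. A real model would learn these confidences from annotated data.
    """
    if token[0].isupper() or any(c.isdigit() for c in token):
        return 0.10
    return 0.99


def conservative_deidentify(text, score=toy_not_identifier_score,
                            threshold=SAFE_THRESHOLD):
    """Replace every token the scorer is unsure about with a placeholder,
    erring on the side of removing too much rather than too little."""
    out = []
    for token in text.split():
        out.append(token if score(token) >= threshold else "[REMOVED]")
    return " ".join(out)


print(conservative_deidentify("patient Maria reported nausea on 2019-03-04"))
# → patient [REMOVED] reported nausea on [REMOVED]
```

Erring toward over-removal is exactly the trade-off the article notes below: the system sacrifices some non-personal text in exchange for stronger privacy protection.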
The results have been encouraging, suggesting that the algorithm could be trained to perform as well as or possibly better than a human annotator, albeit at the expense of removing more non-personal text.
The algorithm has so far only been developed using medical records from a de-identification challenge. “As we fine-tune the algorithm with annotated original narratives, we expect the performance to improve even further,” said Meldau.