Background Delivering care that is consistent with patient preferences is considered the outcome of successful advance care planning interventions. However, patient preferences are often difficult to ascertain within clinical notes, and thus difficult to extract and utilize in clinical or evaluative settings. The objective of this study is to demonstrate the efficiency and accuracy of two natural language processing (NLP) methods in identifying documentation of code status limitations and patient care preferences within the free text of clinical notes.
Methods Rule-based and machine learning NLP methods were developed and trained on a dataset of 449 clinical notes derived from the Multiparameter Intelligent Monitoring of Intensive Care (MIMIC) III database. Human annotators identified instances of code status limitation and patient care preference documentation in a second validation dataset of 192 clinical notes. We then assessed the performance of the rule-based and machine learning NLP methods in identifying code status limitation and patient care preference documentation in the validation dataset.
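As a minimal sketch of the rule-based approach described above, the snippet below matches a small set of hypothetical keyword patterns against free-text notes. The patterns shown (e.g. "DNR", "comfort measures only") are illustrative assumptions, not the study's actual rule set, which is not reported in this abstract.

```python
import re

# Hypothetical keyword patterns for code status limitations.
# These are illustrative only; the study's actual rules are not
# given in the abstract.
CODE_STATUS_PATTERNS = [
    r"\bDNR\b",
    r"\bDNI\b",
    r"\bdo[- ]not[- ]resuscitate\b",
    r"\bcomfort measures only\b",
]

def has_code_status_limitation(note_text: str) -> bool:
    """Return True if any rule pattern matches the free-text note."""
    return any(
        re.search(pattern, note_text, flags=re.IGNORECASE)
        for pattern in CODE_STATUS_PATTERNS
    )
```

In practice, a rule set like this would be iteratively refined against the training notes before being frozen for evaluation on the validation set.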
Results Machine learning NLP identified documentation with a sensitivity ranging from 85.1% to 98.3% and a specificity ranging from 91.0% to 97.0%. Performance of rule-based NLP was comparable, identifying documentation of code status limitations with a sensitivity of 98.3% and a specificity of 97.7%, and patient care preferences with a sensitivity of 81.5% and a specificity of 83.0%.
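The metrics reported above follow the standard definitions: sensitivity is the fraction of annotator-identified documentation instances the NLP method detected, and specificity is the fraction of notes without such documentation that it correctly passed over. A minimal helper, with illustrative counts (not the study's actual confusion matrix, which the abstract does not report):

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Compute sensitivity (TP / (TP + FN)) and specificity (TN / (TN + FP))."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative counts only; the abstract does not report the
# underlying confusion matrix.
sens, spec = sensitivity_specificity(tp=85, fn=15, tn=91, fp=9)
```

With these made-up counts, the helper yields a sensitivity of 0.85 and a specificity of 0.91.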
Conclusions NLP methods are reliable tools for identifying information related to patient care preferences within clinical notes. Machine learning NLP may be better suited to identify documentation of conversations that vary in the way they are recorded, such as conversations related to goals of care.