
Exploiting Hierarchy in Medical Concept Embedding

Overview
Journal: JAMIA Open
Date: 2021 Mar 22
PMID: 33748691
Citations: 5
Abstract

Objective: To construct and publicly release a set of medical concept embeddings for codes following the ICD-10 coding standard, which explicitly incorporate hierarchical information from medical codes into the embedding formulation.

Materials and Methods: We trained concept embeddings using several new extensions to the Word2Vec algorithm on a dataset of approximately 600,000 patients from a major integrated healthcare organization in the Mid-Atlantic US. Our concept embeddings included additional entities to account for the medical categories assigned to codes by the Clinical Classifications Software Refined (CCSR) dataset. We compared these results to sets of publicly released pretrained embeddings and alternative training methodologies.
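
To illustrate the co-training idea, here is a minimal sketch, assuming a gensim-style Skip-Gram setup: each patient's ICD-10 code sequence is augmented with CCSR category tokens so that codes and categories share one embedding space. The code-to-category mapping and patient sequences below are hypothetical stand-ins, not the paper's data or released implementation.

```python
# Minimal sketch of co-training ICD-10 codes with CCSR category tokens
# via gensim's Skip-Gram Word2Vec. Mapping and sequences are hypothetical.
from gensim.models import Word2Vec

# Hypothetical ICD-10 -> CCSR category mapping (illustrative tokens only).
ccsr_category = {
    "E11.9": "CAT:END005",    # type 2 diabetes, no complications
    "I10": "CAT:CIR007",      # essential hypertension
    "J45.909": "CAT:RSP009",  # unspecified asthma
}

# Each "sentence" is one patient's sequence of diagnosis codes.
patients = [
    ["E11.9", "I10"],
    ["J45.909", "I10"],
    ["E11.9", "J45.909", "I10"],
]

# Interleave category tokens with codes so both entity types appear in
# each other's context windows and are trained jointly.
augmented = [
    [tok for code in seq for tok in (code, ccsr_category[code])]
    for seq in patients
]

model = Word2Vec(
    sentences=augmented,
    vector_size=128,  # embedding dimension
    window=5,
    min_count=1,
    sg=1,             # Skip-Gram variant
    epochs=10,
)

# Nearest neighbors now mix codes and category tokens.
print(model.wv.most_similar("E11.9"))
```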

Results: We found that Word2Vec models that included hierarchical data outperformed ordinary Word2Vec alternatives on tasks comparing naïve clusters to canonical ones provided by CCSR. Our Skip-Gram model with both codes and categories achieved 61.4% normalized mutual information with canonical labels, compared to 57.5% for traditional Skip-Gram. In models predicting two different outcomes, we found that including hierarchical embedding data improved classification performance 96.2% of the time. When controlling for all other variables, we found that co-training embeddings improved classification performance 66.7% of the time. All models outperformed our competitive benchmarks.
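
The cluster comparison can be outlined in a few lines: a minimal sketch, assuming scikit-learn, that clusters the learned code vectors with k-means and scores them against CCSR categories via normalized mutual information. It reuses the hypothetical `model` and `ccsr_category` from the sketch above and is not the authors' evaluation code.

```python
# Minimal sketch: score embedding clusters against canonical CCSR labels
# with normalized mutual information (NMI). Reuses `model`/`ccsr_category`.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

# Keep only code vectors; category tokens are excluded from clustering.
codes = [t for t in model.wv.index_to_key if not t.startswith("CAT:")]
vectors = np.stack([model.wv[c] for c in codes])
true_labels = [ccsr_category[c] for c in codes]

# One cluster per canonical CCSR category.
kmeans = KMeans(n_clusters=len(set(true_labels)), n_init=10, random_state=0)
pred_labels = kmeans.fit_predict(vectors)

print(f"NMI vs. CCSR labels: "
      f"{normalized_mutual_info_score(true_labels, pred_labels):.3f}")
```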

Discussion: We found significant evidence that our proposed algorithms can express the hierarchical structure of medical codes more fully than ordinary Word2Vec models, and that this improvement carries forward into classification tasks. As part of this publication, we have released several sets of pretrained medical concept embeddings using the ICD-10 standard which significantly outperform other well-known pretrained vectors on our tested outcomes.

Citing Articles

Large language models improve transferability of electronic health record-based predictions across countries and coding systems.

Kirchler M, Ferro M, Lorenzini V, Lippert C, Ganna A. medRxiv. 2025.

PMID: 39974125. PMC: 11838679. DOI: 10.1101/2025.02.03.25321597.


Unified Clinical Vocabulary Embeddings for Advancing Precision Medicine.

Johnson R, Gottlieb U, Shaham G, Eisen L, Waxman J, Devons-Sberro S. medRxiv. 2024.

PMID: 39677476. PMC: 11643188. DOI: 10.1101/2024.12.03.24318322.


A novel, machine-learning model for prediction of short-term ASCVD risk over 90 and 365 days.

Gazit T, Mann H, Gaber S, Adamenko P, Pariente G, Volsky L. Front Digit Health. 2024; 6:1485508.

PMID: 39552935. PMC: 11564171. DOI: 10.3389/fdgth.2024.1485508.


Longitudinal Multimodal Transformer Integrating Imaging and Latent Clinical Signatures From Routine EHRs for Pulmonary Nodule Classification.

Li T, Still J, Xu K, Lee H, Cai L, Krishnan A. Med Image Comput Comput Assist Interv. 2024; 14221:649-659.

PMID: 38779102. PMC: 11110542. DOI: 10.1007/978-3-031-43895-0_61.


Hypergraph Transformers for EHR-based Clinical Predictions.

Xu R, Ali M, Ho J, Yang C. AMIA Jt Summits Transl Sci Proc. 2023; 2023:582-591.

PMID: 37350881. PMC: 10283128.

