
# HPL-Modified

Modifications to the Histomorphological Phenotype Learning pipeline.

The original HPL paper by Quiros et al. is available at https://www.nature.com/articles/s41467-024-48666-7.

## Authors

- Yumi Briones - yb2612@nyu.edu, Yumi.Briones@nyulangone.org
- Jennifer Motter - mottej02@nyu.edu, Jennifer.Motter@nyulangone.org
- Alyssa Pradhan - amp10295@nyu.edu, Alyssa.Pradhan@nyulangone.org

## Repo structure

- `docs` - documentation
- `scripts` - scripts for automation (e.g., bash scripts)
- `src` - source code
- `notebooks` - Jupyter notebooks

## About the data

All data are from the original HPL repository: https://github.com/AdalbertoCq/Histomorphological-Phenotype-Learning.

1. For initial training, we used a 250k subsample of LUAD and LUSC samples: LUAD & LUSC 250K subsample
2. For the complete train, validation, and test sets, we used: LUAD & LUSC datasets
3. To get the original HPL tile embeddings, we used: LUAD & LUSC tile vector representations
4. To get the original HPL-HPC assignments, we used: LUAD vs LUSC type classification and HPC assignments

## Modifications

### HPL-CLIP

Point person: Yumi Briones

Paper: https://arxiv.org/pdf/2103.00020

Tutorial: https://github.com/yumibriones/HPL-Modified/blob/main/docs/HPL-CLIP_tutorial.md

To enable multimodal learning, we integrated OpenAI's Contrastive Language-Image Pre-training (CLIP), via the open_clip implementation, into the HPL pipeline.


Briefly, we first generated a text caption for each image incorporating age, gender, and smoking status. We then trained a CLIP model with the ViT-B-32 architecture on these image-text pairs. Finally, we generated HPCs from the image embeddings produced by CLIP, following the HPL pipeline (i.e., Leiden clustering). A minimal sketch of this flow is shown below.
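For illustration, here is a minimal sketch of the caption-and-embed step using open_clip. The `build_caption` helper, the caption wording, and the file name are hypothetical; the actual pipeline fine-tunes CLIP on batches of image-text pairs before extracting embeddings.

```python
import open_clip
import torch
from PIL import Image

# Load the ViT-B-32 architecture used in HPL-CLIP. Pretrained OpenAI
# weights are a common starting point before fine-tuning on our pairs.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="openai"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

def build_caption(age, gender, smoking_status):
    # Hypothetical helper: the actual caption template may differ.
    return (f"Lung tissue tile from a {age}-year-old {gender} patient "
            f"({smoking_status}).")

# Embed one image-text pair. Training would instead optimize the
# standard CLIP contrastive loss over batches of such pairs.
image = preprocess(Image.open("tile.png")).unsqueeze(0)  # hypothetical tile
text = tokenizer([build_caption(67, "female", "former smoker")])

with torch.no_grad():
    image_features = model.encode_image(image)  # CLIP tile embedding
    text_features = model.encode_text(text)

# Downstream, the HPL pipeline clusters the image embeddings into HPCs
# via Leiden clustering (e.g., with scanpy: sc.pp.neighbors + sc.tl.leiden).
```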

### HPL-VICReg

Point person: Jennifer Motter

Paper: https://arxiv.org/pdf/2105.04906

We changed the self-supervised learning (SSL) method of HPL from Barlow Twins to Variance-Invariance-Covariance Regularization (VICReg).
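For reference, a minimal sketch of the VICReg objective in PyTorch, following Bardes et al. (2021); the coefficient values are the paper's defaults, and the HPL integration may use different settings:

```python
import torch
import torch.nn.functional as F

def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """VICReg loss for two (N, D) batches of embeddings from two views."""
    n, d = z_a.shape

    # Invariance: mean-squared error between the two views' embeddings.
    sim_loss = F.mse_loss(z_a, z_b)

    # Variance: hinge loss keeping each dimension's std above 1,
    # which prevents representational collapse.
    std_a = torch.sqrt(z_a.var(dim=0) + eps)
    std_b = torch.sqrt(z_b.var(dim=0) + eps)
    var_loss = torch.mean(F.relu(1.0 - std_a)) + torch.mean(F.relu(1.0 - std_b))

    # Covariance: penalize off-diagonal covariance to decorrelate dimensions.
    z_a = z_a - z_a.mean(dim=0)
    z_b = z_b - z_b.mean(dim=0)
    cov_a = (z_a.T @ z_a) / (n - 1)
    cov_b = (z_b.T @ z_b) / (n - 1)

    def off_diag(m):
        return m - torch.diag(torch.diag(m))

    cov_loss = off_diag(cov_a).pow(2).sum() / d + off_diag(cov_b).pow(2).sum() / d

    return sim_w * sim_loss + var_w * var_loss + cov_w * cov_loss
```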

### HPL-ViT

Point person: Alyssa Pradhan

Paper: https://arxiv.org/pdf/2010.11929

We replaced the convolutional neural network (CNN) backbone of HPL with a vision transformer (ViT). A sketch of the swap is below.
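As a minimal sketch, a ViT backbone can be instantiated with the timm library; the model name and use of timm are assumptions for illustration, and the actual integration may construct the ViT directly inside the HPL codebase:

```python
import timm
import torch

# Swap the CNN encoder for a ViT backbone. num_classes=0 makes timm
# return the pooled embedding instead of classification logits.
vit = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=0)

tiles = torch.randn(8, 3, 224, 224)   # a batch of 224x224 RGB tiles
embeddings = vit(tiles)               # (8, 768) tile representations for HPL
```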
