A Reality Check of Vision-Language Pre-training in Radiology: Have We Progressed Using Text?
- Code: DLILP
- Paper: IPMI 2025 - ArXiv
- Docs: Documentation
- Tutorial: Notebook
About "CONVIRT" weights:
- Pre-trained using a vanilla CLIP contrastive loss, i.e., a pre-training strategy very similar to the one earlier proposed in the CONVIRT paper (2020).
- Pre-trained on MIMIC.
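For reference, the symmetric (CLIP-style) contrastive objective mentioned above can be sketched as follows. This is a minimal NumPy illustration of the loss, not the repository's actual implementation; the function name, shapes, and temperature value are illustrative assumptions.

```python
# Minimal sketch of the symmetric CLIP-style contrastive loss used for
# CONVIRT-like vision-language pre-training (illustrative only; not the
# repository's actual code or API).
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings."""
    # L2-normalize both sets of embeddings.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature   # (batch, batch) similarity matrix
    labels = np.arange(len(logits))      # matching pairs lie on the diagonal

    def xent(l):
        # Row-wise cross-entropy against the diagonal targets,
        # with the usual max-subtraction for numerical stability.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average of image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
loss = clip_contrastive_loss(rng.normal(size=(8, 128)),
                             rng.normal(size=(8, 128)))
```

With random (unaligned) embeddings, the loss stays near the chance level of ln(batch size); pre-training drives it down by aligning each image with its paired report.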
If you find this repository useful, please consider citing this paper:
@inproceedings{convirt,
  author    = {Yuhao Zhang and others},
  booktitle = {Machine Learning for Healthcare Conference (MLHC)},
  pages     = {1-24},
  title     = {Contrastive Learning of Medical Visual Representations from Paired Images and Text},
  year      = {2022},
}
@inproceedings{dlilp,
  author    = {Julio Silva-Rodríguez and Jose Dolz and Ismail {Ben Ayed}},
  booktitle = {Information Processing in Medical Imaging (IPMI)},
  title     = {A Reality Check of Vision-Language Pre-training in Radiology: Have We Progressed Using Text?},
  year      = {2025},
}