Less is More: Selective Reduction of CT Data for Self-Supervised Pre-Training of Deep Learning Models with Contrastive Learning Improves Downstream Classification Performance
Subjects: cs.CL, cs.AI, cs.LG
Release Date: October 18, 2024
Authors: Daniel Wolf¹, Tristan Payer¹, Catharina Silvia Lisson², Christoph Gerhard Lisson², Meinrad Beer², Michael Götz, Timo Ropinski¹
Affiliations: ¹ Visual Computing Research Group, Institute of Media Informatics, Ulm University; ² Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center
Downstream classification results (AUC and F1, reported as mean ± deviation) for models pre-trained on the full dataset (ALL) versus reduced subsets (EveryN 20 %, EveryN 10 %):

| Pre-Training Dataset | Method | COVID-19 AUC | COVID-19 F1 | OrgMNIST AUC | OrgMNIST F1 | Brain AUC | Brain F1 |
|---|---|---|---|---|---|---|---|
| PET-CT | ALL | 0.775 ± 0.009 | 0.719 ± 0.010 | 0.968 ± 0.003 | 0.752 ± 0.003 | 0.727 ± 0.042 | 0.534 ± 0.073 |
| PET-CT | EveryN 20 % | 0.801 ± 0.006 | 0.735 ± 0.009 | 0.972 ± 0.003 | 0.782 ± 0.003 | 0.781 ± 0.035 | 0.665 ± 0.070 |
| PET-CT | EveryN 10 % | 0.810 ± 0.007 | 0.740 ± 0.016 | 0.973 ± 0.002 | 0.793 ± 0.002 | 0.798 ± 0.031 | 0.674 ± 0.074 |
| LIDC | ALL | 0.807 ± 0.006 | 0.744 ± 0.013 | 0.972 ± 0.003 | 0.769 ± 0.003 | 0.734 ± 0.046 | 0.609 ± 0.072 |
| LIDC | EveryN 20 % | 0.810 ± 0.004 | 0.751 ± 0.010 | 0.977 ± 0.005 | 0.792 ± 0.003 | 0.739 ± 0.044 | 0.610 ± 0.046 |
| LIDC | EveryN 10 % | 0.812 ± 0.006 | 0.756 ± 0.010 | 0.979 ± 0.002 | 0.800 ± 0.003 | 0.740 ± 0.041 | 0.614 ± 0.046 |
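The "EveryN" rows refer to the paper's selective data-reduction strategy for pre-training; the exact selection rule is not spelled out in this excerpt, so the following is only a minimal sketch, assuming "EveryN" means keeping every N-th axial slice of each CT volume (every 5th slice for roughly 20 %, every 10th slice for roughly 10 %). The function and variable names here are hypothetical, not taken from the authors' code.

```python
import numpy as np

def reduce_volume_every_n(volume: np.ndarray, keep_fraction: float) -> np.ndarray:
    """Keep every N-th axial slice of a CT volume (hypothetical 'EveryN'-style reduction).

    keep_fraction = 0.2 keeps every 5th slice (~20 % of the slices),
    keep_fraction = 0.1 keeps every 10th slice (~10 % of the slices).
    Assumes the slice axis is the first dimension: (depth, height, width).
    """
    step = max(1, round(1.0 / keep_fraction))
    return volume[::step]

# Usage on a dummy volume with 300 axial slices.
ct_volume = np.zeros((300, 512, 512), dtype=np.float32)
subset_20 = reduce_volume_every_n(ct_volume, 0.2)  # ~60 slices
subset_10 = reduce_volume_every_n(ct_volume, 0.1)  # ~30 slices
print(subset_20.shape, subset_10.shape)  # (60, 512, 512) (30, 512, 512)
```

Under this reading, the reduced slice subsets would replace the full slice collection during contrastive self-supervised pre-training; as the table shows, pre-training on the reduced data matches or improves downstream AUC and F1 on all three tasks for both the PET-CT and LIDC pre-training datasets.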