ISBI 2026

LungEvaty: A Scalable, Open-Source Transformer-Based Deep Learning Model for Lung Cancer Risk Prediction in LDCT Screening

Johannes Brandt*, Maulik Chevli*, Rickmer Braren, Georgios Kaissis, Philip Müller, Daniel Rueckert

Technical University of Munich (TUM)  ·  Imperial College London  ·  MCML
* Shared first authorship   † Equal supervision

Abstract

Lung cancer risk estimation is gaining importance as more countries introduce population-wide screening programs using low-dose CT (LDCT). As imaging volumes grow, scalable methods that can process entire lung volumes efficiently are essential. Existing approaches either over-rely on pixel-level annotations, limiting scalability, or analyze the lung in fragments, weakening performance. We present LungEvaty, a fully Transformer-based framework for predicting 1–6 year lung cancer risk from a single LDCT scan. The model operates on whole-lung inputs, learning directly from large-scale screening data to capture comprehensive anatomical and pathological cues relevant to malignancy risk. Using only imaging data and no region supervision, LungEvaty matches state-of-the-art performance, and can be further refined by our optional Anatomically Informed Attention Guidance (AIAG) loss, which encourages anatomically focused attention. In total, LungEvaty was trained on more than 90,000 CT scans, including over 28,000 for fine-tuning and 6,000 for evaluation. Our framework offers a simple, data-efficient, and fully open-source solution that provides an extensible foundation for future research in longitudinal and multimodal lung cancer risk prediction.

Key Contributions

🏗️ Whole-Lung Transformer
Fully Transformer-based architecture processing entire lung volumes — no fragmentation, full anatomical context.

🎯 AIAG Loss
Optional Anatomically Informed Attention Guidance — benefits from expert annotations when available, strong without them.

📈 State-of-the-Art
Matches or exceeds SOTA with single modality, single task, and 60% of prior pretraining data volume.

🔓 Fully Open Source
Code and weights publicly available — enabling reproducible research and community-driven extensions.

Architecture

[Figure: LungEvaty architecture]

LungEvaty* architecture. A Primus (EVA-02) Transformer encoder, pretrained via masked autoencoding on 91K NLST scans, processes the whole lung. A CLS token captures global context, a max-pooled token summarizes encoder outputs, and a learnable MHA query token attends to local features. All three are concatenated and fed into a Cumulative Hazard Layer for 1–6 year risk prediction. The optional AIAG loss aligns attention weights with expert annotations when available.
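The exact form of the AIAG loss is not spelled out on this page. As a minimal illustrative sketch (the function name, the binary patch mask, and the precise loss form below are our assumptions, not the released implementation), one common way to guide attention toward annotated regions is to penalize the attention mass that falls outside expert-annotated patch tokens:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def aiag_loss(attn_logits, annot_mask):
    """Illustrative attention-guidance loss: negative log of the total
    attention mass that lands on expert-annotated patch tokens."""
    attn = softmax(attn_logits)
    mass = sum(a for a, inside in zip(attn, annot_mask) if inside)
    return -math.log(mass + 1e-8)

# Attention concentrated on the annotated patch is penalized less
# than uniform attention spread over the whole lung.
focused = aiag_loss([5.0, 0.0, 0.0], [1, 0, 0])
uniform = aiag_loss([0.0, 0.0, 0.0], [1, 0, 0])
```

Because the loss is zero when all attention mass lies inside the annotated region, it leaves the model unconstrained when no annotations are available — consistent with the "optional" role described above.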

Results

ROC-AUC for years 1–6 (Y1–Y6) and C-Index on the Sybil and M3FM test splits. Training regimes: SM-ST = single-modality, single-task; MM-MT = multi-modality, multi-task. Bold = best, italic = second best per column within each split.

| Model | Training | Y1 | Y2 | Y3 | Y4 | Y5 | Y6 | C-Idx |
|---|---|---|---|---|---|---|---|---|
| Sybil test split | | | | | | | | |
| Sybil | SM-ST | 0.927 | 0.844 | 0.792 | 0.769 | 0.755 | 0.750 | 0.749 |
| Sybil 1.5 | SM-ST | *0.944* | 0.883 | 0.831 | 0.803 | 0.779 | 0.774 | 0.772 |
| M3FM | MM-MT | 0.892 | 0.845 | 0.803 | 0.782 | 0.770 | 0.769 | 0.762 |
| LungEvaty (w/ AIAG) | SM-ST | 0.928 | *0.893* | *0.848* | 0.819 | 0.808 | 0.805 | 0.800 |
| LungEvaty (no annot.) | SM-ST | 0.923 | 0.887 | 0.844 | *0.820* | *0.809* | **0.809** | *0.802* |
| LungEvaty* (w/ AIAG) | SM-ST | **0.950** | **0.905** | **0.859** | **0.825** | **0.811** | *0.806* | **0.803** |
| M3FM test split | | | | | | | | |
| Sybil | SM-ST | *0.943* | 0.880 | 0.847 | 0.850 | 0.849 | 0.847 | 0.844 |
| M3FM Huge | MM-MT | 0.940 | 0.888 | 0.860 | 0.860 | 0.839 | 0.823 | – |
| LungEvaty (w/ AIAG) | SM-ST | **0.947** | **0.912** | **0.898** | **0.884** | *0.877* | *0.876* | *0.872* |
| LungEvaty (no annot.) | SM-ST | 0.939 | *0.901* | *0.891* | *0.883* | **0.878** | **0.878** | **0.873** |

PR-AUC on the Sybil test split — a more clinically relevant metric for imbalanced screening data.

| Model | Y1 | Y2 | Y3 | Y4 | Y5 | Y6 |
|---|---|---|---|---|---|---|
| Sybil | 0.357 | 0.285 | 0.251 | 0.235 | 0.244 | 0.292 |
| Sybil 1.5 | 0.374 | 0.325 | 0.292 | 0.272 | 0.270 | 0.317 |
| M3FM | 0.212 | 0.179 | 0.162 | 0.151 | 0.160 | 0.220 |
| LungEvaty | 0.444 | 0.364 | 0.323 | 0.297 | 0.299 | 0.361 |
| LungEvaty* | 0.451 | 0.358 | 0.314 | 0.287 | 0.290 | 0.347 |
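PR-AUC is the area under the precision–recall curve; with rare positives, as in screening-level cancer prevalence, it tracks how well the few true cancers are ranked above the many negatives, where ROC-AUC can look deceptively high. A minimal sketch of the standard average-precision estimator commonly used for PR-AUC (the function name is ours, not from the paper):

```python
def average_precision(y_true, scores):
    """PR-AUC via the average-precision estimator: the mean of the
    precision values at the rank of each true positive."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if y_true[i]:
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / max(tp, 1)

# Both positives ranked at the top -> perfect score of 1.0.
perfect = average_precision([0, 1, 1, 0], [0.1, 0.9, 0.8, 0.2])
# The single positive ranked last of four -> precision 1/4 at rank 4.
poor = average_precision([1, 0, 0, 0], [0.2, 0.9, 0.8, 0.7])
```

Unlike ROC-AUC, this estimator's baseline equals the positive prevalence rather than 0.5, which is why it separates models more sharply on imbalanced screening cohorts.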

* LungEvaty* is not reported in the paper. The only architectural change is the addition of a max-pooled token, concatenated alongside the CLS token and attention-pooled token before the Cumulative Hazard Layer.
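The note above describes how the CLS, max-pooled, and attention-pooled tokens are concatenated before the Cumulative Hazard Layer. A pure-Python sketch of that head, under our own assumptions about the hazard parameterization (non-negative softplus increments accumulated over years, mapped to probabilities via 1 − exp(−H); the released code is the reference for the actual details):

```python
import math
import random

def softplus(x):
    return math.log1p(math.exp(x))

def attention_pool(tokens, query):
    """Learnable-query attention pooling: softmax over query/token dot products."""
    scores = [sum(q * t for q, t in zip(query, tok)) for tok in tokens]
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    d = len(tokens[0])
    return [sum(w[i] / z * tokens[i][j] for i in range(len(tokens))) for j in range(d)]

def cumulative_risks(cls_tok, tokens, query, W, b):
    """Concatenate CLS, max-pooled, and attention-pooled tokens, then map the
    result to monotone 1-6 year risks via accumulated hazard increments."""
    max_tok = [max(tok[j] for tok in tokens) for j in range(len(tokens[0]))]
    attn_tok = attention_pool(tokens, query)
    feat = cls_tok + max_tok + attn_tok
    logits = [sum(w * f for w, f in zip(row, feat)) + bk for row, bk in zip(W, b)]
    hazard, risks = 0.0, []
    for lg in logits:
        hazard += softplus(lg)                 # non-negative increment per year
        risks.append(1.0 - math.exp(-hazard))  # risk never decreases with horizon
    return risks

# Toy run with random encoder outputs: 8 patch tokens of dimension 4.
random.seed(0)
d, n = 4, 8
tokens = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
cls_tok = [random.gauss(0, 1) for _ in range(d)]
query = [random.gauss(0, 1) for _ in range(d)]
W = [[random.gauss(0, 0.1) for _ in range(3 * d)] for _ in range(6)]
b = [0.0] * 6
risks = cumulative_risks(cls_tok, tokens, query, W, b)
```

The cumulative construction guarantees the clinically sensible property that predicted 6-year risk can never be lower than predicted 1-year risk, regardless of the learned weights.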

Citation

@article{brandt2025lungevaty,
  title={LungEvaty: A Scalable, Open-Source Transformer-Based
         Deep Learning Model for Lung Cancer Risk Prediction
         in LDCT Screening},
  author={Brandt, Johannes and Chevli, Maulik and Braren, Rickmer
          and Kaissis, Georgios and M{\"u}ller, Philip
          and Rueckert, Daniel},
  journal={arXiv preprint arXiv:2511.20116},
  year={2025}
}