From a $2 Slide to a $2,000 Scan: How Microsoft’s AI Is Rewriting Cancer Diagnostics
Summary
Microsoft’s new AI can look at a cheap, routine cancer slide and see what only a $2,000 immune scan could reveal — and it just proved it on 14,000 patients across 51 hospitals. The era of affordable, deep tumor profiling may have just arrived.
A team from Microsoft Research, the University of Washington, and Providence Health has built an AI system called GigaTIME that can look at a standard, low-cost cancer tissue slide and predict what an expensive, high-tech immune scan would show. The study, published in Cell in January 2026, covered more than 14,000 patients across 24 cancer types, making it the largest effort of its kind.
The problem is straightforward. Doctors want to understand how a patient’s immune system interacts with their tumor, because that interaction determines whether treatments like immunotherapy will work. The gold-standard tool for this, multiplex immunofluorescence, lights up 21 different immune proteins on a single tissue sample. But it costs too much and takes too long to use at scale. GigaTIME learned from 40 million cells for which both the cheap slide and the expensive scan were available side by side, then applied that knowledge to predict the expensive scan from the cheap one alone.
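The paired-training idea can be sketched in a few lines. This is a toy stand-in, not the actual GigaTIME architecture: it uses random synthetic "cells," a simple linear map in place of a deep network, and made-up feature counts, purely to show how a model fit on paired cheap/expensive measurements can then predict the expensive readout from the cheap one alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired data: each "cell" has morphology features from the
# cheap H&E slide, and 21 marker intensities from the expensive
# immunofluorescence scan of the same tissue.
n_cells, n_features, n_markers = 5000, 16, 21
X = rng.normal(size=(n_cells, n_features))           # cheap-slide features
W_true = rng.normal(size=(n_features, n_markers))    # hidden slide->marker link
Y = X @ W_true + 0.1 * rng.normal(size=(n_cells, n_markers))  # expensive scan

# Fit on cells where both measurements exist...
W_hat, *_ = np.linalg.lstsq(X[:4000], Y[:4000], rcond=None)

# ...then predict the expensive scan from the cheap slide alone.
Y_pred = X[4000:] @ W_hat
corr = np.corrcoef(Y_pred.ravel(), Y[4000:].ravel())[0, 1]
print(round(corr, 2))  # high held-out correlation on this toy data
```

The real system works on gigapixel pathology images rather than feature vectors, but the supervision signal is the same: paired observations of the cheap and expensive modalities.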
Across 51 hospitals in seven U.S. states, the system generated nearly 300,000 virtual immune scans and found over 1,200 meaningful links between immune proteins and clinical markers like tumor mutations and treatment-relevant biomarkers. A combined score from all 21 proteins predicted patient survival better than any single protein, suggesting that the immune response works as a coordinated system rather than through isolated signals.
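The "coordinated system" claim can be illustrated with a small simulation. This is hypothetical data, not the study's: when an outcome depends on many markers at once, a score fit across all of them tracks the outcome better than the best single marker does.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cohort: 21 marker levels per patient, and a risk outcome
# to which every marker contributes a little.
n_patients, n_markers = 2000, 21
markers = rng.normal(size=(n_patients, n_markers))
weights = rng.uniform(0.5, 1.0, size=n_markers)       # all markers contribute
risk = markers @ weights + rng.normal(scale=2.0, size=n_patients)

def corr(a, b):
    """Absolute Pearson correlation between two 1-D arrays."""
    return abs(np.corrcoef(a, b)[0, 1])

# Fit a combined score on the first half, evaluate everything on the second.
w_hat, *_ = np.linalg.lstsq(markers[:1000], risk[:1000], rcond=None)
test_m, test_r = markers[1000:], risk[1000:]

best_single = max(corr(test_m[:, j], test_r) for j in range(n_markers))
combined = corr(test_m @ w_hat, test_r)
print(round(best_single, 2), round(combined, 2))  # combined score wins
```

Each individual marker explains only a sliver of the variance, so aggregating all 21 captures the coordinated signal that no single protein carries on its own.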
When tested independently on 10,200 patients from the Cancer Genome Atlas (TCGA), results tracked closely with the original findings, reaching a 0.88 correlation. The real-world Providence dataset still surfaced about a third more significant associations than TCGA, suggesting that diverse hospital data can outperform curated research collections for clinical insight.
Not every protein translated equally well. Nuclear markers were highly accurate, while surface and cytoplasmic proteins proved harder to infer from tissue shape alone, a reminder that AI cannot extract signals that morphology does not encode.
Microsoft has released the model and training data publicly, with plans to add cell-level interaction modeling in future versions.



