Amara_Bello
Pathology AI Lead · Lagos University Hospital
Apr 2026
Both Hiroshi and Sophia have covered the technical side well. I want to flag something about stain variability that trips up every team deploying in a new lab environment: H&E staining varies significantly between labs, reagent batches, staining protocols, and slide scanner vendors (the Aperio GT450, Hamamatsu NanoZoomer, and Aperio VERSA all produce visibly different output even from the same tissue block). Foundation models like UNI are more robust to this than older supervised models, but they are not immune.

Stain normalization, specifically Macenko normalization or the more recent deep-learning-based StainNet, is still worth applying as a preprocessing step when deploying to a new site that wasn't represented in your training data. The staintools Python library implements both Macenko and Vahadane normalization and is straightforward to apply before feeding patches into any model.

I've seen AUC drop by 8–12 points when deploying without stain normalization to a site using a different scanner and staining protocol. It's not a theoretical problem.
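To make the Macenko step concrete, here is a minimal numpy sketch of the algorithm (estimate the two stain directions from the optical-density distribution, unmix into concentrations, re-express in a target stain basis). In practice you would just call staintools' `StainNormalizer(method='macenko')`, which also handles brightness standardization; the function names, the 1%/99% angle percentiles, and the OD threshold of 0.15 below are my choices, not staintools internals.

```python
import numpy as np

def macenko_stain_matrix(img, beta=0.15, alpha=1.0):
    """Estimate a 2x3 H&E stain matrix from an RGB uint8 patch."""
    od = -np.log((img.reshape(-1, 3).astype(float) + 1.0) / 256.0)
    od = od[np.all(od > beta, axis=1)]                # drop near-transparent pixels
    _, eigvecs = np.linalg.eigh(np.cov(od.T))         # eigenvalues ascending
    plane = eigvecs[:, 1:]                            # plane of the two largest
    plane = plane * np.sign(od.mean(axis=0) @ plane)  # orient toward the data
    proj = od @ plane
    angles = np.arctan2(proj[:, 1], proj[:, 0])
    lo, hi = np.percentile(angles, [alpha, 100.0 - alpha])
    v1 = plane @ np.array([np.cos(lo), np.sin(lo)])
    v2 = plane @ np.array([np.cos(hi), np.sin(hi)])
    stains = np.array([v1, v2] if v1[0] > v2[0] else [v2, v1])  # haematoxylin first
    return stains / np.linalg.norm(stains, axis=1, keepdims=True)

def macenko_normalize(img, target_stains, target_max_conc):
    """Re-express img in the target's stain basis and concentration scale."""
    h, w, _ = img.shape
    od = -np.log((img.reshape(-1, 3).astype(float) + 1.0) / 256.0)
    src = macenko_stain_matrix(img)
    conc, *_ = np.linalg.lstsq(src.T, od.T, rcond=None)      # (2, n_pixels)
    scale = target_max_conc / np.percentile(conc, 99, axis=1)
    od_norm = (conc * scale[:, None]).T @ target_stains
    out = 256.0 * np.exp(-od_norm) - 1.0
    return np.clip(out, 0, 255).reshape(h, w, 3).astype(np.uint8)
```

You fit `target_stains` and `target_max_conc` once from a reference patch chosen at the training site, then apply `macenko_normalize` to every patch from the new site before inference.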
Hiroshi_Nakamura ✓ Pathologist
Digital Pathology · NCC Japan
Mar 2026
Whole slide image (WSI) analysis in 2026 has been transformed by two things: foundation models trained on millions of pathology patches, and the maturation of the TCGA + CPTAC datasets as public training resources.

The models worth knowing are UNI (from the Mahmood Lab at Harvard, trained on 100,000+ slides), CONCH (also Mahmood Lab, a vision-language model for pathology), and PLIP (from Huang et al., a CLIP model fine-tuned on pathology image-text pairs from Twitter of all places, but it works remarkably well). For tissue segmentation and cell detection, HoVer-Net remains the most cited open-source model for simultaneous nuclear segmentation and classification in H&E images; pretrained weights for the PanNuke and CoNSeP datasets are available in the original GitHub repo.

The practical compute reality: WSIs are on the order of 40,000 × 40,000 pixels at 40x magnification, so all inference runs on patches (typically 256×256 at 20x), aggregated via multiple instance learning (MIL). QuPath, the open-source pathology image analysis platform at qupath.github.io, is the best environment for integrating these models into a pathologist's review workflow.
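For anyone new to the MIL step: the slide is treated as a "bag" of patch embeddings, and an attention layer learns which patches matter for the slide-level label. A minimal numpy sketch of attention-based MIL pooling (in the style of Ilse et al.'s ABMIL, which most WSI pipelines build on) with random stand-in weights; the patch count and the 1024-d embedding size (UNI's ViT-L output) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A slide as a bag of patch embeddings; 500 random 1024-d vectors
# stand in for UNI features extracted from 256x256 patches at 20x.
n_patches, d, d_attn = 500, 1024, 128
patch_feats = rng.normal(size=(n_patches, d))

# Stand-ins for learned parameters: attention projection and scoring vector.
V = rng.normal(size=(d, d_attn)) / np.sqrt(d)
w = rng.normal(size=(d_attn, 1)) / np.sqrt(d_attn)

scores = np.tanh(patch_feats @ V) @ w          # (n_patches, 1) patch relevance
attn = np.exp(scores - scores.max())
attn /= attn.sum()                             # softmax over the bag
slide_feat = (attn * patch_feats).sum(axis=0)  # (1024,) slide-level embedding
```

The attention weights double as an interpretability output: rendering them as a heatmap over the slide shows the pathologist which regions drove the prediction, which is exactly the kind of overlay QuPath is good at displaying.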