
AI pathology: whole slide image analysis tools worth using in 2026

Last updated 7 hours ago
Amara_Bello
Pathology AI Lead · Lagos University Hospital
Apr 2026
Both Hiroshi and Sophia have covered the technical side well. I want to flag something about stain variability that trips up every team deploying in a new lab environment: H&E staining varies significantly between labs, reagent batches, staining protocols, and slide scanner vendors (an Aperio GT450, a Hamamatsu NanoZoomer, and a Leica Aperio VERSA all produce visually different outputs even from the same tissue block). Foundation models like UNI are more robust to this than older supervised models, but not immune. Stain normalization (specifically Macenko normalization, or the more recent deep-learning-based StainNet) is still worth applying as a preprocessing step when deploying to a new site that wasn't represented in your training data. The staintools Python library implements Macenko and Vahadane normalization and is straightforward to apply before feeding patches into any model. I've seen performance drop by 8–12 percentage points on AUC when deploying without stain normalization to a site using a different scanner/staining protocol; it's not a theoretical problem.
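For intuition about what Macenko normalization actually does, here is a minimal numpy-only sketch of the core idea: convert RGB to optical density, estimate the two dominant stain vectors from the extremes of the projection onto the top two eigenvectors, then remap source concentrations onto a target stain basis. Function names, parameter defaults, and the percentile choices here are my own simplification, not the staintools API — in production just use staintools, which handles the many edge cases this sketch ignores.

```python
import numpy as np

def macenko_stain_matrix(img, beta=0.15, alpha=1.0):
    """Estimate a 3x2 H&E stain matrix from an RGB uint8 patch (Macenko-style sketch)."""
    od = -np.log10(np.maximum(img.reshape(-1, 3).astype(float), 1) / 255.0)
    od = od[(od > beta).any(axis=1)]              # drop near-white background pixels
    # Project OD onto the plane of the two leading eigenvectors of its covariance
    _, eigvecs = np.linalg.eigh(np.cov(od.T))     # eigh: ascending eigenvalues
    plane = od @ eigvecs[:, 1:3]
    phi = np.arctan2(plane[:, 1], plane[:, 0])
    # Robust extremes of the angle distribution give the two stain directions
    v_min = eigvecs[:, 1:3] @ np.array([np.cos(np.percentile(phi, alpha)),
                                        np.sin(np.percentile(phi, alpha))])
    v_max = eigvecs[:, 1:3] @ np.array([np.cos(np.percentile(phi, 100 - alpha)),
                                        np.sin(np.percentile(phi, 100 - alpha))])
    stains = np.column_stack([v_min, v_max])
    return stains * np.sign(stains.sum(axis=0))   # force positive stain vectors

def normalize_patch(src, stain_src, stain_tgt, max_c_tgt):
    """Remap src stain concentrations onto a target stain basis."""
    od = -np.log10(np.maximum(src.reshape(-1, 3).astype(float), 1) / 255.0)
    conc, *_ = np.linalg.lstsq(stain_src, od.T, rcond=None)   # (2, N) concentrations
    conc *= (max_c_tgt / np.percentile(conc, 99, axis=1))[:, None]
    out = 255.0 * 10 ** (-(stain_tgt @ conc).T)
    return np.clip(out, 0, 255).astype(np.uint8).reshape(src.shape)
```

In a real deployment you would fit `stain_tgt` and `max_c_tgt` once on a reference slide from your training site, then apply `normalize_patch` to every patch from the new site before inference.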
Sophia_Chen
Computational Pathology · MSKCC
Mar 2026
Adding the practical workflow code because Hiroshi's description is accurate but the implementation gap is real — processing a single WSI correctly takes more than most tutorials show. Here's a working patch extraction pipeline using OpenSlide that handles magnification levels properly:

# Python - WSI patch extraction with OpenSlide
import openslide
import numpy as np

wsi = openslide.OpenSlide("tumor_slide.svs")

# Scan objective power (fall back to 40x if the property is missing)
obj_power = int(wsi.properties.get(
    openslide.PROPERTY_NAME_OBJECTIVE_POWER, 40))
target_mag = 20
downsample = obj_power / target_mag

# Find the closest level in the pyramid and its ACTUAL downsample factor,
# which may differ from the requested one if there's no exact 2x level
level = wsi.get_best_level_for_downsample(downsample)
level_downsample = wsi.level_downsamples[level]

patch_size = 256
patches = []
W, H = wsi.dimensions  # full-resolution (level 0) dimensions

# read_region takes level-0 coordinates, so step by the patch's footprint
# at level 0: patch_size scaled by the level's actual downsample factor
step = int(patch_size * level_downsample)
for y in range(0, H, step):
    for x in range(0, W, step):
        patch = wsi.read_region(
            (x, y), level,
            (patch_size, patch_size)
        ).convert("RGB")
        arr = np.array(patch)
        # Tissue detection: skip near-white background
        if arr.mean() < 230:
            patches.append((x, y, arr))

print(f"Extracted {len(patches)} tissue patches")
The tissue detection step (skipping near-white background) is critical — a typical 40x lung adenocarcinoma slide has ~60–70% background and without filtering you're wasting GPU compute on glass. More sophisticated tissue detection uses Otsu thresholding on the thumbnail, which is what most production pipelines use. The CLAM framework from the Mahmood Lab has a complete tissue detection + feature extraction + MIL aggregation pipeline and is the current community standard for WSI classification.
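Since Otsu thresholding came up: it's only a few lines to implement on the thumbnail, which is cheap because the thumbnail is tiny compared to the WSI. Here's a self-contained numpy sketch — note that CLAM and similar pipelines actually threshold the saturation channel in HSV after blurring, not plain grayscale as I do here, so treat this as illustrative rather than a drop-in replacement.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold for a uint8 grayscale array: maximize between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    w0 = np.cumsum(hist)                       # pixel count at or below each level
    w1 = w0[-1] - w0                           # pixel count above each level
    cum_mean = np.cumsum(hist * np.arange(256))
    m0 = cum_mean / np.maximum(w0, 1)          # mean intensity of the lower class
    m1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1)
    between = w0 * w1 * (m0 - m1) ** 2         # between-class variance at each cut
    return int(np.argmax(between))

def tissue_mask(thumb_gray):
    """Tissue is darker than the bright glass background, so keep pixels <= threshold."""
    return thumb_gray <= otsu_threshold(thumb_gray)
```

You'd compute this mask once on `wsi.get_thumbnail(...)`, then only extract the full-resolution patches whose thumbnail coordinates fall inside the mask — that's where the GPU savings over the per-patch mean check come from.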
Hiroshi_Nakamura ✓ Pathologist
Digital Pathology · NCC Japan
Mar 2026
Whole slide image (WSI) analysis in 2026 has been transformed by two things: gigapixel-scale foundation models trained on millions of pathology patches, and the maturation of the TCGA + CPTAC datasets as public training resources. The models worth knowing are UNI (from the Mahmood Lab at Harvard, trained on 100,000+ slides), CONCH (also Mahmood Lab, a vision-language model for pathology), and PLIP (from Huang et al., fine-tuned CLIP on pathology image-text pairs from Twitter of all places, but it works remarkably well). For tissue segmentation and cell detection, HoVer-Net remains the most cited open-source model for simultaneous nuclear segmentation and classification in H&E images — the pretrained weights on PanNuke and CoNSeP datasets are available on the original GitHub repo. The practical compute reality: WSIs are 40,000 × 40,000 pixels at 40x magnification, so all inference runs on patches (typically 256×256 at 20x), aggregated via multiple instance learning (MIL). QuPath, the open-source pathology image analysis platform at qupath.github.io, is the best environment for integrating these models into a pathologist's review workflow.
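To make the MIL aggregation step concrete: after patch features are extracted, a slide-level prediction needs one vector per slide, and the standard trick is attention-weighted pooling over patches (Ilse et al.'s attention-based MIL, which CLAM builds on). A minimal numpy sketch, with randomly initialized weights standing in for what would be learned parameters:

```python
import numpy as np

def attention_mil_pool(feats, W_att, w_score):
    """Pool (N, D) patch embeddings into one (D,) slide embedding.
    W_att (D, H) and w_score (H,) would be learned; random here for illustration."""
    h = np.tanh(feats @ W_att)        # (N, H) hidden representation per patch
    scores = h @ w_score              # (N,) unnormalized attention per patch
    a = np.exp(scores - scores.max())
    a /= a.sum()                      # softmax: attention weights sum to 1
    return a @ feats                  # attention-weighted average of patch features

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 1024))  # e.g. 500 patches, 1024-dim embeddings
slide_vec = attention_mil_pool(feats, rng.normal(size=(1024, 64)),
                               rng.normal(size=64))
```

The attention weights are also what give you interpretability: sorting patches by their weight shows the pathologist which tissue regions drove the slide-level call, which is exactly the heatmap view QuPath-style review workflows surface.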