CorpusCallosum.segmentation.inference
- CorpusCallosum.segmentation.inference.load_model(device=None)[source]
Load trained model from checkpoint.
- Parameters:
- device
torch.device or None, optional Device to load model to, by default None. If None, uses CUDA if available, else CPU.
- Returns:
FastSurferVINN Loaded and initialized model in evaluation mode.
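A minimal usage sketch. Requesting the device explicitly is optional; passing device=None falls back to CUDA if available, else CPU, as documented above.

```python
import torch

from CorpusCallosum.segmentation.inference import load_model

# Select a device explicitly; device=None would fall back to CUDA if available, else CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = load_model(device=device)  # FastSurferVINN in evaluation mode
```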
- CorpusCallosum.segmentation.inference.load_validation_data(path)[source]
Load validation data from CSV file and compute label widths.
Reads a CSV file containing image paths, label paths, and AC/PC coordinates, then computes the width (number of slices with non-zero labels) for each label file.
- Parameters:
- path
Path to the CSV file containing image paths, label paths, and AC/PC coordinates.
- Returns:
- images
npt.NDArray[str] Array of image file paths.
- ac_centers
npt.NDArray[float] Array of anterior commissure coordinates (x, y, z).
- pc_centers
npt.NDArray[float] Array of posterior commissure coordinates (x, y, z).
- label_widths
Iterator[int] Iterator yielding the number of slices with non-zero labels for each label file.
- labels
npt.NDArray[str] Array of label file paths.
- subj_ids
list[str] List of subject IDs (from CSV index).
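A hedged sketch of iterating the validation set. "validation.csv" is a placeholder path, and the return values are assumed to come back as a tuple in the documented order (images, ac_centers, pc_centers, label_widths, labels, subj_ids).

```python
from CorpusCallosum.segmentation.inference import load_validation_data

# "validation.csv" is a placeholder; the CSV must hold image paths, label paths,
# and AC/PC coordinates as described above.
images, ac_centers, pc_centers, label_widths, labels, subj_ids = load_validation_data(
    "validation.csv"
)

# label_widths is an iterator, so consume it alongside the other arrays.
for subj_id, image_path, width in zip(subj_ids, images, label_widths):
    print(f"{subj_id}: {image_path} ({width} labeled slices)")
```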
- CorpusCallosum.segmentation.inference.one_hot_to_label(one_hot, label_ids=None)[source]
Convert one-hot encoded segmentation to label map.
Converts a one-hot encoded segmentation array to discrete labels by taking the argmax along the last axis and optionally mapping to specific label values.
- Parameters:
- one_hot
np.ndarray of floats One-hot encoded segmentation array of shape (…, num_classes).
- label_ids
array_like of ints, optional List of label IDs to map classes to. If None, defaults to [0, FORNIX_LABEL, CC_LABEL]. The index in this list corresponds to the class index from argmax.
- Returns:
npt.NDArray[int] Label map with discrete integer labels.
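An illustrative sketch of the argmax-plus-mapping behaviour. The three-voxel array and the [0, 42, 43] label IDs are made-up values; the real default is [0, FORNIX_LABEL, CC_LABEL].

```python
import numpy as np

from CorpusCallosum.segmentation.inference import one_hot_to_label

# Three "voxels" with per-class scores; shape (..., num_classes) = (3, 3).
one_hot = np.array([
    [0.9, 0.05, 0.05],  # class 0 (background) wins
    [0.1, 0.70, 0.20],  # class 1 wins
    [0.1, 0.20, 0.70],  # class 2 wins
])

# argmax along the last axis, then mapped through label_ids (illustrative IDs).
labels = one_hot_to_label(one_hot, label_ids=[0, 42, 43])
print(labels)  # expected: [ 0 42 43]
```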
- CorpusCallosum.segmentation.inference.run_inference(model, image_slice, ac_center, pc_center, voxel_size, device=None, transform=None)[source]
Run inference on a single image slice.
- Parameters:
- model
torch.nn.Module Trained model.
- image_slice
np.ndarray LIA-oriented input image as numpy array of shape (L, I, A).
- ac_center
np.ndarray Anterior commissure coordinates.
- pc_center
np.ndarray Posterior commissure coordinates.
- voxel_size
a pair of floats Voxel size of inferior/superior and anterior/posterior direction in mm.
- device
torch.device, optional Device to run inference on. If None, uses the device of the model.
- transform
transforms.Transform, optional Custom transform pipeline.
- Returns:
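A hedged end-to-end sketch for a single volume. The file name, AC/PC coordinates, and voxel sizes are placeholders, the input is assumed to already be LIA-oriented as required above, and the return value (not documented here) is simply captured as prediction.

```python
import nibabel as nib
import numpy as np
import torch

from CorpusCallosum.segmentation.inference import load_model, run_inference

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = load_model(device=device)

# Placeholder input: a LIA-oriented T1 image and rough AC/PC voxel coordinates.
image_slice = np.asarray(nib.load("subject_T1_LIA.nii.gz").dataobj, dtype=np.float32)
ac_center = np.array([128.0, 110.0, 145.0])
pc_center = np.array([128.0, 112.0, 118.0])
voxel_size = (1.0, 1.0)  # inferior/superior and anterior/posterior spacing in mm

prediction = run_inference(model, image_slice, ac_center, pc_center, voxel_size,
                           device=device)
```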
- CorpusCallosum.segmentation.inference.run_inference_on_slice(model, test_slab, ac_center, pc_center, voxel_size)[source]
Run inference on a single slice.
- Parameters:
- model
torch.nn.Module Trained model for inference.
- test_slab
np.ndarray Input image slice.
- ac_center
npt.NDArray[float] Anterior commissure coordinates (Inferior and Anterior values).
- pc_center
npt.NDArray[float] Posterior commissure coordinates (Inferior and Posterior values).
- voxel_size
a pair of floats Voxel sizes in superior/inferior and anterior/posterior direction in mm.
- Returns:
- results:
np.ndarray Label map after one-hot conversion.
- inputs:
np.ndarray Preprocessed input image.
- outputs_soft:
npt.NDArray[float] Soft-label outputs (non-discrete).
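A hedged sketch of unpacking the three documented outputs. The input slab, AC/PC values, and voxel sizes are placeholders only.

```python
import numpy as np

from CorpusCallosum.segmentation.inference import load_model, run_inference_on_slice

model = load_model()  # CUDA if available, else CPU

# Placeholder 2D slab plus AC/PC values and voxel sizes.
test_slab = np.zeros((256, 256), dtype=np.float32)
ac_center = np.array([110.0, 145.0])
pc_center = np.array([112.0, 118.0])
voxel_size = (1.0, 1.0)  # superior/inferior and anterior/posterior spacing in mm

results, inputs, outputs_soft = run_inference_on_slice(
    model, test_slab, ac_center, pc_center, voxel_size
)
# results: discrete label map, inputs: preprocessed image, outputs_soft: soft labels
```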