CorpusCallosum.segmentation.inference

CorpusCallosum.segmentation.inference.load_model(device=None)[source]

Load trained model from checkpoint.

Parameters:
device : torch.device or None, optional

Device to load the model onto, by default None. If None, uses CUDA if available, otherwise CPU.

Returns:
FastSurferVINN

Loaded and initialized model in evaluation mode.
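The device fallback described above follows the usual PyTorch pattern. A minimal sketch of just that selection logic (`pick_device` is an illustrative helper, not part of the module; loading the checkpoint itself is not shown):

```python
import torch

def pick_device(device=None):
    """Mirror the documented fallback: CUDA if available, else CPU."""
    if device is None:
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    return device

# On a GPU machine this prints "cuda", otherwise "cpu"
print(pick_device().type)
```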

CorpusCallosum.segmentation.inference.load_validation_data(path)[source]

Load validation data from CSV file and compute label widths.

Reads a CSV file containing image paths, label paths, and AC/PC coordinates, then computes the width (number of slices with non-zero labels) for each label file.

Parameters:
path : str or Path

Path to the CSV file containing validation data. The CSV should have columns: image, label, AC_center_x, AC_center_y, AC_center_z, PC_center_x, PC_center_y, PC_center_z.

Returns:
images : npt.NDArray[str]

Array of image file paths.

ac_centers : npt.NDArray[float]

Array of anterior commissure coordinates (x, y, z).

pc_centers : npt.NDArray[float]

Array of posterior commissure coordinates (x, y, z).

label_widths : Iterator[int]

Iterator yielding the number of slices with non-zero labels for each label file.

labels : npt.NDArray[str]

Array of label file paths.

subj_ids : list[str]

List of subject IDs (from CSV index).
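A sketch of the expected CSV layout, written and read back with the standard library. The `subject` index-column name, file paths, and coordinate values are illustrative assumptions; the real loader also computes label widths from the label volumes, which is not reproduced here:

```python
import csv
import io

# Illustrative validation CSV with the documented columns; the first
# (index) column holds the subject ID.
csv_text = """subject,image,label,AC_center_x,AC_center_y,AC_center_z,PC_center_x,PC_center_y,PC_center_z
sub-01,/data/sub-01/orig.mgz,/data/sub-01/cc.mgz,128.0,130.5,127.0,128.0,118.2,126.5
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
images = [r["image"] for r in rows]
subj_ids = [r["subject"] for r in rows]
ac_centers = [
    (float(r["AC_center_x"]), float(r["AC_center_y"]), float(r["AC_center_z"]))
    for r in rows
]
print(subj_ids, images, ac_centers)
```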

CorpusCallosum.segmentation.inference.one_hot_to_label(one_hot, label_ids=None)[source]

Convert one-hot encoded segmentation to label map.

Converts a one-hot encoded segmentation array to discrete labels by taking the argmax along the last axis and optionally mapping to specific label values.

Parameters:
one_hot : np.ndarray of floats

One-hot encoded segmentation array of shape (…, num_classes).

label_ids : array_like of ints, optional

List of label IDs to map classes to. If None, defaults to [0, FORNIX_LABEL, CC_LABEL]. The index in this list corresponds to the class index from argmax.

Returns:
npt.NDArray[int]

Label map with discrete integer labels.
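The described conversion (argmax over the last axis, then index into the label list) can be sketched in plain NumPy. The values 250 and 251 below are placeholders for FORNIX_LABEL and CC_LABEL, not the module's real constants:

```python
import numpy as np

def one_hot_to_label_sketch(one_hot, label_ids=None):
    """Argmax over the class axis, then map class indices to label IDs."""
    if label_ids is None:
        label_ids = [0, 250, 251]  # placeholder for [0, FORNIX_LABEL, CC_LABEL]
    return np.asarray(label_ids)[np.argmax(one_hot, axis=-1)]

one_hot = np.array([[[0.9, 0.05, 0.05],
                     [0.1, 0.70, 0.20],
                     [0.2, 0.10, 0.70]]])
# Classes 0, 1, 2 win per pixel and map to labels 0, 250, 251
print(one_hot_to_label_sketch(one_hot))
```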

CorpusCallosum.segmentation.inference.run_inference(model, image_slice, ac_center, pc_center, voxel_size, device=None, transform=None)[source]

Run inference on a single image slice.

Parameters:
model : torch.nn.Module

Trained model.

image_slice : np.ndarray

LIA-oriented input image as a numpy array of shape (L, I, A).

ac_center : np.ndarray

Anterior commissure coordinates.

pc_center : np.ndarray

Posterior commissure coordinates.

voxel_size : pair of floats

Voxel sizes in the inferior/superior and anterior/posterior directions, in mm.

device : torch.device, optional

Device to run inference on. If None, uses the device of the model.

transform : transforms.Transform, optional

Custom transform pipeline.

Returns:
seg_labels : npt.NDArray[int]

The segmentation result.

inputs : npt.NDArray[float]

The inputs to the model.

soft_labels : npt.NDArray[float]

The soft-label (per-class probability) output.
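The three return values are related: seg_labels is the discrete argmax of soft_labels mapped to label IDs. A toy NumPy check of that invariant, with illustrative shapes and placeholder label values (250 and 251 stand in for the module's constants):

```python
import numpy as np

rng = np.random.default_rng(0)
soft_labels = rng.random((8, 8, 3))
soft_labels /= soft_labels.sum(axis=-1, keepdims=True)  # per-pixel probabilities

label_ids = np.array([0, 250, 251])  # placeholder for [0, FORNIX_LABEL, CC_LABEL]
seg_labels = label_ids[soft_labels.argmax(axis=-1)]

# The discrete map has the slice's spatial shape and only mapped label values
print(seg_labels.shape)
print(np.unique(seg_labels))
```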

CorpusCallosum.segmentation.inference.run_inference_on_slice(model, test_slab, ac_center, pc_center, voxel_size)[source]

Run inference on a single slice.

Parameters:
model : torch.nn.Module

Trained model for inference.

test_slab : np.ndarray

Input image slice.

ac_center : npt.NDArray[float]

Anterior commissure coordinates (Inferior and Anterior values).

pc_center : npt.NDArray[float]

Posterior commissure coordinates (Inferior and Posterior values).

voxel_size : pair of floats

Voxel sizes in the superior/inferior and anterior/posterior directions, in mm.

Returns:
results : np.ndarray

Label map after one-hot-to-label conversion.

inputs : np.ndarray

Preprocessed input image.

outputs_soft : npt.NDArray[float]

Soft-label outputs (non-discrete).