HypVINN.data_loader.dataset

class HypVINN.data_loader.dataset.HypVINNDataset(subject_name, modalities, orig_zoom, cfg, mode='t1t2', transforms=None)[source]

Dataset class to load an MRI image and process it into the correct format for HypVINN network inference.

During inference, the HypVINN dataset provides the input images, the scale factor for the VINN layer, and a weight factor (wT1, wT2). The weight factor determines the running mode of the HypVINN model: if wT1 = 1 and wT2 = 0, the model only allows the flow of T1 information (mode = t1); if wT1 = 0 and wT2 = 1, the model only allows the flow of T2 information (mode = t2); otherwise, the model automatically weighs the T1 and T2 information based on its learned modality weights (mode = t1t2).
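The mapping from running mode to weight factor can be sketched as follows. This is a hypothetical illustration of the rule described above, not the library's actual API; the `weight_factors` helper and the 0.5/0.5 placeholder for the t1t2 mode are assumptions.

```python
def weight_factors(mode: str) -> tuple[float, float]:
    """Hypothetical helper mapping the running mode to (wT1, wT2).

    Illustrative values only; in the real model the t1t2 weights are
    learned, not fixed.
    """
    if mode == "t1":
        return 1.0, 0.0  # only T1 information flows through the network
    if mode == "t2":
        return 0.0, 1.0  # only T2 information flows through the network
    if mode == "t1t2":
        # Neither weight is fixed to 1; the model balances both
        # modalities via learned weights. 0.5/0.5 is a placeholder.
        return 0.5, 0.5
    raise ValueError(f"unknown mode: {mode}")
```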

Methods

_standarized_img(orig_data: np.ndarray, orig_zoom: npt.NDArray[float], modality: np.ndarray) -> np.ndarray

Standardize the image based on the original data, original zoom, and modality.

_get_scale_factor() -> npt.NDArray[float]

Get the scaling factor to match the original resolution of the input image to the final resolution of the FastSurfer base network.
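The scale-factor idea above can be sketched as the ratio of the input voxel size to the network's base resolution. This is a minimal sketch, assuming a 1.0 mm base resolution; the actual computation inside `_get_scale_factor` may differ.

```python
import numpy as np

BASE_RES = 1.0  # assumed resolution (mm) of the FastSurfer base network

def scale_factor(orig_zoom: np.ndarray, base_res: float = BASE_RES) -> np.ndarray:
    """Sketch: scale factor relating the input image's voxel size
    (orig_zoom, in mm) to the network's base resolution."""
    return np.asarray(orig_zoom, dtype=float) / base_res
```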

__getitem__(index: int) -> dict[str, torch.Tensor | np.ndarray]

Retrieve the image, scale factor, and weight factor for a given index.

__len__()

Return the number of images in the dataset.
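The `__getitem__`/`__len__` contract described above can be illustrated with a minimal stand-in. This is not the real `HypVINNDataset` (which handles MRI loading, standardization, and transforms); the class name, constructor arguments, and dictionary keys here are assumptions for illustration only.

```python
import numpy as np

class MiniDataset:
    """Minimal stand-in sketching the sample dictionary returned per
    index: image, scale factor, and weight factor (wT1, wT2)."""

    def __init__(self, images, scale_factor, weights):
        self.images = images            # list of per-slice arrays
        self.scale_factor = scale_factor
        self.weights = weights          # (wT1, wT2)

    def __len__(self) -> int:
        # Number of images (slices) in the dataset.
        return len(self.images)

    def __getitem__(self, index: int) -> dict:
        return {
            "image": self.images[index],
            "scale_factor": np.asarray(self.scale_factor),
            "weight_factor": np.asarray(self.weights),
        }

# Usage sketch: three 8x8 slices at 1 mm isotropic zoom, t1 mode weights.
ds = MiniDataset([np.zeros((8, 8))] * 3, [1.0, 1.0], (1.0, 0.0))
sample = ds[0]
```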