CerebNet.datasets.utils

CerebNet.datasets.utils.bounding_volume_offset(img, target_img_size, image_shape=None)[source]

Find the center of the non-zero values in img and return offsets so that this center lies at the center of a bounding volume of size target_img_size.
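
Example (illustrative only; the array contents, shapes, and handling of the returned offsets are hypothetical):

>>> import numpy as np
>>> seg = np.zeros((64, 64, 64), dtype=np.int16)
>>> seg[20:40, 10:30, 30:50] = 1  # some non-zero region
>>> offsets = bounding_volume_offset(seg, target_img_size=(32, 32, 32))
>>> # offsets place the center of the non-zero region at the center of a 32x32x32 volume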

CerebNet.datasets.utils.crop_transform(image, offsets=None, target_shape=None, out=None, pad=0)[source]

Perform a crop transform of the last N dimensions on the image data. Cropping does not interpolate the image, but “just removes” border pixels/voxels. Negative offsets lead to padding.

Parameters:
image : np.ndarray

Image of size […, D_1, D_2, …, D_N], where D_1, D_2, …, D_N are the N image dimensions.

offsets : Sequence[int]

Offset of the cropped region for the last N dimensions (default: center crop with less crop/pad towards index 0).

target_shape : Sequence[int], optional

If defined, target_shape specifies the target shape of the “cropped region”; otherwise the crop is centered, cropping offsets[dim] voxels on each side (the shape is then derived by subtracting 2x the dimension-specific offset). target_shape should have the same number of elements as offsets. May be implicitly defined by out.

out : np.ndarray, optional

Array to store the cropped image in (optional), can be a view on image for memory-efficiency.

pad : int, str, default=0

Padding strategy to use when padding is required; if an int, pad with that value (default: zero-padding).

Returns:
out : np.ndarray

The image (stack) cropped in the last N dimensions by offsets to the shape target_shape, or, if target_shape is not given, to the shape image.shape[i+2] - 2*offset[i].

Raises:
ValueError

If none of offsets, target_shape, or out is defined.

ValueError

If the shape of out does not match target_shape.

TypeError

If the type of image is not an np.ndarray or a torch.Tensor.

RuntimeError

If the dimensionality of image, out, offsets, or target_shape is invalid or inconsistent.

See also

numpy.pad

For additional information, refer to the numpy.pad function.

Notes

At least one of offsets, target_shape, or out must be defined.
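
Example (illustrative; the output shapes are inferred from the shape rules described above, not verified against the implementation):

>>> import numpy as np
>>> image = np.random.rand(64, 64)
>>> crop_transform(image, target_shape=(32, 32)).shape  # centered 32x32 crop
(32, 32)
>>> crop_transform(image, offsets=(-8, -8)).shape  # negative offsets zero-pad 8 voxels per side
(80, 80)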

CerebNet.datasets.utils.filter_blank_slices_thick(data_dict, threshold=10)[source]

Filter blank slices from the volume using the label volume.

Parameters:
data_dict : dict

Dictionary containing all volumes that need to be filtered.
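
A minimal sketch of the underlying idea, not the actual implementation; the slicing axis and the comparison against threshold are assumptions:

>>> import numpy as np
>>> img = np.random.rand(32, 64, 64)
>>> label = np.zeros((32, 64, 64), dtype=np.int16)
>>> label[10:20, 20:40, 20:40] = 1  # hypothetical labeled slices
>>> data_dict = {"img": img, "label": label}
>>> keep = np.sum(data_dict["label"] != 0, axis=(1, 2)) > 10  # threshold=10
>>> filtered = {k: v[keep] for k, v in data_dict.items()}
>>> filtered["img"].shape
(10, 64, 64)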

CerebNet.datasets.utils.map_label2subseg(mapped_subseg, label_type='cereb_subseg')[source]

Perform look-up table mapping from label space to subseg space.
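
The mapping is plain look-up-table indexing; a minimal illustration with a hypothetical table (the actual CerebNet table is not reproduced here):

>>> import numpy as np
>>> lut = np.array([0, 7, 8, 620])  # hypothetical: consecutive label IDs -> subseg IDs
>>> mapped_subseg = np.array([[0, 1], [2, 3]])
>>> lut[mapped_subseg]
array([[  0,   7],
       [  8, 620]])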

CerebNet.datasets.utils.map_size(arr, base_shape, return_border=False)[source]

Resize the image to base_shape.
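
Example (illustrative; the input shape is hypothetical and the output shape follows from the description):

>>> import numpy as np
>>> arr = np.random.rand(200, 220, 180)
>>> map_size(arr, base_shape=(256, 256, 256)).shape
(256, 256, 256)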