CerebNet.datasets.utils¶
- class CerebNet.datasets.utils.LTADict[source]¶
Methods

clear() : Remove all items from the dictionary.
copy() : Return a shallow copy of the dictionary.
fromkeys(iterable[, value]) : Create a new dictionary with keys from iterable and values set to value.
get(key[, default]) : Return the value for key if key is in the dictionary, else default.
items() : Return a view of the dictionary's items.
keys() : Return a view of the dictionary's keys.
pop(key[, default]) : Remove key and return its value; if the key is not found, return the default if given, otherwise raise a KeyError.
popitem(/) : Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default]) : Insert key with a value of default if key is not in the dictionary; return the value for key.
update([E, ]**F) : Update the dictionary: if E is present and has a .keys() method, does for k in E: D[k] = E[k]; if E is present and lacks a .keys() method, does for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].
values() : Return a view of the dictionary's values.
- CerebNet.datasets.utils.bounding_volume_offset(img, target_img_size, image_shape=None)[source]¶
Find the center of the non-zero values in img and return offsets so this center lies at the center of a bounding volume of size target_img_size.
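The described behavior can be sketched as follows; this is a minimal illustration of "center of the non-zero region minus half the target size", not the actual FastSurfer implementation (the function name suffix and the exact center definition are assumptions):

```python
import numpy as np

def bounding_volume_offset_sketch(img, target_img_size):
    """Sketch: offset so the non-zero region's center sits at the center
    of a bounding volume of size target_img_size."""
    nz = np.nonzero(img)
    # Midpoint of the non-zero extent along each axis.
    center = [int((idx.min() + idx.max()) // 2) for idx in nz]
    # Shift each axis by half the target size to center the volume.
    return tuple(c - t // 2 for c, t in zip(center, target_img_size))
```

For a 10x10 image with a non-zero 2x2 patch at rows/columns 4..5 and a target size of (4, 4), the sketch returns (2, 2): a 4x4 volume starting at (2, 2) is centered on the patch.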
- CerebNet.datasets.utils.crop_transform(image, offsets=None, target_shape=None, out=None, pad=0)[source]¶
Perform a crop transform of the last N dimensions on the image data. Cropping does not interpolate the image, but “just removes” border pixels/voxels. Negative offsets lead to padding.
- Parameters:
  - image : np.ndarray, torch.Tensor
    Image of size [..., D_1, D_2, ..., D_N], where D_1, D_2, ..., D_N are the N image dimensions.
  - offsets : Sequence[int], optional
    Offset of the cropped region for the last N dimensions (default: center crop with less crop/pad towards index 0).
  - target_shape : Sequence[int], optional
    If defined, target_shape specifies the target shape of the "cropped region"; otherwise the crop is centered, cropping offsets[dim] voxels on each side (the shape is then derived by subtracting 2x the dimension-specific offset). target_shape should have the same number of elements as offsets. May be implicitly defined by out.
  - out : np.ndarray, torch.Tensor, optional
    Array to store the cropped image in; can be a view on image for memory efficiency.
  - pad : int, str, default=0 (zero-pad)
    Padding strategy to use when padding is required; if int, pad with that value.
- Returns:
  - out : np.ndarray, torch.Tensor
    The image (stack) cropped in the last N dimensions by offsets to the shape target_shape, or, if target_shape is not given, to image.shape[i+2] - 2*offsets[i].
- Raises:
  - ValueError
    If neither offsets nor target_shape nor out are defined.
  - ValueError
    If the shape of out does not match target_shape.
  - TypeError
    If the type of image is not an np.ndarray or a torch.Tensor.
  - RuntimeError
    If the dimensionality of image, out, offsets, or target_shape is invalid or inconsistent.
See also
numpy.pad
For additional information refer to numpy.pad function.
Notes
Either offsets, target_shape or out must be defined.
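The cropping part of this behavior can be sketched with plain NumPy slicing; this is a simplified illustration for positive offsets with an explicit target_shape only (no padding, no torch support, no out argument), and is not the actual implementation:

```python
import numpy as np

def crop_transform_sketch(image, offsets, target_shape):
    """Simplified sketch: crop the last N dimensions of image, starting at
    offsets, to target_shape. Positive offsets only, so no padding occurs."""
    n = len(offsets)
    # Leave any leading (batch/channel) dimensions untouched.
    slices = [slice(None)] * (image.ndim - n)
    # Crop each of the last N dimensions: [offset, offset + target).
    slices += [slice(o, o + t) for o, t in zip(offsets, target_shape)]
    return image[tuple(slices)]
```

For example, cropping a 4x4 array with offsets (1, 1) and target_shape (2, 2) returns the central 2x2 block; no interpolation happens, border voxels are simply removed.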
- CerebNet.datasets.utils.filter_blank_slices_thick(data_dict, threshold=10)[source]¶
Filter blank slices from the volumes using the label volume. data_dict is a dictionary containing all volumes to be filtered.
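A minimal sketch of such filtering is shown below. It assumes slices are indexed along the first axis, that "blank" means fewer than threshold non-zero label voxels, and that the label volume is stored under a "label" key; all of these are assumptions, not the actual FastSurfer criterion:

```python
import numpy as np

def filter_blank_slices_sketch(data_dict, threshold=10, label_key="label"):
    """Sketch: keep only slices (first axis) whose label slice contains at
    least `threshold` non-zero voxels; apply the same mask to all volumes."""
    label = data_dict[label_key]
    # Hypothetical blank-slice criterion: count labeled voxels per slice.
    keep = np.count_nonzero(label, axis=(1, 2)) >= threshold
    return {k: v[keep] for k, v in data_dict.items()}
```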
- CerebNet.datasets.utils.map_label2subseg(mapped_subseg, label_type='cereb_subseg')[source]¶
Perform a look-up table mapping from label space to subsegmentation (subseg) space.
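A look-up table mapping of this kind can be sketched with NumPy fancy indexing; the table values below are placeholders, not the actual cerebellum subsegmentation LUT:

```python
import numpy as np

def map_label2subseg_sketch(mapped_labels, lut):
    """Sketch: map consecutive label ids (0..K-1) to subseg ids via a LUT.
    Fancy indexing applies the table to every voxel at once."""
    lut = np.asarray(lut)
    return lut[mapped_labels]
```

For a placeholder table [0, 601, 602], a label array [[0, 1], [2, 1]] maps to [[0, 601], [602, 601]].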