FastSurferCNN.data_loader.conform

class FastSurferCNN.data_loader.conform.Reorientation(source_affine, source_shape, tol=1e-06)[source]

A class to organize data reorientation to canonical orientations.

Attributes

source_affine

Returns a readonly view of the source affine matrix.

vox2vox

Returns a readonly view of the target2source vox2vox transformation matrix.

source_shape

(Shape1d) The shape of the input image.

target_shape

(Shape1d) The shape of the output image.

tol

(float) The tolerance used to determine whether the transform is the identity or a pure reordering.

Methods

__call__(image_data[, order, vox_eps, rot_eps])

Reorder and flip image_data according to the source_affine and vox2vox attributes.

from_target_affine(source_affine, ...[, ...])

Determine the affine matrix to reorder and flip/interpolate data from source_affine to target_affine.

from_target_orientation(source_affine, ...)

Determine the affine matrix to reorder and flip/interpolate data from source_affine to orientation.

from_vox2vox(source_affine, vox2vox, shape)

Determine the affine matrix to reorder and flip/interpolate data according to the given vox2vox.

is_identity()

Whether the internal vox2vox is the identity.

reorder_axes(vector)

Reorder a vector according to the vox2vox of this Reorientation.

snap_translation_to_grid_()

Modifies the translation to snap to the grid, if no rotation or scaling is present.

classmethod from_target_affine(source_affine, target_affine, shape, target_shape=None, tol=1e-06)[source]

Determine the affine matrix to reorder and flip/interpolate data from source_affine to target_affine.

The resulting transform is a vox2vox from source to target.

Parameters:
source_affine : AffineMatrix4x4

The input image affine to detect the reorientation operations.

target_affine : AffineMatrix4x4, AffineMatrix3x3

The target affine to reorient to.

shape : array_like of shape (3,)

The source shape of the data to reorder. If a wrong shape is passed, the vox2vox offset will be corrupted.

target_shape : array_like of shape (3,), optional

The target shape in native coordinates, defaults to shape.

tol : float, default=1e-6

Tolerance to identify reordering.

Returns:
Reorientation

An object holding the source_affine and the vox2vox transform to reorient data from source_affine to target_affine.

classmethod from_target_orientation(source_affine, target_orientation, shape, target_vox_size=None, target_shape=None, tol=1e-06)[source]

Determine the affine matrix to reorder and flip/interpolate data from source_affine to orientation.

The resulting transform is a vox2vox from source to target.

Parameters:
source_affine : AffineMatrix4x4

The input image affine to detect the reorientation operations.

target_orientation : OrientationType

The target orientation to reorient to.

shape : array_like of shape (3,)

The source shape of the data to reorder. If a wrong shape is passed, the vox2vox offset will be corrupted.

target_vox_size : array_like of shape (3,), optional

The target voxel size in native coordinates, defaults to the voxel size of source_affine.

target_shape : array_like of shape (3,), optional

The target shape in native coordinates, defaults to shape.

tol : float, default=1e-6

Tolerance to identify reordering.

Returns:
Reorientation

An object holding the source_affine and the vox2vox transform to reorient data from source_affine to target_orientation.

classmethod from_vox2vox(source_affine, vox2vox, shape, target_shape=None, tol=1e-06)[source]

Determine the affine matrix to reorder and flip/interpolate data from source_affine according to the given vox2vox.

The resulting transform is a vox2vox from source to target.

Parameters:
source_affine : AffineMatrix4x4

The input image affine to detect the reorientation operations.

vox2vox : AffineMatrix4x4, AffineMatrix3x3

The out2in vox2vox matrix to use. For a 3x3 matrix, the translation is computed by assuming a rotation around the center (this is consistent with the vox2vox convention of scipy.ndimage.affine_transform, apply_image and nibabel.orientations.aff2axcodes).

shape : array_like of shape (3,)

The source shape of the data to reorder. If a wrong shape is passed, the vox2vox offset will be corrupted.

target_shape : array_like of shape (3,), optional

The target shape in native coordinates, defaults to shape.

tol : float, default=1e-6

Tolerance to identify reordering.

Returns:
Reorientation

An object holding the source_affine and the vox2vox transform to reorient data accordingly.

See also

FastSurferCNN.data_loader.conform.apply_vox2vox

Apply a vox2vox matrix to a 3D image.

scipy.ndimage.affine_transform

Apply an affine transform to data.

nibabel.orientations.aff2axcodes

Generate Orientation Codes from an affine matrix.

is_identity()[source]

Whether the internal vox2vox is the identity.

reorder_axes(vector)[source]

Reorder a vector according to the vox2vox of this Reorientation.

Parameters:
vector : np.ndarray of shape (3,)

The vector to reorder.

Returns:
ndarray of shape (3,)

Reordered vector.

snap_translation_to_grid_()[source]

Modifies the translation to snap to the grid, if no rotation or scaling is present.

property inverse[source]

A Reorientation object that can be used to reverse the reorientation of this object.

property source_affine[source]

Returns a readonly view of the source affine matrix.

property target_affine[source]

The target affine after reorientation.

property vox2vox[source]

Returns a readonly view of the target2source vox2vox transformation matrix.

FastSurferCNN.data_loader.conform.apply_orientation(arr, ornt)[source]

Apply transformations implied by ornt to the first n axes of the array arr.

Parameters:
arr : array_like or torch.Tensor of data with ndim >= n

The image/data to reorient.

ornt : (n, 2) orientation array

Orientation transform. ornt[N, 1] is the flip of axis N of the array, where 1 means no flip and -1 means flip. For example, if N == 0 and ornt[0, 1] == -1, the flip corresponds to the effect of np.flipud(arr). ornt[:, 0] is the transpose to apply to the array, as in arr.transpose(ornt[:, 0]).

Returns:
t_arr : ndarray or Tensor

The data array arr transformed according to ornt.

See also

nibabel.orientations.apply_orientation

This function is an extension to nibabel.orientations.apply_orientation.
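
As a rough illustration of the transpose/flip semantics described above, here is a plain-NumPy sketch mirroring the behaviour of nibabel.orientations.apply_orientation (without the torch.Tensor support of the extension):

```python
import numpy as np

def apply_orientation_sketch(arr, ornt):
    """Flip every axis where ornt[:, 1] == -1, then transpose the first
    n axes by the inverse of ornt[:, 0] (as nibabel does)."""
    arr = np.asarray(arr)
    ornt = np.asarray(ornt, dtype=int)
    n = ornt.shape[0]
    for ax, flip in enumerate(ornt[:, 1]):
        if flip == -1:
            arr = np.flip(arr, axis=ax)
    full_transpose = np.arange(arr.ndim)
    full_transpose[:n] = np.argsort(ornt[:, 0])
    return arr.transpose(full_transpose)
```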

FastSurferCNN.data_loader.conform.apply_vox2vox(image_data, vox2vox, out_shape, order=1, vox_eps=0.0001, rot_eps=1e-06)[source]

Map image data to a new voxel space using an out2in vox2vox matrix.

Parameters:
image_data : np.ndarray

The 3D image data.

vox2vox : np.ndarray

The out2in vox2vox matrix to apply (note the direction, for consistency with scipy.ndimage.affine_transform).

out_shape : tuple[int, ...], np.ndarray

The target shape information.

order : int, default=1

Order of interpolation (0=nearest, 1=linear, 2=quadratic, 3=cubic).

vox_eps : float, default=1e-4

The epsilon for the voxel size check.

rot_eps : float, default=1e-6

The epsilon for the affine rotation check.

Returns:
np.ndarray

Mapped image data array.
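
The out2in convention can be illustrated with a minimal nearest-neighbour resampler in plain NumPy (a simplified stand-in; the real function interpolates at the requested order and special-cases pure reorderings):

```python
import numpy as np

def apply_vox2vox_nn(image_data, vox2vox, out_shape):
    """For each output voxel o, sample the input at i = vox2vox @ [o, 1]
    (out2in, like scipy.ndimage.affine_transform); out-of-bounds -> 0."""
    vox2vox = np.asarray(vox2vox, dtype=float)
    grid = np.indices(out_shape).reshape(3, -1)            # output voxel coords
    homo = np.vstack([grid, np.ones((1, grid.shape[1]))])  # homogeneous coords
    src = np.rint(vox2vox @ homo)[:3].astype(int)          # map out -> in
    valid = np.all((src >= 0) & (src < np.array(image_data.shape)[:, None]), axis=0)
    flat = np.zeros(grid.shape[1], dtype=image_data.dtype)
    flat[valid] = image_data[tuple(src[:, valid])]
    return flat.reshape(out_shape)
```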

FastSurferCNN.data_loader.conform.check_affine_in_nifti(img, logger=None)[source]

Check the affine in nifti Image.

Sets affine with qform, if it exists and differs from sform. If qform does not exist, voxel sizes between header information and information in affine are compared. In case these do not match, the function returns False (otherwise True).

Parameters:
img : nib.Nifti1Image, nib.Nifti2Image

Loaded nifti-image.

logger : logging.Logger, optional

Logger object, or None (default) to print an info message to stdout instead.

Returns:
bool

False, if voxel sizes in affine and header differ.
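
The voxel-size comparison underlying this check relies on the fact that voxel sizes can be recovered from the affine itself. A small sketch of that relation (the column norms of the 3x3 part, which could then be compared against the header zooms):

```python
import numpy as np

def vox_sizes_from_affine(affine):
    """Voxel sizes are the Euclidean norms of the first three columns
    of the 4x4 vox2ras affine."""
    return np.linalg.norm(np.asarray(affine, dtype=float)[:3, :3], axis=0)
```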

FastSurferCNN.data_loader.conform.conform(img, order=1, vox_size=1.0, img_size=256, dtype=<class 'numpy.uint8'>, orientation='lia', threshold_1mm=None, rescale=255, vox_eps=0.0001, rot_eps=1e-06, file_type=None, **kwargs)[source]

Python version of mri_convert -c.

mri_convert -c by default turns image intensity values into UCHAR, reslices images to standard position, fills up slices to standard 256x256x256 format and enforces 1mm or minimum isotropic voxel sizes.

Parameters:
img : nib.spatialimages.SpatialImage

Loaded source image.

order : int, default=1

Interpolation order (0=nearest, 1=linear, 2=quadratic, 3=cubic).

vox_size : float, "min", None, default=1.0

Conform the image to this voxel size, a specific smaller voxel size (0-1, for high-res), or automatically determine the 'minimum voxel size' from the image (value 'min'). This uses the smallest of the three voxel sizes. None disables this criterion.

img_size : int, "fov", "auto", None, default=256

Conform the image to this image size, e.g. a specific smaller size (for example for high-res), or automatically determine the image size from the field of view ('fov' or 'auto'; the former may yield non-cubic images). None disables this criterion.

dtype : type, None, default=np.uint8

The dtype to enforce in the image (default: UCHAR, as mri_convert -c). None disregards this criterion.

orientation : "soft-<orientationcode>", "<orientationcode>", "native", None, default="lia"

Which orientation of the data/affine to force; <orientationcode> is [rlapsi]{3}, i.e. any of lia, ras, etc. None disables this criterion.

threshold_1mm : float, optional

The threshold above which the image is conformed to 1mm. Ignored, if None (default).

rescale : int, float, None, default=255

The upper limit to which intensity values are rescaled, or None to disable rescaling.

vox_eps : float, default=1e-4

The epsilon for the voxel size check.

rot_eps : float, default=1e-6

The epsilon for the affine rotation check.

file_type : class, optional

The class to use for the image object. If None, will use the class of img.

Returns:
nibabel.spatialimages.SpatialImage

Conformed image.

Other Parameters:
conform_vox_size : float, optional

Legacy parameter for vox_size, overwrites vox_size.

conform_to_1mm_threshold : float, optional

Legacy parameter for threshold_1mm, overwrites threshold_1mm.

Notes

Unlike mri_convert -c, we first interpolate (float image) and then rescale to uchar; mri_convert does it the other way around. However, we compute the scale factor from the input to increase similarity.

FastSurferCNN.data_loader.conform.conformed_vox_img_size(img, vox_size, img_size, threshold_1mm=None, vox_eps=0.0001, **kwargs)[source]

Extract the voxel size and the image size.

This function only needs the header (not the data).

Parameters:
img : nib.spatialimages.SpatialImage

Loaded source image.

vox_size : float, "min", None

The voxel size parameter to use: either a voxel size as float, or the string "min" to automatically find a suitable voxel size (smallest per-dimension voxel size). None disregards the criterion (output also None).

img_size : int, "fov", "auto", None

The image size parameter: either an image size as int, the string "fov" to automatically derive a suitable image size (field of view), or "auto" like "fov" but using the largest size in every direction. None disregards the criterion if vox_size is also None, else behaves like "auto".

threshold_1mm : float, optional

The threshold above which the image voxel size is conformed to 1mm instead of the smallest voxel size (default or None: do not apply the threshold).

vox_eps : float, default=1e-4

The threshold to compare voxel sizes (differences below this are ignored).

Returns:
np.ndarray of floats, None

The determined voxel size to conform the image to (still in native orientation), shape: 3.

np.ndarray of ints, None

The size of the image adjusted to the conformed voxel size (still in native orientation), shape: 3.
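
The interplay of vox_size="min", threshold_1mm, and the fov-derived image size can be sketched as follows (a simplified scalar version; the helper name and the exact rounding are assumptions, and the real function works per-dimension on the header):

```python
import numpy as np

def conformed_vox_img_size_sketch(zooms, shape, vox_size="min", threshold_1mm=None):
    """Pick a conformed voxel size and a cubic image size from header info:
    'min' selects the smallest per-dimension voxel size; threshold_1mm snaps
    back to 1.0 when that minimum lies above the threshold."""
    zooms = np.asarray(zooms, dtype=float)
    if vox_size == "min":
        conformed_vox = float(zooms.min())
        if threshold_1mm is not None and conformed_vox > threshold_1mm:
            conformed_vox = 1.0
    else:
        conformed_vox = float(vox_size)
    fov = zooms * np.asarray(shape)                   # field of view in mm
    img_size = int(round(fov.max() / conformed_vox))  # cube edge in voxels
    return conformed_vox, img_size
```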

FastSurferCNN.data_loader.conform.crop_transform(image, offsets=None, target_shape=None, out=None, pad=0)[source]

Perform a crop transform of the last N dimensions on the image data.

Cropping does not interpolate the image, but “just removes” border pixels/voxels. Negative offsets lead to padding.

Parameters:
image : np.ndarray, torch.Tensor

Image of size [..., D_1, D_2, ..., D_N], where D_1, D_2, ..., D_N are the N image dimensions.

offsets : Sequence[int], optional

Offset of the cropped region for the last N dimensions (default: center crop, with less crop/pad towards index 0). Negative offsets pad.

target_shape : Sequence[int], optional

If defined, target_shape specifies the target shape of the "cropped region"; else the crop will be a centered crop of offsets[dim] voxels on each side (the shape is then derived by subtracting 2x the dimension-specific offset). target_shape should have the same number of elements as offsets. May be implicitly defined by out.

out : np.ndarray, torch.Tensor, optional

Array to store the cropped image in (optional); can be a view on image for memory efficiency.

pad : int, str, default=0 (zero-pad)

Padding strategy to use when padding is required; if int, pad with that value.

Returns:
out : np.ndarray, torch.Tensor

The image (stack) cropped in the last N dimensions by offsets to the shape target_shape, or if target_shape is not given image.shape[i+2] - 2*offset[i].

Raises:
ValueError

If neither offsets nor target_shape nor out are defined.

ValueError

If the shape of out does not match target_shape.

TypeError

If the type of image is not an np.ndarray or a torch.Tensor.

RuntimeError

If the dimensionality of image, out, offset or target_shape is invalid or inconsistent.

See also

numpy.pad

For additional information refer to numpy.pad function.

Notes

Either offsets, target_shape or out must be defined.
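
A minimal sketch of the center-crop case described above (no padding, any leading channel axes preserved; a simplified stand-in for the full function):

```python
import numpy as np

def center_crop(image, target_shape):
    """Center-crop the last N dimensions to target_shape; on odd size
    differences, the smaller crop falls towards index 0."""
    n = len(target_shape)
    slices = [slice(None)] * (image.ndim - n)   # keep leading axes intact
    for cur, tgt in zip(image.shape[-n:], target_shape):
        off = (cur - tgt) // 2                  # floor -> less crop at the start
        slices.append(slice(off, off + tgt))
    return image[tuple(slices)]
```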

FastSurferCNN.data_loader.conform.does_vox2vox_rot_require_interpolation(vox2vox, vox_eps=0.0001, rot_eps=1e-06)[source]

Check whether the affine requires resampling/interpolation or whether reordering is sufficient.

Parameters:
vox2vox : AffineMatrix4x4, AffineMatrix3x3

The affine matrix (the direction does not matter for this check).

vox_eps : float, default=1e-4

The epsilon for the voxel size check.

rot_eps : float, default=1e-6

The epsilon for the affine rotation check.

Returns:
bool

Whether the vox2vox matrix requires resampling. Integer-value downsampling (e.g. solvable by strides) by definition also requires interpolation.
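
The check essentially asks whether the 3x3 part of the matrix is a signed permutation. A hedged sketch of that test (simplified; the real function also distinguishes the voxel-size and rotation epsilons in more detail):

```python
import numpy as np

def requires_interpolation(mat3x3, vox_eps=1e-4, rot_eps=1e-6):
    """Interpolation is unnecessary only if the matrix is a signed
    permutation: every entry near 0 or +/-1, one +/-1 per row and column."""
    m = np.abs(np.asarray(mat3x3, dtype=float))
    near_one = np.abs(m - 1) < vox_eps
    near_zero = m < rot_eps
    if not np.all(near_one | near_zero):
        return True  # scaling or rotation by a non-right angle
    perm_ok = np.all(near_one.sum(axis=0) == 1) and np.all(near_one.sum(axis=1) == 1)
    return not perm_ok
```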

FastSurferCNN.data_loader.conform.getscale(data, dst_min, dst_max, f_low=0.0, f_high=0.999)[source]

Get offset and scale of image intensities to robustly rescale to dst_min..dst_max.

Equivalent to how mri_convert conforms images.

Parameters:
data : np.ndarray

Image data (intensity values).

dst_min : float, int

Future minimal intensity value.

dst_max : float, int

Future maximal intensity value.

f_low : float, default=0.0

Robust cropping at the low end (0.0 = no cropping).

f_high : float, default=0.999

Robust cropping at the high end (0.999 = crop one thousandth of the highest intensities).

Returns:
src_min : float

(Adjusted) offset.

scale : float

Scale factor.
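
The robust scaling can be approximated with quantiles (a simplified stand-in; mri_convert and getscale use a histogram-based search, so values may differ slightly):

```python
import numpy as np

def getscale_sketch(data, dst_min, dst_max, f_low=0.0, f_high=0.999):
    """Map the [f_low, f_high] intensity quantiles onto [dst_min, dst_max]."""
    src_min = float(np.quantile(data, f_low))
    src_max = float(np.quantile(data, f_high))
    if src_max <= src_min:  # flat image: avoid division by zero
        return src_min, 1.0
    return src_min, (dst_max - dst_min) / (src_max - src_min)
```

The returned pair plugs into scalecrop(data, dst_min, dst_max, src_min, scale).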

FastSurferCNN.data_loader.conform.is_conform(img, vox_size=1.0, img_size=256, dtype=<class 'numpy.uint8'>, orientation='lia', verbose=True, vox_eps=0.0001, eps=1e-06, threshold_1mm=0.0, **kwargs)[source]

Check if an image is already conformed or not.

Defaults: Dimensions: 256x256x256, Voxel size: 1x1x1, LIA orientation, and data type UCHAR.

Parameters:
img : nib.analyze.SpatialImage

Loaded source image.

vox_size : float, "min", None, default=1.0

Which voxel size to conform to. Can either be a float between 0.0 and 1.0, "min" (to check whether the image is conformed to the minimal voxel size, i.e. conforming to smaller, but isotropic voxel sizes for high-res), or None to disable the criterion.

img_size : int, "fov", "auto", None, default=256

Conform the image to this image size, a specific smaller size (for high-res), or automatically determine the target size: "fov": derive from the fov per dimension; "auto": get the largest "fov" and use it for all 3 dimensions.

dtype : Type, None, default=numpy.uint8

Specifies the intended target dtype; if None, the dtype check is disabled.

orientation : "soft-XXX", "XXX", "native", None, default="lia"

Whether to force the conforming to a specific orientation specified by XXX, e.g. LIA.

verbose : bool, default=True

If True, details of which conformance conditions are violated (if any) are displayed.

vox_eps : float, default=1e-4

Allowed deviation from zero for the voxel size check.

eps : float, default=1e-6

Allowed deviation from zero for the orientation check. Small inaccuracies can occur through the inversion operation. Already conformed images are thus sometimes not correctly recognized. The epsilon accounts for these small shifts.

threshold_1mm : float, optional

Above this threshold the image is conformed to 1mm (default: None = ignore).

Returns:
bool

Whether the image is already conformed.

Notes

This function only needs the header (not the data).

FastSurferCNN.data_loader.conform.is_orientation(affine, target_orientation='lia', soft=False, eps=1e-06)[source]

Check whether the affine matches the target orientation.

Parameters:
affine : AffineMatrix4x4

The affine to check.

target_orientation : OrientationType, default="lia"

The target orientation to check the affine for.

soft : bool, default=False

Whether the orientation is only required to be similar (soft, i.e. roughly oriented as target_orientation) rather than "exactly" matching (strict).

eps : float, default=1e-6

The threshold in strict mode.

Returns:
bool

Whether the affine matches target_orientation.
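
A sketch of how an orientation code can be read off an affine (a simplified version of nibabel.orientations.aff2axcodes that just takes the dominant world axis per column; ties and strongly oblique affines need the more careful nibabel treatment):

```python
import numpy as np

def axcodes_from_affine(affine):
    """For each voxel axis (column of the affine), find the world axis it
    most aligns with and whether it points in the positive (R/A/S) or
    negative (L/P/I) world direction."""
    rot = np.asarray(affine, dtype=float)[:3, :3]
    pos, neg = "RAS", "LPI"
    codes = []
    for col in rot.T:                        # one column per voxel axis
        world = int(np.argmax(np.abs(col)))  # dominant world axis
        codes.append(pos[world] if col[world] > 0 else neg[world])
    return "".join(codes)
```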

FastSurferCNN.data_loader.conform.make_parser()[source]

Create an Argument parser for the conform script.

Returns:
argparse.ArgumentParser

The parser object.

FastSurferCNN.data_loader.conform.map_image(img, out_affine, out_shape, ras2ras=None, order=1, dtype=None, vox_eps=0.0001, rot_eps=1e-06)[source]

Map image to new voxel space (RAS orientation).

Parameters:
img : nibabel.spatialimages.SpatialImage

The source 3D image with data and affine set.

out_affine : AffineMatrix4x4

The target image affine.

out_shape : tuple[int, ...], np.ndarray of int

The target shape information.

ras2ras : AffineMatrix4x4, optional

An additional mapping that should be applied (default = identity, to just reslice).

order : int, default=1

Order of interpolation (0=nearest, 1=linear, 2=quadratic, 3=cubic).

dtype : Type, None, default=None

Target dtype of the resulting image (especially relevant for reorientation; None = keep dtype of img).

vox_eps : float, default=1e-4

The epsilon for the voxel size check.

rot_eps : float, default=1e-6

The epsilon for the affine rotation check.

Returns:
np.ndarray

Mapped image data array.

FastSurferCNN.data_loader.conform.options_parse()[source]

Command line option parser.

Returns:
options

Object holding options.

FastSurferCNN.data_loader.conform.ornt2vox2vox(ornt, shape, scale=None)[source]

Calculate the mid-centered vox2vox matrix of the orientation transform ornt (operation, not target orientation).

Parameters:
ornt : array_like

The orientation to transform by. Importantly, if nibabel calls this axcode LIA, this is a LIA->RAS transform.

shape : array_like

The shape of the (input) data.

scale : array_like, optional

The scaling factor of the (input) data, defaults to 1. If scale is not one, the assumed target shape will be shape scaled by scale as computed by target_shape_from_shape_scale (so out_vox_size / in_vox_size).

Returns:
AffineMatrix4x4

The transformation affine, a homogeneous affine if shape is passed. Importantly, the convention is that the matrix is out2in: nib.orientations.aff2axcodes(vox2vox) yields the ornt that was passed in, and the transformation can be applied by apply_vox2vox(image_data, vox2vox, out_shape=target_shape_from_shape_scale(shape, scale)) or scipy.ndimage.affine_transform(...).

See also

target_shape_from_shape_scale

Generate the target shape from input scale and scale factor.

apply_vox2vox

Apply a vox2vox matrix to a 3D image.
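
A sketch of the mid-centered construction for the pure reorder/flip case (no scaling; the ornt convention assumed here, output axis i reading input axis ornt[i, 0] with sign ornt[i, 1], is an assumption for illustration):

```python
import numpy as np

def ornt2vox2vox_sketch(ornt, shape):
    """Build an out2in vox2vox for a reorder/flip: a signed permutation
    for the linear part, with the translation chosen so that the volume
    centers of input and output map onto each other."""
    ornt = np.asarray(ornt, dtype=int)
    shape = np.asarray(shape, dtype=float)
    lin = np.zeros((3, 3))
    for out_ax, (in_ax, flip) in enumerate(ornt):
        lin[in_ax, out_ax] = flip            # in = lin @ out
    out_shape = shape[ornt[:, 0]]            # output shape is permuted input shape
    trans = (shape - 1) / 2 - lin @ ((out_shape - 1) / 2)  # center onto center
    v2v = np.eye(4)
    v2v[:3, :3] = lin
    v2v[:3, 3] = trans
    return v2v
```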

FastSurferCNN.data_loader.conform.rescale(data, dst_min, dst_max, f_low=0.0, f_high=0.999)[source]

Rescale image intensity values (0-255).

Parameters:
data : np.ndarray

Image data (intensity values).

dst_min : float

Future minimal intensity value.

dst_max : float

Future maximal intensity value.

f_low : float, default=0.0

Robust cropping at the low end (0.0 = no cropping).

f_high : float, default=0.999

Robust cropping at the high end (0.999 = crop one thousandth of the highest intensities).

Returns:
np.ndarray

Scaled image data.

FastSurferCNN.data_loader.conform.scalecrop(data, dst_min, dst_max, src_min, scale)[source]

Crop the intensity ranges to specific min and max values.

Parameters:
data : np.ndarray

Image data (intensity values).

dst_min : float

Future minimal intensity value.

dst_max : float

Future maximal intensity value.

src_min : float

Minimal value to consider from the source (crops below).

scale : float

Scale factor by which the source values are multiplied.

Returns:
np.ndarray

Scaled image data.
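
The crop-and-scale step amounts to a shift, a multiplication, and a clamp; as a short NumPy sketch:

```python
import numpy as np

def scalecrop_sketch(data, dst_min, dst_max, src_min, scale):
    """Shift by src_min, scale, then clamp into [dst_min, dst_max]."""
    return np.clip(dst_min + scale * (np.asarray(data, dtype=float) - src_min),
                   dst_min, dst_max)
```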

FastSurferCNN.data_loader.conform.target_shape_from_shape_scale(shape, scale)[source]

Calculate a target shape that would enclose the input shape after rescaling by scale.

Parameters:
shape : array_like

The shape of the input data.

scale : array_like

The scale factors of the input data (out_vox_size / in_vox_size).

Returns:
np.ndarray of int

The shape resized by the scale and rounded.
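
With scale = out_vox_size / in_vox_size, the rescaled data spans shape / scale output voxels; a sketch of the computation (assuming simple rounding to the nearest integer, per the description above):

```python
import numpy as np

def target_shape_sketch(shape, scale):
    """Rescale shape by 1/scale per dimension and round to integers."""
    ratio = np.asarray(shape, dtype=float) / np.asarray(scale, dtype=float)
    return tuple(int(s) for s in np.rint(ratio))
```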

FastSurferCNN.data_loader.conform.to_dtype(dtype)[source]

Convert dtype to a numpy-compatible dtype.

Parameters:
dtype : str, np.dtype

Use this to determine the dtype.

Returns:
numpy.typing.DTypeLike

The dtype extracted.