Inpainting Module
- neurolit.inpaint_image.resolve_inference_device(device: str) → device [source]
Resolve the requested inference device.
- Parameters:
device (str) – Requested device name: "auto", "cpu", or "cuda".
- Returns:
Resolved torch device.
- Return type:
torch.device
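The docstring implies that "auto" resolves by availability. A minimal sketch of that resolution logic, written torch-free so it runs anywhere (`resolve_device_name` and the `cuda_available` flag are hypothetical stand-ins; the real function returns a torch.device):

```python
def resolve_device_name(device: str, cuda_available: bool) -> str:
    """Resolve a requested device name (hypothetical stand-in for the real helper)."""
    if device == "auto":
        # "auto" picks CUDA when it is available, otherwise falls back to CPU
        return "cuda" if cuda_available else "cpu"
    if device not in ("cpu", "cuda"):
        raise ValueError(f"unknown device: {device!r}")
    return device
```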
- neurolit.inpaint_image.dilate_mask(mask: Tensor, num_iterations: int, kernel_size: int = 3) → Tensor [source]
Dilate a binary mask using repeated max pooling.
- Parameters:
mask (torch.Tensor) – Binary mask tensor to dilate.
num_iterations (int) – Number of dilation steps to apply.
kernel_size (int, optional) – Size of the dilation kernel (must be odd), by default 3.
- Returns:
Dilated mask tensor of the same shape as the input.
- Return type:
torch.Tensor
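Dilation by max pooling can be sketched in NumPy (a hypothetical 2D analogue of the torch implementation): each iteration takes the maximum over a kernel_size window, which for a binary mask is a logical OR of shifted copies.

```python
import numpy as np

def dilate_mask_2d(mask: np.ndarray, num_iterations: int, kernel_size: int = 3) -> np.ndarray:
    """Binary dilation via repeated max-over-window (NumPy sketch)."""
    if kernel_size % 2 == 0:
        raise ValueError("kernel_size must be odd")
    pad = kernel_size // 2
    out = mask.astype(bool)
    h, w = out.shape
    for _ in range(num_iterations):
        padded = np.pad(out, pad)  # zero-pad so the output keeps its shape
        step = np.zeros_like(out)
        # Max over each window == OR of all shifted views for a binary mask
        for dy in range(kernel_size):
            for dx in range(kernel_size):
                step |= padded[dy:dy + h, dx:dx + w]
        out = step
    return out
```

Each iteration grows the mask by `kernel_size // 2` voxels in every direction, so `num_iterations` directly controls the dilation radius.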
- neurolit.inpaint_image.conform_nifti(image: Nifti1Image) → Nifti1Image [source]
Conform a NIfTI image to the repository orientation/voxel standard.
- Parameters:
image (Nifti1Image) – Input image that should be conformed.
- Returns:
Conformed image with standardized affine/voxel size.
- Return type:
Nifti1Image
- neurolit.inpaint_image.get_slice_from_volume(volume: Tensor, slice_dim: int, slice_cut: int, thickness: int) → Tensor [source]
Extract a slice from a volume with a specified thickness.
- Parameters:
volume (torch.Tensor) – Tensor representing the volume to slice.
slice_dim (int) – Dimension to slice along.
slice_cut (int) – Index at the center of the slice.
thickness (int) – Total thickness of the slice (number of voxels).
- Returns:
Extracted slice tensor.
- Return type:
torch.Tensor
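A NumPy sketch of the centered extraction (hypothetical; the real function operates on torch tensors): the slab of `thickness` voxels is centered at `slice_cut` along `slice_dim`.

```python
import numpy as np

def get_slab(volume: np.ndarray, slice_dim: int, slice_cut: int, thickness: int) -> np.ndarray:
    """Extract a slab of `thickness` voxels centered at `slice_cut` (NumPy sketch)."""
    start = slice_cut - thickness // 2
    if start < 0 or start + thickness > volume.shape[slice_dim]:
        raise IndexError("slab extends past the volume boundary")
    # Build a full-volume index, narrowing only the requested dimension
    index = [slice(None)] * volume.ndim
    index[slice_dim] = slice(start, start + thickness)
    return volume[tuple(index)]
```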
- neurolit.inpaint_image.inpaint_volume(models: dict[str, Module], val_image: Tensor, mask: Tensor, val_image_masked: Tensor, scale_factor: float | None = None, out_dir: str | Path | None = None, slice_dim: int | None = None, slice_input: bool = True, SAVE_VOLUMES: bool = True, SAVE_IMAGES: bool = True, device: device | str = 'cuda', DDIM: bool = False, val_image_nib: Nifti1Image | None = None, reference_image_nib: Nifti1Image | None = None, pad_multiple: int = 16, num_inference_steps: int = 1000, batch_size: int = 8) → Tensor [source]
Inpaint a volume using the trained diffusion models.
- Parameters:
models (ModelDict) – Dictionary mapping view names to model instances.
val_image (torch.Tensor) – Input image tensor (B, C, H, W, D).
mask (torch.Tensor) – Binary mask tensor of the same shape as val_image.
val_image_masked (torch.Tensor) – Masked version of the input image.
scale_factor (Optional[float], optional) – Scaling factor applied during inference, by default None.
out_dir (Optional[PathLike], optional) – Directory to save outputs, by default None.
slice_dim (Optional[int], optional) – Slice direction for the 2D models, by default None.
slice_input (bool, optional) – Whether to slice the input volume, by default True.
SAVE_VOLUMES (bool, optional) – Whether to persist intermediate volumes, by default True.
SAVE_IMAGES (bool, optional) – Whether to persist intermediate images, by default True.
device (str, optional) – Device identifier (e.g., "cuda"), by default "cuda".
DDIM (bool, optional) – Whether to use DDIM sampling instead of DDPM, by default False.
val_image_nib (Optional[Nifti1Image], optional) – Original NIfTI image used for metadata, by default None.
reference_image_nib (Optional[Nifti1Image], optional) – Reference NIfTI image, by default None.
pad_multiple (int, optional) – Pad spatial dimensions to a multiple of this value, by default 16.
num_inference_steps (int, optional) – Number of diffusion sampling steps, by default 1000.
batch_size (int, optional) – Batch size used during inference, by default 8.
- Returns:
Inpainted volume with the same shape as the input.
- Return type:
torch.Tensor
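The pad_multiple parameter suggests that spatial dimensions are padded up to a multiple of 16 before the models see them (a common requirement for U-Net-style architectures). A hypothetical NumPy sketch of such padding (`pad_to_multiple` is not the module's actual helper):

```python
import numpy as np

def pad_to_multiple(x: np.ndarray, multiple: int = 16) -> np.ndarray:
    """Zero-pad every axis up to the next multiple of `multiple` (hypothetical helper)."""
    # (-size) % multiple is the smallest non-negative padding reaching a multiple
    pad_widths = [(0, (-size) % multiple) for size in x.shape]
    return np.pad(x, pad_widths)
```

The padding would be cropped off again after inference so the output keeps the input's shape, as the Returns entry states.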
- neurolit.inpaint_image.main(argv=None)[source]
Entry point for the inpainting CLI (debug mode).
Parses CLI arguments, prepares models, and runs
inpaint_volume.
Overview
The inpainting module provides direct access to the core inpainting functionality without the full pipeline wrapper.
Main Function
main – Core inpainting functionality.
Usage:
python3 -m neurolit.inpaint_image --input_image T1w.nii.gz \
    --mask_image mask.nii.gz \
    --out_dir output
Examples
Basic Inpainting
from neurolit.inpaint_image import main

# main(argv=None) parses CLI-style arguments, so pass them as a list of strings
main([
    '--input_image', 'T1w.nii.gz',
    '--mask_image', 'lesion_mask.nii.gz',
    '--out_dir', 'output',
    '--device', 'cuda',
    '--batch_size', '16',
    '--num_samples', '100',
])
Custom Parameters
from neurolit.inpaint_image import main

# Use custom parameters, including custom model checkpoints
main([
    '--input_image', 'T1w.nii.gz',
    '--mask_image', 'lesion_mask.nii.gz',
    '--out_dir', 'output',
    '--device', 'cuda',
    '--batch_size', '8',     # Smaller batch size
    '--num_samples', '200',  # More diffusion steps
    '--model_axial', 'custom_models/axial.pt',
    '--model_coronal', 'custom_models/coronal.pt',
    '--model_sagittal', 'custom_models/sagittal.pt',
])
CPU Mode
For systems without GPU:
from neurolit.inpaint_image import main

main([
    '--input_image', 'T1w.nii.gz',
    '--mask_image', 'lesion_mask.nii.gz',
    '--out_dir', 'output',
    '--device', 'cpu',    # Use CPU
    '--batch_size', '4',  # Smaller batch for CPU
    '--num_samples', '100',
])
Integration with Other Tools
Preprocessing Pipeline
from neurolit.inpaint_image import main
from neurolit.data.conform import conform_image

# Step 1: Conform the image to the repository orientation/voxel standard
conform_image(
    input_path='raw_T1w.nii.gz',
    output_path='T1w_conformed.nii.gz'
)

# Step 2: Inpaint the conformed image
main([
    '--input_image', 'T1w_conformed.nii.gz',
    '--mask_image', 'lesion_mask.nii.gz',
    '--out_dir', 'output',
    '--device', 'cuda',
])
Batch Processing
import subprocess
from pathlib import Path

data_dir = Path('data')
output_dir = Path('output')

for subject_dir in data_dir.glob('sub-*'):
    subject_id = subject_dir.name
    cmd = [
        'python3', '-m', 'neurolit.inpaint_image',
        '--input_image', str(subject_dir / 'T1w.nii.gz'),
        '--mask_image', str(subject_dir / 'lesion_mask.nii.gz'),
        '--out_dir', str(output_dir / subject_id),
    ]
    subprocess.run(cmd, check=True)
    print(f"Completed {subject_id}")