Inference Module
- class neurolit.inference.InpaintingInferer(inference_steps, scheduler, diffusion_model)[source]
Bases: object
Coordinate diffusion-based inpainting iterations.
This class manages the inference process for diffusion-based inpainting, coordinating forward and backward diffusion steps to fill in masked regions.
- Parameters:
inference_steps (int) – Number of denoising timesteps to execute.
scheduler (monai.inferers.DiffusionInferer) – Scheduler that defines the diffusion timestep sequence.
diffusion_model (torch.nn.Module) – Model used to predict noise residuals at each timestep.
- __call__(mask: Tensor, image_masked: Tensor, num_resample_steps=10, num_resample_jumps=5, get_intermediates=False, scale_factor=None, *args, **kwargs)[source]
Inpaint masked regions by alternating forward and backward diffusion.
This method performs diffusion-based inpainting by iteratively denoising the image while preserving known regions defined by the mask.
- Parameters:
mask (torch.Tensor) – Binary mask tensor where zeros indicate known voxels.
image_masked (torch.Tensor) – Image tensor with masked regions that need inpainting.
num_resample_steps (int, optional) – Number of resampling loops per timestep, by default 10.
num_resample_jumps (int, optional) – Number of timesteps to skip before resampling, by default 5.
get_intermediates (bool, optional) – Whether to record intermediate outputs, by default False.
scale_factor (Any, optional) – Optional scaling factors passed to the diffusion model.
*args (Any) – Additional positional arguments forwarded to downstream calls.
**kwargs (Any) – Additional keyword arguments forwarded to downstream calls.
- Returns:
Denoised tensor, optionally paired with intermediate buffers.
- Return type:
torch.Tensor or tuple
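The alternating forward/backward scheme with resampling corresponds to RePaint-style inpainting: at every timestep the known region is re-noised forward, the unknown region is denoised, and the two are composited through the mask. A minimal pure-Python sketch of that compositing step (the helper name is hypothetical and the mask convention, zeros for known voxels, follows the docstring above; this is a stand-in, not neurolit code):

```python
# Illustrative sketch of the RePaint-style compositing an inpainting iteration
# performs at each timestep: keep known voxels (mask == 0, per the docstring)
# from the forward-noised input, and take the model's denoised prediction in
# the unknown region. Pure-Python stand-in, not neurolit code.
def composite(mask, denoised, noised_known):
    return [m * d + (1 - m) * k for m, d, k in zip(mask, denoised, noised_known)]

mask = [0.0, 0.0, 1.0]          # voxel 2 is unknown
denoised = [9.0, 9.0, 0.5]      # model prediction (ignored where known)
noised_known = [1.0, 2.0, 9.0]  # forward-noised known image
print(composite(mask, denoised, noised_known))  # [1.0, 2.0, 0.5]
```

The resampling loops then repeat this composite-then-renoise cycle several times per timestep to harmonize the boundary between known and inpainted regions.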
- sample_forward_diffusion(image, t)[source]
Add noise to image at timestep t.
- Parameters:
image (torch.Tensor) – Tensor to perturb for forward diffusion.
t (int | torch.Tensor) – Current timestep identifier.
- Returns:
Noised tensor for the current timestep.
- Return type:
torch.Tensor
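Forward diffusion at a given timestep is conventionally the closed-form noising x_t = √ᾱ_t · x_0 + √(1 − ᾱ_t) · ε. A toy sketch under that assumption (the linear beta schedule and scalar stand-ins are illustrative, not the scheduler's actual values):

```python
import math

# Toy sketch of closed-form forward diffusion, the operation a method like
# sample_forward_diffusion conventionally wraps:
#   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
# The linear beta schedule below is illustrative, not the scheduler's values.
def alpha_bar(t, betas):
    prod = 1.0
    for beta in betas[: t + 1]:
        prod *= 1.0 - beta
    return prod

def noise_at(x0, t, betas, eps):
    ab = alpha_bar(t, betas)
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * eps

betas = [0.001 * (i + 1) for i in range(10)]  # toy linear schedule
x_t = noise_at(1.0, 5, betas, eps=0.3)        # scalar stand-in for a tensor
```

Because the noising is closed-form, any timestep can be reached in one call rather than by iterating the chain.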
- diffusion_forward(image, t)[source]
Apply a single forward diffusion update using the scheduler betas.
- Parameters:
image (torch.Tensor) – Current reconstruction tensor.
t (int | torch.Tensor) – Current timestep index.
- Returns:
Tensor after applying forward diffusion noise.
- Return type:
torch.Tensor
- diffusion_backward(image, t, sf=None)[source]
Denoise image at timestep t with the diffusion model prediction.
- Parameters:
image (torch.Tensor) – Tensor to denoise.
t (torch.Tensor) – Timestep tensor provided to the scheduler.
sf (Any, optional) – Optional scale factors forwarded to the diffusion model, by default None.
- Returns:
Reconstruction for timestep t-1.
- Return type:
torch.Tensor
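A single reverse (denoising) update of the DDPM family can be sketched in pure Python; the coefficients below are illustrative, and the real scheduler defines its own step:

```python
import math

# Toy sketch of one DDPM reverse step, the kind of update diffusion_backward
# delegates to the scheduler after the model predicts the noise residual:
#   x_{t-1} = (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps_pred) / sqrt(alpha_t)
#             + sigma_t * z
# Coefficients here are illustrative; the real scheduler defines its own.
def reverse_step(x_t, eps_pred, t, betas, z=0.0):
    alpha_t = 1.0 - betas[t]
    alpha_bar = 1.0
    for beta in betas[: t + 1]:
        alpha_bar *= 1.0 - beta
    mean = (x_t - betas[t] / math.sqrt(1.0 - alpha_bar) * eps_pred) / math.sqrt(alpha_t)
    return mean + math.sqrt(betas[t]) * z  # z = 0 yields the posterior mean
```

With the noise term z set to zero, the update reduces to the posterior mean, which is the deterministic part of the denoising trajectory.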
- class neurolit.inference.SliceWiseInpaintingInferer(dimensions, diffusion_model, scheduler, inference_steps)[source]
Bases: InpaintingInferer
Process the volume one slice batch at a time along a fixed axis.
- Parameters:
dimensions (int) – Dimension index along which to extract slices.
diffusion_model (torch.nn.Module) – Model that predicts diffusion noise for each slice.
scheduler (monai.inferers.DiffusionInferer) – Scheduler controlling the timestep sequence.
inference_steps (int) – Number of diffusion timesteps to run for each slice.
- get_slice_from_volume(volume, slice_cut, dimension)[source]
Extract a slab centered at slice_cut.
- Parameters:
volume (torch.Tensor) – Volume tensor to sample from.
slice_cut (int) – Center index of the desired slice block.
dimension (int) – Spatial dimension to slice along.
- Returns:
Extracted slice tensor with thickness matching the model channels.
- Return type:
torch.Tensor
- static slice_selector(start_idx, end_idx, dimension)[source]
Build a slab tuple for the requested range.
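A slab tuple of this kind is typically a tuple of `slice` objects with full slices on every axis except the chosen spatial one. The sketch below assumes a 5-D NCDHW-style layout, which is a guess about the implementation rather than a documented fact:

```python
# Illustrative sketch of what a helper like slice_selector may construct: full
# slices on every axis except the chosen spatial one, which gets the slab
# bounds. The 5-D NCDHW axis layout assumed here is a guess, not confirmed.
def slab(start_idx, end_idx, dimension, ndim=5):
    sel = [slice(None)] * ndim
    sel[dimension] = slice(start_idx, end_idx)
    return tuple(sel)

# e.g. volume[slab(10, 13, dimension=2)] would pull a 3-slice-thick slab
sel = slab(10, 13, dimension=2)
```

Indexing a tensor with such a tuple returns a view, so no data is copied when the slab is extracted.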
- get_inference_slices(mask, image_masked, dimension, offset=0)[source]
Collect slice batches and indices for inference along dimension.
- Parameters:
mask (torch.Tensor) – Binary mask tensor for the entire volume.
image_masked (torch.Tensor) – Volume tensor with masked regions to inpaint.
dimension (int) – Spatial axis for extracting slices.
offset (int, optional) – Offset applied to avoid checkerboard artifacts, by default 0.
- Returns:
Batched slices, batched masks, the slice centers, and the pre-populated output tensor.
- Return type:
tuple
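The offset parameter shifts the grid of slab centers so that successive passes do not reuse the same slab boundaries, which is how checkerboard seams are avoided. A toy sketch of that enumeration (the exact stride logic inside get_inference_slices is an assumption):

```python
# Illustrative sketch of enumerating slab centers with an offset: stepping by
# the slab thickness tiles the axis, and a nonzero offset shifts the grid so
# consecutive passes place slab boundaries at different positions. The exact
# stride logic inside get_inference_slices is an assumption.
def slice_centers(extent, thickness, offset=0):
    half = thickness // 2
    return list(range(half + offset, extent - half + 1, thickness))

print(slice_centers(10, 3, offset=0))  # [1, 4, 7]
print(slice_centers(10, 3, offset=1))  # [2, 5, 8]
```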
- __call__(mask: Tensor, image_masked: Tensor, batch_size=1, num_resample_steps=10, num_resample_jumps=5, get_intermediates=False, scale_factor=None, *args, **kwargs)[source]
Inpaint the volume slice-wise.
- Parameters:
mask (torch.Tensor) – Binary mask volume indicating known voxels.
image_masked (torch.Tensor) – Prefiltered volume with masked regions to inpaint.
batch_size (int, optional) – Number of slices to process per batch, by default 1.
num_resample_steps (int, optional) – Resampling loops per timestep, by default 10.
num_resample_jumps (int, optional) – Timesteps between resampling events, by default 5.
get_intermediates (bool, optional) – Whether to collect intermediate reconstructions, by default False.
scale_factor (Any, optional) – Scale factors forwarded to the diffusion model, by default None.
*args (Any) – Additional positional arguments forwarded to the parent class.
**kwargs (Any) – Additional keyword arguments forwarded to the parent class.
- Returns:
Reconstructed volume, optionally with intermediate slices.
- Return type:
torch.Tensor or tuple
- class neurolit.inference.TwoAndHalfDInpaintingInferer(diffusion_model_dict, scheduler, inference_steps)[source]
Bases: SliceWiseInpaintingInferer
Aggregate slice-wise inpainting across axial, sagittal, and coronal views.
- Parameters:
diffusion_model_dict (dict[str, torch.nn.Module]) – Mapping from plane names to their diffusion models.
scheduler (monai.inferers.DiffusionInferer) – Scheduler controlling the diffusion timesteps.
inference_steps (int) – Number of diffusion steps to perform per slice.
- view_agg_inference(image_masked: Tensor, mask: Tensor, batch_size: int, inference_slices: dict, num_resample_steps: int, num_resample_jumps: int, get_intermediates: bool, scale_factor=None, verbose=True)[source]
Inpaint the volume by switching views and offsets.
- Parameters:
image_masked (torch.Tensor) – Input tensor with masked regions to reconstruct.
mask (torch.Tensor) – Binary mask tensor indicating the known region.
batch_size (int) – Number of slices to process per batch.
inference_slices (dict) – Precomputed slice batches and masks for each plane.
num_resample_steps (int) – Number of resampling loops per timestep.
num_resample_jumps (int) – Number of timesteps to skip before each resampling.
get_intermediates (bool) – Whether to collect intermediate reconstructions.
scale_factor (Any, optional) – Scale factors passed to the diffusion model, by default None.
verbose (bool, optional) – Whether to show a progress bar, by default True.
- Returns:
Reconstructed volume or tuple with collected intermediates if requested.
- Return type:
torch.Tensor or tuple
- denoise(t, mask: Tensor, image_masked: Tensor, image_inpainted: Tensor, num_resample_steps=10, num_resample_jumps=5, scale_factor=None)[source]
Refine the unknown regions through backward and forward diffusion.
- Parameters:
t (torch.Tensor) – Current timestep identifier.
mask (torch.Tensor) – Binary mask delineating known voxels.
image_masked (torch.Tensor) – Reference tensor containing the known region.
image_inpainted (torch.Tensor) – Current reconstruction that is being denoised.
num_resample_steps (int, optional) – Number of resampling loops applied per timestep.
num_resample_jumps (int, optional) – Number of timesteps to jump before a resampling iteration.
scale_factor (Any, optional) – Optional scaling factors provided to the diffusion model.
- Returns:
Tensor after the current denoising iteration.
- Return type:
torch.Tensor
- __call__(mask: Tensor, image_masked: Tensor, batch_size=1, num_resample_steps=10, num_resample_jumps=5, get_intermediates=False, scale_factor=None)[source]
Prepare slice batches for all planes and run view-aggregated inference.
- Parameters:
mask (torch.Tensor) – Binary mask tensor that highlights the unknown regions.
image_masked (torch.Tensor) – Volume tensor with the known regions preserved.
batch_size (int, optional) – Batch size for slice inference, by default 1.
num_resample_steps (int, optional) – Resampling iterations per timestep, by default 10.
num_resample_jumps (int, optional) – Timesteps between resampling rounds, by default 5.
get_intermediates (bool, optional) – Whether to collect intermediate outputs, by default False.
scale_factor (Any, optional) – Optional scaling factors forwarded to the diffusion models.
- Returns:
Reconstructed volume, optionally paired with intermediates.
- Return type:
torch.Tensor or tuple
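Combining per-plane reconstructions of the same voxel is the core of the 2.5D scheme. A plain mean over the three views is one plausible aggregation, sketched below; the class may instead alternate planes across timesteps, so treat this purely as an illustration:

```python
# Illustrative sketch of view aggregation: combine the axial, sagittal, and
# coronal reconstructions of the same voxels. A plain mean is shown here; the
# class may instead alternate planes across timesteps, so this is a sketch.
def aggregate(views):
    return [sum(vals) / len(views) for vals in zip(*views)]

axial, sagittal, coronal = [1.0, 2.0], [3.0, 2.0], [2.0, 2.0]
print(aggregate([axial, sagittal, coronal]))  # [2.0, 2.0]
```

The benefit over a single fixed plane is that each view constrains a different pair of in-plane axes, so averaging (or alternating) suppresses direction-dependent slice artifacts.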
- class neurolit.inference.OffsetTwoAndHalfDInpaintingInferer(diffusion_model_dict, scheduler, inference_steps)[source]
Bases: TwoAndHalfDInpaintingInferer
Two-and-a-half-dimensional inferer that alternates slicing offsets.
- Parameters:
diffusion_model_dict (dict[str, torch.nn.Module]) – Mapping from plane names to their diffusion models.
scheduler (monai.inferers.DiffusionInferer) – Scheduler controlling the diffusion timesteps.
inference_steps (int) – Number of timesteps per slice inference.
- view_agg_inference(image_masked: Tensor, mask: Tensor, batch_size: int, num_resample_steps: int, num_resample_jumps: int, get_intermediates: bool, scale_factor=None, verbose=True)[source]
Inpaint the volume using offset slicing to avoid checkerboarding.
- Parameters:
image_masked (torch.Tensor) – Input tensor with masked regions to reconstruct.
mask (torch.Tensor) – Binary mask tensor indicating the known region.
batch_size (int) – Number of slices processed per batch.
num_resample_steps (int) – Number of resampling loops per timestep.
num_resample_jumps (int) – Timesteps between resampling events.
get_intermediates (bool) – Whether to collect intermediate outputs.
scale_factor (Any, optional) – Optional scaling factors for the diffusion model.
verbose (bool, optional) – Whether progress information is displayed.
- Returns:
Reconstructed volume, optionally paired with intermediates.
- Return type:
torch.Tensor or tuple
- __call__(mask: Tensor, image_masked: Tensor, batch_size=1, num_resample_steps=10, num_resample_jumps=5, get_intermediates=False, scale_factor=None)[source]
Prepare slices with alternating offsets and perform view aggregation.
- Parameters:
mask (torch.Tensor) – Binary mask tensor identifying regions to reconstruct.
image_masked (torch.Tensor) – Image tensor with masked regions preserved.
batch_size (int, optional) – Number of slices per batch, by default 1.
num_resample_steps (int, optional) – Resampling loops per timestep, by default 10.
num_resample_jumps (int, optional) – Timesteps between resampling events, by default 5.
get_intermediates (bool, optional) – Whether to collect intermediate outputs, by default False.
scale_factor (Any, optional) – Optional scaling factors passed to the diffusion model.
- Returns:
Reconstructed volume, optionally with intermediates.
- Return type:
torch.Tensor or tuple
- class neurolit.inference.AnomalyInferer(diffusion_model_dict, scheduler, inference_steps)[source]
Bases: TwoAndHalfDInpaintingInferer
Detect anomalies by identifying deviations from the expected noise distribution.
- Parameters:
diffusion_model_dict (dict[str, torch.nn.Module]) – Mapping from plane names to their diffusion models.
scheduler (monai.inferers.DiffusionInferer) – Scheduler controlling the diffusion timesteps.
inference_steps (int) – Number of timesteps to run per reconstruction.
- __call__(image: Tensor, batch_size=1, starting_t=0, num_resample_steps=10, num_resample_jumps=5, get_intermediates=False, scale_factor=None)[source]
Generate anomaly maps by denoising noise-injected copies of image.
- Parameters:
image (torch.Tensor) – Image tensor used as a reference for anomalous regions.
batch_size (int, optional) – Batch size for slice inference, by default 1.
starting_t (int, optional) – Starting timestep for denoising, by default 0.
num_resample_steps (int, optional) – Resampling loops per timestep, by default 10.
num_resample_jumps (int, optional) – Timesteps between resampling rounds, by default 5.
get_intermediates (bool, optional) – Whether to collect intermediate outputs, by default False.
scale_factor (Any, optional) – Optional scaling factors forwarded to the diffusion model.
- Returns:
Denoised tensor, optionally paired with intermediate samples.
- Return type:
torch.Tensor or tuple
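In this style of anomaly detection, the denoised output serves as a "healthy" reconstruction, and the voxel-wise error against the input highlights regions the model could not reproduce. A toy post-processing sketch (the helper and any thresholding are hypothetical, not part of AnomalyInferer):

```python
# Illustrative post-processing of an AnomalyInferer-style result: the
# reconstruction error between the input and the denoised output highlights
# voxels the model could not reproduce from its learned distribution.
# The helper is hypothetical, not part of the class.
def anomaly_map(image, reconstruction):
    return [abs(a - b) for a, b in zip(image, reconstruction)]

image = [0.0, 1.0, 0.25]
recon = [0.0, 0.5, 0.25]  # stand-in for the inferer's denoised output
print(anomaly_map(image, recon))  # [0.0, 0.5, 0.0] -> middle voxel deviates
```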
- view_agg_inference(image: Tensor, batch_size: int, inference_slices: dict, starting_t: int, num_resample_steps: int, num_resample_jumps: int, get_intermediates: bool, scale_factor=None, verbose=True)[source]
Denoise the volume starting from starting_t for anomaly detection.
- Parameters:
image (torch.Tensor) – Image tensor to initialize the denoising process.
batch_size (int) – Number of slices processed per batch.
inference_slices (dict) – Precomputed slices for each anatomical plane.
starting_t (int) – Timestep index from which denoising begins.
num_resample_steps (int) – Number of resampling loops per timestep.
num_resample_jumps (int) – Timesteps between resampling rounds.
get_intermediates (bool) – Whether to collect intermediate outputs.
scale_factor (Any, optional) – Optional scale factors, by default None.
verbose (bool, optional) – Whether to show progress updates.
- Returns:
Denoised volume or tuple with intermediates when requested.
- Return type:
torch.Tensor or tuple
- denoise(t, image: Tensor, num_resample_steps=10, num_resample_jumps=5, scale_factor=None)[source]
Denoise image for starting_t forward iterations.
- Parameters:
t (torch.Tensor) – Current timestep identifier.
image (torch.Tensor) – Tensor being denoised.
num_resample_steps (int, optional) – Number of resampling loops per timestep.
num_resample_jumps (int, optional) – Timesteps between resampling events.
scale_factor (Any, optional) – Optional scaling factors for the diffusion model.
- Returns:
Tensor after the denoising iteration.
- Return type:
torch.Tensor
- class neurolit.inference.DiffusionInfererVINN(scheduler: Scheduler)[source]
Bases: DiffusionInferer
Sampler that extends MONAI’s DiffusionInferer to support VINN conditioning.
The implementation handles VINN scale factors, optional conditioning modes, and exposes helper sampling routines used during training and inference.
- __call__(inputs: Tensor, diffusion_model: Callable[[...], Tensor], noise: Tensor, timesteps: Tensor, scale_factors: Tensor | None = None, condition=None, mode: str = 'crossattn') → Tensor[source]
Implement the VINN forward pass for a training iteration.
- Parameters:
inputs (torch.Tensor) – Input image tensor to corrupt.
diffusion_model (Callable[..., torch.Tensor]) – Model predicting the noise residual.
noise (torch.Tensor) – Random noise tensor with the same shape as inputs.
timesteps (torch.Tensor) – Timestep identifiers for the scheduler.
scale_factors (torch.Tensor, optional) – Optional VINN scale factors to condition the model.
condition (Any, optional) – Conditioning tensor provided to the diffusion model.
mode (str, optional) – Conditioning mode, either crossattn or concat.
- Returns:
Predicted noise residuals at the given timesteps.
- Return type:
torch.Tensor
- sample(input_noise: Tensor, diffusion_model: Callable[[...], Tensor], scheduler, scale_factors: Tensor = None, save_intermediates: bool = False, intermediate_steps: int = 100, conditioning: Tensor = None, mode: str = 'crossattn', verbose: bool = True) → Tensor[source]
Sample from the VINN diffusion model using the provided scheduler.
- Parameters:
input_noise (torch.Tensor) – Random noise tensor with the shape of the desired sample.
diffusion_model (Callable[..., torch.Tensor]) – Model used for sampling.
scheduler (Any) – Diffusion scheduler; if None, uses the inferer’s scheduler.
scale_factors (torch.Tensor, optional) – Optional scale factors passed to the diffusion model.
save_intermediates (bool, optional) – Whether to return intermediate tensors, by default False.
intermediate_steps (int, optional) – Interval between saved intermediates when save_intermediates is True.
conditioning (torch.Tensor, optional) – Optional conditioning tensor for the diffusion model.
mode (str, optional) – Conditioning mode, either crossattn or concat.
verbose (bool, optional) – Whether to show a progress bar during sampling.
- Returns:
Sampled tensor, optionally paired with accumulated intermediates.
- Return type:
torch.Tensor or tuple
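Structurally, a sampler of this kind walks the timestep sequence in reverse from pure noise, applying the scheduler's step at each timestep and optionally recording intermediates at a fixed interval. A minimal pure-Python skeleton of that loop (names are illustrative; the real method also threads scale factors and conditioning through the model call):

```python
# Illustrative skeleton of the reverse-sampling loop a method like `sample`
# runs: start from noise, apply one reverse-diffusion update per timestep,
# and optionally record intermediates every `save_every` steps (analogous to
# the intermediate_steps parameter). Names and structure are assumptions.
def sample_loop(x, step_fn, timesteps, save_every=None):
    intermediates = []
    for t in reversed(list(timesteps)):
        x = step_fn(x, t)  # one reverse-diffusion update
        if save_every and t % save_every == 0:
            intermediates.append(x)
    return (x, intermediates) if save_every else x

# toy "denoiser" that just counts the steps taken
final = sample_loop(0, lambda x, t: x + 1, range(5))
```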
- sample_backward_forward(input_noise: Tensor, precond_img: Tensor, t_start: int, diffusion_model: Callable[[...], Tensor], scheduler, scale_factors: Tensor = None, save_intermediates: bool = False, intermediate_steps: int = 100, conditioning: Tensor = None, mode: str = 'crossattn', verbose: bool = True) → Tensor[source]
Precondition the volume and then sample backward and forward with VINN.
- Parameters:
input_noise (torch.Tensor) – Random noise tensor similar to the desired output shape.
precond_img (torch.Tensor) – Image used to precondition the schedule before sampling begins.
t_start (int) – Timestep index to start the backward-forward sampling from.
diffusion_model (Callable[..., torch.Tensor]) – VINN diffusion model used for prediction.
scheduler (Any) – Scheduler to step through diffusion timesteps.
scale_factors (torch.Tensor, optional) – Optional scaling factors for the diffusion model.
save_intermediates (bool, optional) – Whether to return intermediate tensors, by default False.
intermediate_steps (int, optional) – Interval between recorded intermediates when enabled.
conditioning (torch.Tensor, optional) – Optional conditioning tensor for the diffusion model.
mode (str, optional) – Conditioning mode, either crossattn or concat.
verbose (bool, optional) – Whether to display progress.
- Returns:
Sampled tensor, optionally paired with saved intermediates.
- Return type:
torch.Tensor or tuple
Overview
The inference module exposes the inferer classes that implement neuroLIT’s diffusion-based inpainting pipeline.
Key Concepts
Use the command-line entry points defined in neurolit.cli and neurolit.inpaint_image when you prefer packaged invocations of these inferers.