Usage Guide
This guide covers the basic and advanced usage of neuroLIT.
Basic Usage
Running neuroLIT with Containerization
The most straightforward way to run neuroLIT is using the containerized wrapper script:
./neurolit/scripts/run_lit_containerized.sh \
  --input_image T1w.nii.gz \
  --mask_image lesion_mask.nii.gz \
  --output_directory output_directory \
  --dilate 2
Key Parameters:
--input_image: Path to the T1-weighted MRI image
--mask_image: Path to the lesion mask (binary or multi-class)
--output_directory: Directory where outputs will be saved
--dilate: Number of times to dilate the lesion mask (default: 0)
Running neuroLIT from PyPI
If you installed via pip:
lit-inpainting \
  --input_image T1w.nii.gz \
  --lesion_mask lesion_mask.nii.gz \
  --output_directory output_directory \
  --dilate 2
If the output directory is a FastSurfer subject directory, add --fastsurfer_dir to integrate
the outputs directly into the FastSurfer subject structure. In this mode, the inpainted image is
written to mri/inpainted.lit.nii.gz, the processed mask to mri/mask.lit.nii.gz, and the
original input mask to mri/orig/mask.lit.nii.gz.
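As a quick sanity check after a --fastsurfer_dir run, you can verify that these files were written. A minimal pathlib sketch (subject_directory is a placeholder for your actual --output_directory):

```python
from pathlib import Path

# Placeholder; replace with the directory passed via --output_directory.
subject = Path("subject_directory")

# Files written in --fastsurfer_dir mode, per the description above.
expected = [
    subject / "mri" / "inpainted.lit.nii.gz",
    subject / "mri" / "mask.lit.nii.gz",
    subject / "mri" / "orig" / "mask.lit.nii.gz",
]

missing = [p for p in expected if not p.exists()]
if missing:
    print("Missing outputs:", *missing, sep="\n  ")
```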
Mask Dilation
We recommend enabling mask dilation by default (e.g., --dilate 2) to account for potential undersegmentation.
Dilation enlarges the lesion mask, which may remove additional regions from the analysis.
When to use dilation:
Undersegmentation: Increase dilation
Uncertain boundaries: Use moderate dilation
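To build intuition for what --dilate does, here is a toy illustration of binary dilation using scipy. neuroLIT performs its own dilation internally; this sketch is purely illustrative, not its implementation:

```python
import numpy as np
from scipy.ndimage import binary_dilation

# Toy 2-D "lesion mask": a single voxel in a 7x7 grid.
mask = np.zeros((7, 7), dtype=bool)
mask[3, 3] = True

# Two dilation iterations, analogous to --dilate 2.
dilated = binary_dilation(mask, iterations=2)

print(mask.sum(), "->", dilated.sum())  # 1 -> 13 voxels
```

With the default cross-shaped structuring element, each iteration grows the mask by one voxel in each axis direction, so a single voxel becomes a 13-voxel diamond after two iterations.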
Understanding the Outputs
neuroLIT produces several output files in the inpainting_volumes subdirectory:
Output Files
inpainting_result.nii.gz: The main output with lesions inpainted.
inpainting_mask.nii.gz: The (dilated) mask used for inpainting in the same space as the input.
inpainting_original_image.nii.gz: The conformed original input image.
File Structure
output_directory/
└── inpainting_volumes/
    ├── inpainting_result.nii.gz
    ├── inpainting_mask.nii.gz
    └── inpainting_original_image.nii.gz
With --fastsurfer_dir, these outputs are instead integrated into the FastSurfer subject
directory, for example:
subject_directory/
├── mri/
│   ├── inpainted.lit.nii.gz
│   ├── mask.lit.nii.gz
│   └── orig/
│       ├── mask.lit.nii.gz
│       ├── inpainting_original_image.lit.nii.gz
│       └── inpainting_masked_image.lit.nii.gz
└── scripts/
    └── inpainting_*.lit.png
Note
If the source image was isotropic, the output images will have the same resolution as the input image. The area outside of the lesion mask is preserved, except for robust rescaling of intensity values.
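neuroLIT's exact rescaling is internal to the tool; as a rough, assumed illustration of what percentile-based robust rescaling generally looks like (not neuroLIT's actual code):

```python
import numpy as np

def robust_rescale(img: np.ndarray, lo: float = 1.0, hi: float = 99.0) -> np.ndarray:
    """Clip intensities to the [lo, hi] percentiles and scale to [0, 1].

    Clipping at percentiles makes the rescaling robust to a few
    extreme voxels (e.g., bright artifacts), unlike plain min-max.
    """
    p_lo, p_hi = np.percentile(img, [lo, hi])
    clipped = np.clip(img, p_lo, p_hi)
    return (clipped - p_lo) / (p_hi - p_lo)

rng = np.random.default_rng(0)
img = rng.normal(100.0, 15.0, size=(32, 32))
scaled = robust_rescale(img)
print(scaled.min(), scaled.max())  # 0.0 1.0
```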
Advanced Usage
Direct Inpainting (Python API)
For programmatic access, call the inpainting entry point directly with an argument list:
from neurolit.inpaint_image import main as inpaint_main

inpaint_main([
    "--input_image", "T1w.nii.gz",
    "--mask_image", "lesion_mask.nii.gz",
    "--out_dir", "output",
    "--device", "cuda",  # or "cpu"
])
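The --device auto default resolves to a GPU when one is usable. The sketch below approximates that behavior by probing for nvidia-smi; this probe is our assumption for illustration, not neuroLIT's actual detection logic:

```python
import shutil

def pick_device(requested: str = "auto") -> str:
    """Resolve 'auto' to 'cuda' when an NVIDIA driver appears present, else 'cpu'."""
    if requested != "auto":
        return requested
    return "cuda" if shutil.which("nvidia-smi") else "cpu"

device = pick_device("auto")
print("Using device:", device)
```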
Batch Processing
For processing multiple subjects, create a simple loop:
#!/bin/bash
# List of subjects
subjects=("sub-01" "sub-02" "sub-03")

for sub in "${subjects[@]}"; do
  echo "Processing $sub..."
  lit-inpainting \
    --input_image "data/${sub}/T1w.nii.gz" \
    --lesion_mask "data/${sub}/lesion_mask.nii.gz" \
    --output_directory "output/${sub}" \
    --dilate 2
done
Or using Python:
import subprocess
from pathlib import Path

data_dir = Path("data")
subjects = ["sub-01", "sub-02", "sub-03"]

for subject in subjects:
    print(f"Processing {subject}...")
    cmd = [
        "lit-inpainting",
        "--input_image", str(data_dir / subject / "T1w.nii.gz"),
        "--lesion_mask", str(data_dir / subject / "lesion_mask.nii.gz"),
        "--output_directory", f"output/{subject}",
        "--dilate", "2",
    ]
    subprocess.run(cmd, check=True)
Command-Line Interface Reference
lit-inpainting
Main command to run the neuroLIT inpainting.
lit-inpainting [OPTIONS]
Options:
-i, --input_image PATH Path to input T1w image [required]
-m, --lesion_mask PATH Path to lesion mask [required]
-o, --sd, --out_dir, --output_directory PATH Output directory [required]
--dilate INTEGER Number of dilation iterations [default: 0]
--fastsurfer_dir Treat output_directory as a FastSurfer subject directory
--device [auto|cpu|cuda] Inference device [default: auto]
--batch_size INTEGER Slices per GPU batch [default: 8]; reduce to lower GPU memory usage
-h, --help Show this message and exit
lit-download-models
Download required model checkpoints.
lit-download-models [OPTIONS]
Options:
--force Force re-download even if models exist
--help Show this message and exit
lit-postprocessing
Integrate lesion masks into FastSurfer/FreeSurfer outputs.
lit-postprocessing [OPTIONS]
Options:
--subject-id TEXT Subject ID [required]
--subjects-dir PATH Subjects directory [required]
--skip-segstats Skip volumetric statistics
--skip-surface-masking Skip surface masking
--help Show this message and exit
Best Practices
Input Data
Image Quality: Use high-quality T1-weighted images (0.8-1 mm isotropic preferred)
Mask Quality: Ensure lesion masks are accurate; oversegmentation is better than undersegmentation.
Performance
GPU Usage: Use GPU when available for significant speedup. CPU is feasible for batch or overnight processing; see Expected Runtimes in the README.
Mask Size: Larger lesion masks require longer inference.
Quality Control
Visual Inspection: It is recommended to visually inspect inpainting results.
Boundary Check: Errors can occur at lesion boundaries, especially when the lesion is undersegmented; increasing dilation can help.
Postprocessing
neuroLIT provides tools to integrate lesion masks into FastSurfer/FreeSurfer segmentation and surface outputs.
Unified Postprocessing Script
The recommended way to run postprocessing is using the unified lit-postprocessing command. This script handles mapping the lesion mask to multiple segmentation files, running volume statistics (segstats), and performing surface masking.
# Setup environment
export FASTSURFER_HOME=/path/to/FastSurfer
export FREESURFER_HOME=/path/to/freesurfer
source $FREESURFER_HOME/SetUpFreeSurfer.sh
# Run unified postprocessing
lit-postprocessing \
  --subject-id SUBJECT_ID \
  --subjects-dir /path/to/subjects_dir
Features:
Installation Validation: Automatically checks for FastSurfer or FreeSurfer.
Dynamic Configuration: Uses segstats_config.json for volumetric stats and surfstats_config.json for surface stats.
Surface Stats: Runs mris_anatomical_stats calls defined in surfstats_config.json.
Surface Masking: Automatically processes both hemispheres.
Anatomy Reports: Automatically generates reports (Replaced, Reduced, and Adjacent labels) for mappings defined in segstats_config.json.
Fine-grained Control: Flags like --skip-segstats or --skip-surface-masking are available.
Individual Postprocessing Tools
For granular control, you can run individual scripts:
lesion_to_segmentation.py: Inserts lesion labels into volumetric segmentation and generates anatomy reports.
lesion_to_surface.py: Projects lesion masks onto cortical surfaces.
Common Issues
Poor Inpainting Quality
Problem: Inpainted regions don’t look realistic
Solutions:
Ensure the mask accurately covers the entire lesion
Increase mask dilation (try 3-5 voxels)
Verify that the input is a T1-weighted image
Check input image quality. The module is expected to work with 0.7-1.0 mm isotropic MRI resolutions (not resampled). Other resolutions may work, but are not tested.
Mask Not Applied Correctly
Problem: Output doesn’t show inpainting in expected regions
Solutions:
Verify mask and image are in the same space (open both in same viewer to check).
Check that the mask is binary (or contains the expected labels)
Ensure mask and image have compatible dimensions (same number of voxels in each dimension).
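The dimension and space checks can be scripted. A small helper sketch; with nibabel you would pass img.shape and img.affine, while synthetic numpy values are used here to keep it self-contained:

```python
import numpy as np

def same_space(shape_a, affine_a, shape_b, affine_b, tol=1e-4):
    """True if two volumes share grid dimensions and (approximately) the same affine."""
    return bool(shape_a == shape_b and np.allclose(affine_a, affine_b, atol=tol))

# Synthetic example; in practice compare the T1w image against the lesion mask.
affine = np.eye(4)
print(same_space((256, 256, 256), affine, (256, 256, 256), affine))  # True
print(same_space((256, 256, 256), affine, (256, 256, 180), affine))  # False
```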
Out of Memory Errors
Problem: CUDA out of memory error
Solutions:
Reduce batch size: --batch_size 4 or --batch_size 2
Switch to CPU mode: --device cpu (slower but avoids GPU memory limits)
Process on a machine with more GPU memory
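One pragmatic pattern is to retry with progressively smaller batch sizes. The wrapper below is our suggestion, not part of neuroLIT; only the --batch_size flag comes from the CLI reference above:

```python
import subprocess

def run_with_fallback(base_cmd, start_batch=8):
    """Retry the command with halving --batch_size values until one succeeds."""
    size = start_batch
    while size >= 1:
        result = subprocess.run(base_cmd + ["--batch_size", str(size)])
        if result.returncode == 0:
            return size  # batch size that worked
        size //= 2
    raise RuntimeError("all batch sizes failed")

# Example invocation (paths are placeholders):
# run_with_fallback([
#     "lit-inpainting",
#     "--input_image", "T1w.nii.gz",
#     "--lesion_mask", "lesion_mask.nii.gz",
#     "--output_directory", "output",
# ])
```

This relies on the process exiting with a nonzero status on a CUDA out-of-memory error, which is the typical behavior for a crashed run.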