FastSurferCNN: run_prediction.py

Note

We recommend running the surface pipeline through the standard run_fastsurfer.sh interface!

The FastSurferCNN directory contains all the source code and modules needed to run the scripts. The Python libraries used by the code are listed in requirements.txt. The main script is run_prediction.py; its options are selected and set via the command line:

General

  • --in_dir: Path to the input volume directory (e.g /your/path/to/ADNI/fs60) or

  • --csv_file: Path to csv-file listing input volume directories

  • --t1: name of the T1-weighted MRI volume (e.g. mri_volume.mgz, default: orig.mgz)

  • --conformed_name: name of the conformed MRI volume (the input volume is always conformed first, if it is not already, and the result is saved under the given name; default: orig.mgz)

  • --t: search tag; limits processing to subjects matching the pattern (e.g. sub-* or 1030*…)

  • --sd: Path to output directory (where should predictions be saved). Will be created if it does not already exist.

  • --seg_log: name of the log file (processing information is stored here; if not set, logs are not saved to a file). Saved in the same directory as the predictions.

  • --strip: strip a suffix from the input file path to yield the correct subject name (optional if a full path is defined for --t1)

  • --lut: FreeSurfer-style Color Lookup Table with labels to use in final prediction. Default: ./config/FastSurfer_ColorLUT.tsv

  • --seg: Name of intermediate DL-based segmentation file (similar to aparc+aseg).
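To make the interaction of --t1, --sid, and --strip more concrete, here is a minimal, purely illustrative sketch of how a subject id can be derived from an input path; the function name and the exact stripping logic are assumptions for illustration, not FastSurfer's actual implementation (which takes the second-to-last path element after removing the suffix).

```python
import os

def derive_sid(t1_path: str, remove_suffix: str = "") -> str:
    """Illustrative sketch (not FastSurfer's code): derive a subject id
    from a T1 path. By default the parent directory name is used;
    remove_suffix first strips a trailing directory part such as /mri/,
    mirroring the --strip/--remove_suffix option described above."""
    directory = os.path.dirname(t1_path)          # e.g. /data/subjectX/mri
    if remove_suffix:
        directory = directory.rstrip("/") + "/"
        suffix = remove_suffix.rstrip("/") + "/"
        if directory.endswith(suffix):
            directory = directory[: -len(suffix)]  # drop e.g. /mri/
    return os.path.basename(directory.rstrip("/"))

# Without a suffix to strip, the parent directory becomes the id:
print(derive_sid("/data/subjectX/t1.mgz"))                 # subjectX
# FreeSurfer-style layouts need the /mri/ suffix removed first:
print(derive_sid("/data/subjectX/mri/orig.mgz", "/mri/"))  # subjectX
```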

Checkpoints and configs

  • --ckpt_sag: path to sagittal network checkpoint

  • --ckpt_cor: path to coronal network checkpoint

  • --ckpt_ax: path to axial network checkpoint

  • --cfg_cor: Path to the coronal config file

  • --cfg_sag: Path to the sagittal config file

  • --cfg_ax: Path to the axial config file

Optional commands

  • --clean: clean up segmentation after running it (optional)

  • --device <str>: Device for processing (auto, cpu, cuda, cuda:<device_num>), where cuda means an Nvidia GPU; a specific one can be selected, e.g. “cuda:1”. Default: “auto” (check for a GPU first, then fall back to CPU)

  • --viewagg_device <str>: Device on which the view aggregation should run. Can be auto or a device (see --device). By default (auto), the program checks whether there is enough memory to run the view aggregation on the GPU; the total GPU memory is considered for this decision. If the check fails, or if you override it by setting --viewagg_device cpu, view aggregation runs on the CPU. Likewise, if you set --viewagg_device cuda, view aggregation runs on the GPU without a memory check.

  • --batch_size: Batch size for inference. Default=1
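The auto-selection described for --viewagg_device can be sketched as a small decision function. This is a hypothetical illustration only: the 4 GB threshold, the function name, and its signature are assumptions, not FastSurfer's actual implementation.

```python
def choose_viewagg_device(requested: str, has_cuda: bool,
                          total_gpu_mem_bytes: int,
                          min_mem_bytes: int = 4 * 1024**3) -> str:
    """Illustrative sketch of the --viewagg_device logic described
    above. The 4 GB memory threshold is an assumption for
    illustration, not FastSurfer's actual cutoff."""
    if requested == "cpu":
        return "cpu"            # explicit override, no memory check
    if requested.startswith("cuda"):
        return requested        # explicit GPU request, no memory check
    # "auto": fall back to CPU if there is no GPU or too little memory
    if has_cuda and total_gpu_mem_bytes >= min_mem_bytes:
        return "cuda"
    return "cpu"

print(choose_viewagg_device("auto", True, 8 * 1024**3))  # cuda
print(choose_viewagg_device("auto", True, 2 * 1024**3))  # cpu
print(choose_viewagg_device("cpu", True, 8 * 1024**3))   # cpu
```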

Example Command: Evaluation Single Subject

To run the network on the MRI volume of subjectX in ./data (specified via the --t1 flag, e.g. ./data/subjectX/t1-weighted.nii.gz), change into the FastSurferCNN directory and run the following command:

python3 run_prediction.py --t1 ../data/subjectX/t1-weighted.nii.gz \
--sd ../output \
--t subjectX \
--seg_log ../output/temp_Competitive.log

The output will be stored in:

  • ../output/subjectX/mri/aparc.DKTatlas+aseg.deep.mgz (large segmentation)

  • ../output/subjectX/mri/mask.mgz (brain mask)

  • ../output/subjectX/mri/aseg.auto_noCCseg.mgz (reduced segmentation)

Here, the log file “temp_Competitive.log” will contain the logs of all subjects. If --seg_log is omitted, the logs are written to stdout.

Example Command: Evaluation whole directory

To run the network on the MRI volumes of all subjects in ./data, change into the FastSurferCNN directory and run the following command:

python3 run_prediction.py --in_dir ../data \
--sd ../output \
--seg_log ../output/temp_Competitive.log

The output will be stored in:

  • ../output/subjectX/mri/aparc.DKTatlas+aseg.deep.mgz (large segmentation)

  • ../output/subjectX/mri/mask.mgz (brain mask)

  • ../output/subjectX/mri/aseg.auto_noCCseg.mgz (reduced segmentation)

  • and the log in ../output/temp_Competitive.log

Full commandline interface of FastSurferCNN/run_prediction.py


usage: FastSurferCNN/run_prediction.py [-h] [--t1 ORIG_NAME] [--sid SID]
                                       [--in_dir IN_DIR] [--tag SEARCH_TAG]
                                       [--csv_file CSV_FILE] [--lut LUT]
                                       [--remove_suffix REMOVE_SUFFIX]
                                       [--asegdkt_segfile PRED_NAME]
                                       [--conformed_name CONF_NAME]
                                       [--brainmask_name BRAINMASK_NAME]
                                       [--aseg_name ASEG_NAME] [--sd OUT_DIR]
                                       [--seg_log LOG_NAME] [--qc_log QC_LOG]
                                       [--ckpt_ax CKPT_AX]
                                       [--ckpt_cor CKPT_COR]
                                       [--ckpt_sag CKPT_SAG] [--cfg_ax CFG_AX]
                                       [--cfg_cor CFG_COR] [--cfg_sag CFG_SAG]
                                       [--vox_size VOX_SIZE]
                                       [--conform_to_1mm_threshold CONFORM_TO_1MM_THRESHOLD]
                                       [--device DEVICE]
                                       [--viewagg_device VIEWAGG_DEVICE]
                                       [--batch_size BATCH_SIZE] [--async_io]
                                       [--threads THREADS] [--allow_root]

Named Arguments

--t1

Name of the T1 full-head MRI. Absolute path for a single image, otherwise the common image name. Default: mri/orig.mgz.

Default: “mri/orig.mgz”

--sid

Optional: directly set the subject id to use. Can be used for single-subject input. For multi-subject processing, use --remove_suffix if the sid is not the second-to-last element of the input file path passed to --t1.

--in_dir

Directory in which the input volume(s) are located. Optional if a full path is defined for --t1.

--tag

Search tag to process only certain subjects. If a single image should be analyzed, set the tag to its id. Default: process all.

Default: *

--csv_file

CSV file with subjects to analyze (alternative to --tag).

--lut

Path and name of LUT to use.

Default: <FastSurfer root>/FastSurferCNN/config/FastSurfer_ColorLUT.tsv

--remove_suffix

Optional: remove suffix from path definition of input file to yield correct subject name (e.g. /ses-x/anat/ for BIDS or /mri/ for FreeSurfer input). Default: do not remove anything.

Default: “”

--asegdkt_segfile, --aparc_aseg_segfile

Name of the intermediate DL-based segmentation file (similar to aparc+aseg). When using FastSurfer, this segmentation is already conformed, since inference is always based on a conformed image. Absolute path for a single image, otherwise the common image name. Default: mri/aparc.DKTatlas+aseg.deep.mgz

Default: “mri/aparc.DKTatlas+aseg.deep.mgz”

--conformed_name

Name under which the conformed input image will be saved, in the same directory as the segmentation (the input image is always conformed first, if it is not already conformed). The original input image is saved in the output directory as $id/mri/orig/001.mgz. Default: mri/orig.mgz.

Default: “mri/orig.mgz”

--brainmask_name

Name under which the brainmask image will be saved, in the same directory as the segmentation. The brainmask is created from the aparc_aseg segmentation (dilate 5, erode 4, largest component). Default: mri/mask.mgz.

Default: “mri/mask.mgz”
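The morphology recipe above (dilate 5, erode 4, keep the largest connected component) can be sketched with numpy/scipy. This is an illustration of the general idea only; the structuring element (default 6-connectivity) and other details are assumptions, not FastSurfer's actual implementation.

```python
import numpy as np
from scipy import ndimage

def make_brainmask(seg: np.ndarray) -> np.ndarray:
    """Sketch of the brainmask recipe described above: binarize the
    segmentation, dilate 5x, erode 4x, then keep the largest connected
    component. Structuring element and connectivity are assumptions."""
    mask = ndimage.binary_dilation(seg > 0, iterations=5)
    mask = ndimage.binary_erosion(mask, iterations=4)
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))  # voxels per component
    return labels == (np.argmax(sizes) + 1)

# A large blob survives; a distant stray voxel's component is removed:
seg = np.zeros((30, 30, 30), dtype=np.int16)
seg[12:17, 12:17, 12:17] = 17   # big "brain" blob
seg[2, 2, 2] = 4                # isolated stray voxel
mask = make_brainmask(seg)
print(mask[14, 14, 14], mask[2, 2, 2])  # True False
```

The dilate-then-erode sequence closes small holes and gaps, while the final largest-component step discards disconnected specks.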

--aseg_name

Name under which the reduced aseg segmentation will be saved, in the same directory as the aparc-aseg segmentation (labels of full aparc segmentation are reduced to aseg). Default: mri/aseg.auto_noCCseg.mgz.

Default: “mri/aseg.auto_noCCseg.mgz”
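The reduction from the full aparc segmentation to aseg can be illustrated with the standard FreeSurfer LUT convention, where left cortical parcellation labels live in the 1000s, right cortical labels in the 2000s, and 3/42 are Left-/Right-Cerebral-Cortex. Treat the snippet below as a sketch of the idea only; FastSurfer's actual reduction may differ in detail.

```python
import numpy as np

def reduce_to_aseg(aparc_aseg: np.ndarray) -> np.ndarray:
    """Illustrative sketch (not FastSurfer's code): collapse cortical
    parcellation labels to the generic aseg cortex labels, following
    the FreeSurfer LUT convention (1000s = left cortex, 2000s = right
    cortex; 3/42 = Left-/Right-Cerebral-Cortex)."""
    aseg = aparc_aseg.copy()
    aseg[(aseg >= 1000) & (aseg < 2000)] = 3   # left cortical labels
    aseg[aseg >= 2000] = 42                    # right cortical labels
    return aseg

# Cortical labels collapse to 3/42; subcortical labels pass through:
labels = np.array([1002, 2010, 17, 0])  # l-cortex, r-cortex, hippocampus, bg
print(list(reduce_to_aseg(labels)))     # [3, 42, 17, 0]
```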

--sd

Directory in which evaluation results should be written. Will be created if it does not exist. Optional if a full path is defined for --pred_name.

--seg_log

Absolute path to file in which run logs will be saved. If not set, logs will not be saved.

Default: “”

--qc_log

Absolute path to file in which a list of subjects that failed QC check (when processing multiple subjects) will be saved. If not set, the file will not be saved.

Default: “”

--ckpt_ax

axial checkpoint to load

Default: <FastSurfer root>/checkpoints/aparc_vinn_axial_v2.0.0.pkl

--ckpt_cor

coronal checkpoint to load

Default: <FastSurfer root>/checkpoints/aparc_vinn_coronal_v2.0.0.pkl

--ckpt_sag

sagittal checkpoint to load

Default: <FastSurfer root>/checkpoints/aparc_vinn_sagittal_v2.0.0.pkl

--cfg_ax

Path to the axial config file

Default: <FastSurfer root>/FastSurferCNN/config/FastSurferVINN_axial.yaml

--cfg_cor

Path to the coronal config file

Default: <FastSurfer root>/FastSurferCNN/config/FastSurferVINN_coronal.yaml

--cfg_sag

Path to the sagittal config file

Default: <FastSurfer root>/FastSurferCNN/config/FastSurferVINN_sagittal.yaml

--vox_size

Choose the primary voxel size to process; must be either a number between 0 and 1 (below 0.7 is experimental) or ‘min’ (default). A number forces processing at that specific voxel size; ‘min’ determines the voxel size from the image itself (conforming to the minimum voxel size, or to 1mm if the minimum voxel size is above 0.95mm).

Default: min

--conform_to_1mm_threshold

The voxel size threshold above which images will be conformed to 1mm isotropic, if the --vox_size argument is ‘min’ (the --vox_size default). Contrary to conform.py, the default behavior of FastSurferCNN/run_prediction.py is to resample all images above 0.95mm to 1mm.

Default: 0.95
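The interaction of --vox_size and --conform_to_1mm_threshold can be summarized as a small decision function. This is an illustration of the documented behavior under stated assumptions, not FastSurfer's actual code; the function name is hypothetical.

```python
def target_vox_size(min_vox: float, vox_size="min",
                    threshold: float = 0.95) -> float:
    """Sketch of the --vox_size / --conform_to_1mm_threshold logic
    described above (illustration only, not FastSurfer's code)."""
    if vox_size != "min":
        return float(vox_size)  # a number forces that specific size
    # 'min': use the image's minimum voxel size, but conform to 1mm
    # isotropic if it lies above the threshold (default 0.95mm)
    return 1.0 if min_vox > threshold else min_vox

print(target_vox_size(0.8))                # 0.8  (high-res kept)
print(target_vox_size(1.2))                # 1.0  (conformed to 1mm)
print(target_vox_size(1.2, vox_size=0.7))  # 0.7  (forced)
```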

--device

Select the device to run inference on: cpu, cuda (i.e. an Nvidia GPU), or a specific GPU (e.g. cuda:1). Default: auto

Default: “auto”

--viewagg_device

Define the device on which the view aggregation should run. By default, the program checks whether there is enough memory to run the view aggregation on the GPU (cuda); the total GPU memory is considered for this decision. If the check fails, or if you override it by setting --viewagg_device cpu, view aggregation runs on the CPU. Likewise, if you set --viewagg_device cuda, view aggregation runs on the GPU without a memory check.

Default: “auto”

--batch_size

Batch size for inference. Default=1

Default: 1

--async_io

Allow asynchronous file operations (default: off). Note that this may affect the order of messages in the log, but it can speed up segmentation, particularly on slow file systems.

Default: False

--threads

Number of threads to use (defaults to the number of available hardware threads).

Default: 4

--allow_root

Allow execution as root user.

Default: False