
AI-Powered Segmentation

AI-powered segmentation leverages deep learning models to automatically identify and delineate anatomical structures. This tutorial covers installation, configuration, and usage of the four supported frameworks: TotalSegmentator, nnU-Net, MONAI, and Cellpose-SAM.

AI Segmentation - TotalSegmentator

Estimated time: 40 minutes

Prerequisites:

  • Volvicon installed with valid license
  • Administrative privileges for framework installation
  • Stable internet connection
  • GPU recommended (NVIDIA CUDA) but CPU supported
Research Use Only

AI segmentation tools are intended for research purposes only and are not approved for clinical or diagnostic use by any regulatory body. Results should always be reviewed and validated by qualified professionals.


Understanding AI Segmentation

How It Works

Deep learning segmentation models:

  1. Process the input image through neural network layers
  2. Classify each voxel into anatomical categories
  3. Output segmentation masks for identified structures

These models are trained on large annotated datasets, learning to recognize patterns that distinguish different tissues and organs.
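The three steps above can be sketched with plain NumPy. This is a toy illustration, not any framework's actual code: random numbers stand in for the probability maps a real network would produce.

```python
# Toy illustration of voxel-wise classification; random numbers stand in
# for a real network's per-class output.
import numpy as np

rng = np.random.default_rng(0)

# Fake per-class probability maps for a 2x2x2 volume
# (classes: background, liver, kidney)
probs = rng.random((3, 2, 2, 2))
probs /= probs.sum(axis=0, keepdims=True)   # normalise: probabilities sum to 1 per voxel

labels = probs.argmax(axis=0)               # step 2: most likely class per voxel
masks = [(labels == c).astype(np.uint8) for c in range(3)]   # step 3: one binary mask per class
```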

Available Frameworks

| Framework | Specialty | Structures | Modality |
| --- | --- | --- | --- |
| TotalSegmentator | Whole-body CT | 117 structures | CT |
| nnU-Net | Self-configuring | Depends on model | Various |
| MONAI | Medical AI platform | Depends on model | Various |
| Cellpose-SAM | Instance segmentation | Per-object instances | Any (microscopy, CT, MRI) |

Installing AI Frameworks

Method 1: Application-Based Installation

  1. Launch Volvicon as Administrator:

    • Right-click the Volvicon shortcut
    • Select Run as administrator
  2. Access AI Segmentation:

    • Navigate to Advanced → AI → AI Segmentation
  3. Select the framework you want to install.

  4. Click Install:

    • The application downloads and configures dependencies
    • Installation progress is displayed
    • Wait for completion (10-30 minutes depending on connection)
  5. Restart Volvicon when prompted.

Method 2: Manual Installation

If application-based installation fails:

  1. Navigate to the scripts directory:

    <Volvicon Installation>\scripts\python\
  2. Run the installation batch file as Administrator:

    • totalsegmentator_venv_windows_setup.bat
    • nnunet_venv_windows_setup.bat
    • monai_venv_windows_setup.bat
    • cellpose_venv_windows_setup.bat
  3. Wait for completion without closing the command window.

  4. Restart Volvicon.

Troubleshooting Installation

| Issue | Solution |
| --- | --- |
| Installation fails | Delete the virtual environment folder and retry |
| Permission errors | Ensure you are running as Administrator |
| Network errors | Check firewall settings and verify the internet connection |
| Corrupted environment | Remove <Volvicon Installation Directory>\ai\frameworks\venvs\<framework>-env (e.g., cellpose-env) and reinstall |

TotalSegmentator

TotalSegmentator provides automatic segmentation of 117 anatomical structures in whole-body CT scans.

Supported Structures

TotalSegmentator segments:

  • Bones — Vertebrae, ribs, pelvis, skull, long bones
  • Organs — Heart, lungs, liver, kidneys, spleen, pancreas
  • Vessels — Aorta, vena cava, pulmonary vessels
  • Muscles — Major muscle groups
  • Other — Airways, fat, skin

Step-by-Step Workflow

  1. Load your CT data:

    • Import DICOM or NIfTI CT scan
    • Ensure proper spacing and orientation
  2. Open AI Segmentation:

    • Navigate to Advanced → AI → AI Segmentation
  3. Select TotalSegmentator from the framework dropdown.

  4. Configure settings:

    | Setting | Description |
    | --- | --- |
    | Task | Segmentation task (total, lung_vessels, body, etc.) |
    | Device | CPU or GPU (GPU is faster if available) |
    | Fast mode | Quicker processing with slightly reduced accuracy |
  5. Select the input volume.

  6. Click Run Segmentation.

  7. Monitor progress in the output log.

  8. Review results:

    • Segmentation masks are automatically imported
    • Each structure appears as a separate mask
    • Review in slice views and 3D

TotalSegmentator Tasks

| Task | Description |
| --- | --- |
| total | Complete 117-structure segmentation |
| lung_vessels | Detailed lung vessel segmentation |
| cerebral_bleed | Intracranial hemorrhage detection |
| hip_implant | Hip implant segmentation |
| coronary_arteries | Coronary artery segmentation |
| body | Body outline only |
| pleural_pericard_effusion | Pleural/pericardial effusion detection |

Processing Time

Full "total" segmentation takes 1-5 minutes on GPU, 10-30 minutes on CPU. Use "fast" mode for quicker results during exploration.
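For reference, the same run can be expressed through TotalSegmentator's own command-line tool. This is a hedged sketch: the file and directory names are placeholders, and the command only executes when the CLI is actually on the PATH.

```python
# Hedged sketch: the GUI run above expressed via TotalSegmentator's CLI.
# File names are placeholders; flags mirror the Task/Fast-mode settings.
import shutil
import subprocess

cmd = [
    "TotalSegmentator",
    "-i", "ct.nii.gz",          # input CT volume (placeholder name)
    "-o", "segmentations",      # output directory for per-structure masks
    "--task", "total",          # complete 117-structure segmentation
    "--fast",                   # quicker, slightly less accurate
]
if shutil.which("TotalSegmentator"):    # only run when the CLI is installed
    subprocess.run(cmd, check=True)
```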


nnU-Net

nnU-Net (no-new-Net) is a self-configuring deep learning framework that adapts to different segmentation tasks.

Characteristics

  • Self-configuring — Automatically optimizes for each task
  • Flexible — Works with various imaging modalities
  • Custom models — Can use pre-trained or custom models

Step-by-Step Workflow

  1. Open AI Segmentation:

    • Navigate to Advanced → AI → AI Segmentation
  2. Select nnU-Net from the framework dropdown.

  3. Configure settings:

    | Setting | Description |
    | --- | --- |
    | Trained model directory | Directory containing pre-trained models |
    | Dataset | Pre-trained model selection |
    | Configuration | 2d, 3d_fullres, 3d_lowres, 3d_lowres_high (model-dependent), 3d_cascade_fullres |
    | Device | CPU or GPU |
    | npp / nps | Worker-process counts for preprocessing/export (0/0 = sequential mode) |
  4. Select the input volume.

  5. Click Run Segmentation.

  6. Review results after processing completes.

Using Custom nnU-Net Models

To use models trained outside Volvicon:

  1. Place model files in the appropriate directory.
  2. Configure the model path in settings.
  3. Ensure model compatibility with input data format.

nnU-Net Stability Guidance (Windows)

note

For some models (including dental pipelines such as ToothFairy variants), inference can complete successfully while the multiprocessing export stage fails on Windows (WinError 87, SpawnPoolWorker).

  • npp=0 and nps=0 are the default values in Volvicon, which runs nnU-Net in sequential mode. If you have previously modified these settings, reset them to 0/0 for maximum stability on Windows.
  • If GPU memory is limited, nnU-Net may keep model inference on CUDA but move result arrays to CPU. This is expected behavior.
  • Keep fold selection minimal (fold 0) unless you explicitly require ensembling across folds.
tip

Use AI Segmentation as the parameter reference, and Batch Processing and Automation when scaling the same workflow to multiple volumes.
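An equivalent nnU-Net v2 command line, for orientation, looks roughly like the sketch below. The dataset name is a placeholder, and the `-npp 0 -nps 0` pair mirrors the sequential-mode guidance above; the command only executes when nnU-Net is installed.

```python
# Hedged sketch of an nnU-Net v2 prediction command; the dataset name is a
# placeholder. -npp 0 -nps 0 runs preprocessing/export sequentially (Windows-safe).
import shutil
import subprocess

cmd = [
    "nnUNetv2_predict",
    "-i", "input_folder",        # folder of input volumes (placeholder)
    "-o", "output_folder",       # destination for predicted masks (placeholder)
    "-d", "Dataset001_Example",  # hypothetical dataset/model name
    "-c", "3d_fullres",          # configuration, as in the settings table
    "-f", "0",                   # single fold, per the stability guidance
    "-npp", "0", "-nps", "0",    # sequential preprocessing/export workers
]
if shutil.which("nnUNetv2_predict"):    # only run when nnU-Net is installed
    subprocess.run(cmd, check=True)
```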


MONAI

MONAI (Medical Open Network for AI) provides a collection of pre-trained models for various medical imaging tasks.

Available Models

MONAI supports multiple pre-trained models:

  • Organ segmentation
  • Tumor detection
  • Anatomical landmark identification
  • Custom trained models

Step-by-Step Workflow

  1. Open AI Segmentation:

    • Navigate to Advanced → AI → AI Segmentation
  2. Select MONAI from the framework dropdown.

  3. Configure settings:

    | Setting | Description |
    | --- | --- |
    | Models directory | Select the pre-trained models directory |
    | Bundle | Select from available pre-trained models |
    | Device | CPU or GPU |
  4. Select the input volume.

  5. Click Run Segmentation.

  6. Review results.
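Behind the Bundle setting, MONAI distributes pre-trained models as "bundles". A hedged sketch of fetching one by name through MONAI's Python API follows; the bundle name is an example from the MONAI model zoo and may change, and the download is skipped unless MONAI is installed and explicitly enabled.

```python
# Hedged sketch: downloading a pre-trained MONAI bundle by name.
# The bundle name is an example and may change; network use is opt-in here.
import os

bundle_name = "spleen_ct_segmentation"   # example bundle (organ segmentation)

try:
    from monai.bundle import download
except ImportError:
    download = None                      # MONAI not installed: skip the download

if download is not None and os.environ.get("RUN_MONAI_DOWNLOAD"):
    download(name=bundle_name, bundle_dir="models")   # fetches config + weights
```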


Cellpose-SAM

Cellpose-SAM is a general-purpose instance segmentation model that identifies and labels individual objects (cells, nuclei, particles) rather than tissue classes. Each detected object receives a unique integer label, making it ideal for object counting and morphometric analysis.

Characteristics

  • Instance segmentation — Each detected object gets a unique label, not a class label
  • Modality-agnostic — Works with microscopy, CT, MRI, and other imaging types
  • 2D and 3D — Supports single-slice and volumetric segmentation
  • Zero-shot — No custom model training required; uses a general pre-trained model
  • Automatic download — Model weights are cached on first use (~100 MB)

Step-by-Step Workflow (3D)

  1. Open AI Segmentation:

    • Navigate to Advanced → AI → AI Segmentation
  2. Select Cellpose-SAM from the framework dropdown.

  3. Configure settings:

    | Setting | Description |
    | --- | --- |
    | Mode | 3D for volumetric, 2D for single-slice |
    | Device | CPU or GPU |
    | Flow Threshold | Mask quality filter (default: 0.4) |
    | Cell Prob Threshold | Detection sensitivity (default: 0.0) |
    | Diameter | Expected object size in pixels (0 = auto) |
    | Min Size | Minimum object area in pixels (default: 15) |
    | Exclude on Edges | Remove objects touching the image boundary |
    | Anisotropy | Z-to-XY spacing ratio (3D only) |
    | Stitch Threshold | Slice stitching IoU (0 = native 3D) |
  4. Select the input volume.

  5. Click Run Segmentation.

  6. Review results:

    • Each detected object appears with a unique label value
    • Use Split Mask to separate instances into individual masks
    • Generate 3D previews of individual objects
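The per-object labelling produced above can be inspected with plain NumPy: label 0 is background, and every other integer is one detected instance. The label image below is a toy example.

```python
# Counting and splitting instance labels: label 0 is background, each other
# integer is one object. A rough "Split Mask" equivalent in NumPy.
import numpy as np

labels = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 2],
                   [3, 0, 0, 2]])         # toy 2D label image with 3 objects

ids = np.unique(labels)
ids = ids[ids != 0]                       # drop the background label
count = len(ids)                          # number of detected objects
instances = {int(i): (labels == i).astype(np.uint8) for i in ids}
```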

Step-by-Step Workflow (2D)

  1. Navigate to the slice you want to segment.
  2. Select Cellpose-SAM and set Mode to 2D.
  3. Set Slice Index to -1 (active slice) or specify a slice number.
  4. Click Run Segmentation.
  5. Review results — only the selected slice contains labels.

Parameter Tuning

| Goal | Adjustment |
| --- | --- |
| Detect more objects | Lower Cell Prob Threshold (e.g., −2.0) |
| Reduce false positives | Raise Cell Prob Threshold (e.g., 2.0) |
| Stricter mask boundaries | Lower Flow Threshold (e.g., 0.2) |
| Accept uncertain detections | Raise Flow Threshold (e.g., 0.8) |
| Remove small fragments | Increase Min Size |
| Exclude edge objects | Enable Exclude on Edges |
| Faster 3D processing | Use Stitch Mode (Stitch Threshold ≈ 0.9) |

First Run

The first execution downloads the model automatically (~100 MB). Subsequent runs are faster since the model is cached locally in ~/.cellpose/models/.
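For readers scripting outside Volvicon, a hedged sketch of the underlying Cellpose Python API follows (cellpose ≥ 4 ships the Cellpose-SAM weights). Parameter names follow the settings table; the anisotropy value is an assumption, and the call is skipped unless cellpose is installed and explicitly enabled, since it would also trigger the model download.

```python
# Hedged sketch of the Cellpose Python API behind this workflow.
# The actual call is opt-in: it requires cellpose and downloads weights.
import os
import numpy as np

volume = np.random.rand(8, 64, 64).astype(np.float32)   # toy Z, Y, X volume

try:
    from cellpose import models
except ImportError:
    models = None                                        # cellpose not installed

if models is not None and os.environ.get("RUN_CELLPOSE"):
    model = models.CellposeModel(gpu=False)              # downloads weights on first use
    masks, flows, styles = model.eval(
        volume,
        diameter=None,              # None/0 = automatic size estimation
        flow_threshold=0.4,         # mask quality filter (default)
        cellprob_threshold=0.0,     # detection sensitivity (default)
        do_3D=True,                 # native 3D mode (Stitch Threshold = 0)
        anisotropy=2.0,             # Z-to-XY spacing ratio (assumed value)
        min_size=15,                # drop objects smaller than 15 pixels
    )
```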


Post-Processing AI Results

AI segmentation outputs typically require review and refinement.

Reviewing Results

  1. Examine each structure:

    • Scroll through slices to verify accuracy
    • Generate 3D previews for spatial verification
    • Compare with the original volume
  2. Identify issues:

    • Missing regions
    • Incorrectly labeled areas
    • Boundary inaccuracies

Common Refinements

| Issue | Solution |
| --- | --- |
| Small holes | Use Cavity Fill or Fill Holes |
| Noisy boundaries | Apply Smooth Mask |
| Missing regions | Use Edit Mask to add |
| Incorrect regions | Use Edit Mask to remove |
| Multiple labels merged | Use Split Mask or manual separation |
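For scripted refinement, rough equivalents of Cavity Fill and Smooth Mask exist in SciPy. This is a sketch on a toy binary mask, not Volvicon's implementation.

```python
# Rough equivalents of Cavity Fill and Smooth Mask using SciPy, on a toy mask.
import numpy as np
from scipy import ndimage

mask = np.zeros((20, 20, 20), dtype=np.uint8)
mask[5:15, 5:15, 5:15] = 1
mask[9:11, 9:11, 9:11] = 0                     # internal cavity

filled = ndimage.binary_fill_holes(mask).astype(np.uint8)   # "Cavity Fill"
smoothed = ndimage.median_filter(filled, size=3)            # "Smooth Mask" (median filter)
```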

Combining AI with Manual Editing

Optimal workflow:

  1. Run AI segmentation for initial masks.
  2. Review results systematically.
  3. Apply automated refinements (smooth, fill holes).
  4. Manually correct remaining errors.
  5. Validate final segmentation.

Performance Optimization

GPU Acceleration

For faster processing:

  1. Ensure NVIDIA GPU with CUDA support
  2. Install appropriate CUDA drivers
  3. Select GPU in device settings
  4. Monitor GPU memory usage
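The usual programmatic check for step 3 uses PyTorch, which the frameworks above build on. A minimal sketch that falls back to CPU when PyTorch or CUDA is unavailable:

```python
# Minimal device-selection sketch: prefer CUDA, fall back to CPU.
import importlib.util

def pick_device() -> str:
    """Return 'cuda' when an NVIDIA GPU is usable through PyTorch, else 'cpu'."""
    if importlib.util.find_spec("torch") is None:
        return "cpu"                 # PyTorch not installed
    import torch
    return "cuda" if torch.cuda.is_available() else "cpu"

device = pick_device()
```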

Memory Management

| Tip | Benefit |
| --- | --- |
| Process cropped data | Reduces memory requirements |
| Close other applications | Frees system resources |
| Use fast mode | Reduces computational load |
| Process one dataset at a time | Prevents memory conflicts |

Batch Processing

For multiple datasets, see Batch Processing and Automation tutorial.


Practical Exercise: Whole-Body CT Segmentation

Part 1: Run TotalSegmentator

  1. Load a whole-body or abdominal CT scan.
  2. Navigate to Advanced → AI → AI Segmentation.
  3. Select TotalSegmentator.
  4. Choose total task.
  5. Select GPU if available, otherwise CPU.
  6. Enable Fast mode for initial testing.
  7. Click Run Segmentation.
  8. Wait for completion.

Part 2: Review Results

  1. Examine the created masks in Object Browser.
  2. Enable visibility for key structures (liver, kidneys, spine).
  3. Scroll through slice views to verify accuracy.
  4. Generate 3D previews for selected structures.
  5. Note any obvious errors.

Part 3: Refine Selected Structures

  1. Select a structure that needs refinement (e.g., liver).
  2. Apply Smooth Mask with median filter.
  3. Use Edit Mask to correct any significant errors.
  4. Apply Cavity Fill if internal voids exist.

Part 4: Generate Surfaces

  1. Select refined masks.
  2. Use Mask to Surface to generate meshes.
  3. Apply appropriate smoothing and decimation.
  4. Export surfaces if needed.

Practical Exercise: Instance Segmentation with Cellpose-SAM

Part 1: Run Cellpose-SAM

  1. Load a volumetric image containing distinguishable objects (e.g., cells, particles, or structures).
  2. Navigate to Advanced → AI → AI Segmentation.
  3. Select Cellpose-SAM.
  4. Set Mode to 3D.
  5. Leave Diameter at 0 for automatic estimation.
  6. Set Device to GPU if available.
  7. Click Run Segmentation.
  8. Wait for completion.

Part 2: Review Instance Labels

  1. Examine the output mask in slice views — each object has a unique label value.
  2. Use Split Mask to separate instances into individual masks.
  3. Count the number of detected objects.
  4. Scroll through slices to verify that distinct objects are correctly separated.

Part 3: Refine Detection Parameters

  1. If too few objects are detected, lower Cell Prob Threshold to −2.0 and re-run.
  2. If too many false positives appear, raise Cell Prob Threshold to 2.0.
  3. Enable Exclude on Edges to remove incomplete boundary objects.
  4. Adjust Min Size to filter out noise.

Part 4: Generate 3D Surfaces

  1. Select the largest instance mask.
  2. Generate a 3D preview.
  3. Convert the preview to a surface.
  4. Apply remeshing and surface cleanup.
  5. Export the surface in STL format.

Best Practices

Before Running AI Segmentation

  • Verify correct image orientation
  • Check that spacing metadata is accurate
  • Ensure adequate image quality
  • Crop to region of interest if processing time is a concern

During Processing

  • Monitor progress for errors
  • Don't run multiple simultaneous segmentations
  • Ensure sufficient disk space for temporary files

After Processing

  • Always review AI results manually
  • Document any corrections made
  • Validate against expert knowledge
  • Consider inter-observer validation for research

For Research Use

  • Document the specific model version used
  • Record all processing parameters
  • Maintain audit trail of modifications
  • Cite appropriate references for the AI framework
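One lightweight way to keep the audit trail suggested above is a small JSON provenance record per run. Field names and values below are illustrative, not a Volvicon format.

```python
# Illustrative JSON provenance record for a segmentation run.
# Field names and values are suggestions, not a Volvicon format.
import datetime
import json

record = {
    "framework": "TotalSegmentator",
    "task": "total",
    "fast_mode": True,
    "device": "gpu",
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "corrections": ["filled liver cavity", "smoothed kidney boundary"],
}
with open("segmentation_provenance.json", "w") as f:
    json.dump(record, f, indent=2)
```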

Troubleshooting

Framework not available

  • Verify installation completed successfully
  • Restart Volvicon after installation
  • Check for error messages in installation log

Segmentation fails

  • Verify input data format (NIfTI for most AI tools)
  • Check available memory
  • Try CPU mode if GPU fails
  • Review error messages in output log

Results are poor

  • Verify correct modality (CT for TotalSegmentator)
  • Check image quality and contrast
  • Ensure proper orientation
  • Try different model or task
  • For Cellpose-SAM: adjust Flow Threshold and Cell Prob Threshold for sensitivity

Out of memory

  • Use fast mode
  • Crop input volume
  • Close other applications
  • Process on system with more RAM

Next Steps

With AI segmentation skills, continue to the Batch Processing and Automation tutorial to scale these workflows to multiple datasets.

