# Automatic multi-organ segmentation on abdominal CT with dense v-networks

This page describes how to acquire and use the network described in

Eli Gibson, Francesco Giganti, Yipeng Hu, Ester Bonmati, Steve
Bandula, Kurinchi Gurusamy, Brian Davidson, Stephen P. Pereira,
Matthew J. Clarkson and Dean C. Barratt (2017), Automatic multi-organ
segmentation on abdominal CT with dense v-networks (submitted to IEEE TMI)

This network segments eight organs on abdominal CT, comprising the
gastrointestinal tract (esophagus, stomach, duodenum), the pancreas, and
nearby organs (liver, gallbladder, spleen, left kidney).

## Downloading model zoo files

The network weights and example data can be downloaded with the command
```bash
net_download dense_vnet_abdominal_ct_model_zoo
```

(Replace `net_download` with `python net_download.py` if you cloned the NiftyNet repository.)

Alternatively, you can manually download:
- [model zoo code](https://www.dropbox.com/s/ptu46os7lfmj0dl/dense_vnet_abdominal_ct_code_config.tar.gz?dl=1)
- [trained network weights](https://www.dropbox.com/s/zvc8stqo6womvou/dense_vnet_abdominal_ct_weights.tar.gz?dl=1)
- [example data](https://www.dropbox.com/s/5fk0m9v12if5da9/dense_vnet_abdominal_ct_model_zoo_data.tar.gz?dl=1)

And unzip:
- `dense_vnet_abdominal_ct_code_config.tar.gz` into `~/niftynet/extensions/dense_vnet_abdominal_ct/`
- `dense_vnet_abdominal_ct_weights.tar.gz` into `~/niftynet/models/dense_vnet_abdominal_ct/`
- `dense_vnet_abdominal_ct_model_zoo_data.tar.gz` into `~/niftynet/data/dense_vnet_abdominal_ct/`

Make sure that the extensions directory (`~/niftynet/extensions/` by default) is on the `PYTHONPATH`.
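
For reference, the manual steps might look like the sketch below on a Linux/macOS shell. The choice of `wget` and the assumption that each archive unpacks directly into its target directory are assumptions; adapt the commands to your environment.

```bash
# Create the default target directories (sketch; adjust paths if you changed the NiftyNet home).
mkdir -p ~/niftynet/extensions/dense_vnet_abdominal_ct \
         ~/niftynet/models/dense_vnet_abdominal_ct \
         ~/niftynet/data/dense_vnet_abdominal_ct

# Download the three archives listed above.
wget -O dense_vnet_abdominal_ct_code_config.tar.gz "https://www.dropbox.com/s/ptu46os7lfmj0dl/dense_vnet_abdominal_ct_code_config.tar.gz?dl=1"
wget -O dense_vnet_abdominal_ct_weights.tar.gz "https://www.dropbox.com/s/zvc8stqo6womvou/dense_vnet_abdominal_ct_weights.tar.gz?dl=1"
wget -O dense_vnet_abdominal_ct_model_zoo_data.tar.gz "https://www.dropbox.com/s/5fk0m9v12if5da9/dense_vnet_abdominal_ct_model_zoo_data.tar.gz?dl=1"

# Unpack each archive into its directory (assumes the archives contain the files directly).
tar -xzf dense_vnet_abdominal_ct_code_config.tar.gz -C ~/niftynet/extensions/dense_vnet_abdominal_ct/
tar -xzf dense_vnet_abdominal_ct_weights.tar.gz -C ~/niftynet/models/dense_vnet_abdominal_ct/
tar -xzf dense_vnet_abdominal_ct_model_zoo_data.tar.gz -C ~/niftynet/data/dense_vnet_abdominal_ct/

# Make the extensions directory importable by NiftyNet.
export PYTHONPATH=~/niftynet/extensions/:$PYTHONPATH
```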

## Generating segmentations for example data

Generate segmentations for the included example image with the command
```bash
net_segment inference -c ~/niftynet/extensions/dense_vnet_abdominal_ct/config.ini
```
(Replace `net_segment` with `python net_segment.py` if you cloned the NiftyNet repository.)

## Generating segmentations for your own data

### Preparing data
The network takes as input abdominal CT images cropped to the region of interest: transversely to the rib cage and abdominal cavity, and cranio-caudally from the superior extent of the liver or spleen to the inferior extent of the liver or kidneys.

Images should be in Hounsfield units, with voxels outside the CT
field-of-view set to -1000.

### Editing the configuration file

Make a copy of the configuration file `~/niftynet/extensions/dense_vnet_abdominal_ct/config.ini` to a location of your choice.
You may need to change the `path_to_search` and `filename_contains` lines in the configuration file to point to the correct paths for your images. You can also change the `save_seg_dir` setting to change where the segmentations are saved.
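
The relevant entries typically look something like the excerpt below. The section name `[ct]` and the example values are illustrative assumptions; check the downloaded `config.ini` for the exact section names and keep the rest of the file unchanged.

```ini
# Illustrative excerpt only; the input-source section name may differ in config.ini.
[ct]
# directory containing your CT images
path_to_search = /path/to/your/ct_images
# substring identifying the image files to segment
filename_contains = CT

[INFERENCE]
# directory where output segmentations are written
save_seg_dir = /path/to/output_segmentations
```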

### Generating segmentations

Generate segmentations with the command `net_segment inference -c edited_config.ini`, replacing `edited_config.ini` with the path to the new configuration file. Segmentations will be saved in the path specified by the `save_seg_dir` setting with names corresponding to your input file names, with a `_niftynet_out.nii.gz` suffix.
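
For example, assuming the edited configuration is saved at `~/my_configs/edited_config.ini` and `save_seg_dir` is set to `~/segmentation_output` (both paths are hypothetical), the call and resulting output would look like:

```bash
# Run inference with the edited configuration (paths are illustrative).
net_segment inference -c ~/my_configs/edited_config.ini

# An input image named patient01.nii.gz would then produce
# ~/segmentation_output/patient01_niftynet_out.nii.gz
ls ~/segmentation_output/
```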



Please Note:

* To run the segmentation efficiently, a GPU with at least 10 GB of memory is required.

* Please change the environment variable `CUDA_VISIBLE_DEVICES` to an appropriate value if necessary (e.g., `export CUDA_VISIBLE_DEVICES=0` will allow NiftyNet to use the `0`-th GPU).