# Automatic multi-organ segmentation on abdominal CT with dense v-networks
This page describes how to acquire and use the network described in
Eli Gibson, Francesco Giganti, Yipeng Hu, Ester Bonmati, Steve
Bandula, Kurinchi Gurusamy, Brian Davidson, Stephen P. Pereira,
Matthew J. Clarkson and Dean C. Barratt (2017), Automatic multi-organ
segmentation on abdominal CT with dense v-networks (submitted to IEEE TMI)
This network segments eight organs on abdominal CT, comprising the
gastointestinal tract (esophagus, stomach, duodenum), the pancreas, and
nearby organs (liver, gallbladder, spleen, left kidney).
## Downloading model zoo files
The network weights and example data can be downloaded with the command
`net_download dense_vnet_abdominal_ct_model_zoo dense_vnet_abdominal_ct_model_zoo_data`. Replace `net_download` with `python net_download.py` if you cloned the NiftyNet repository.
Make sure that the model directory (`~/niftynet/models/` by default) is on the PYTHONPATH.
## Generating segmentations for example data
Generate segmentations for the included example image with the command `net_segment inference -c ~/niftynet/models/dense_vnet_abdominal_ct_model_zoo/config.ini`. Replace `net_segment` with `python net_segment.py` if you cloned the NiftyNet repository. Replace `~/niftynet/` if you specified a custom download path in the `net_download` command.
## Generating segmentations for your own data
### Preparing data
The network takes as input abdominal CT images cropped to the region of interest: transversely to the rib-cage and abdominal cavity, and longitudinally from the superior extent of the liver or spleen to the inferior extent of the liver or kidneys.
Images should be in Hounsfield units, with voxels outside the CT
field-of-view set to -1000.
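For illustration, the sketch below (not part of NiftyNet) shows one way to apply this preparation with `nibabel` and `numpy`; the filenames and the threshold used to detect voxels outside the field of view are assumptions, so adapt them to your data.

```python
# Hedged sketch: prepare a cropped abdominal CT for inference.
# Filenames are hypothetical; the -1000 threshold for out-of-field-of-view
# voxels is an assumption about how missing data is encoded in your scans.
import nibabel as nib
import numpy as np

img = nib.load('cropped_abdominal_ct.nii.gz')    # already cropped to the ROI
data = img.get_fdata().astype(np.float32)        # intensities in Hounsfield units

# Clamp anything below air (e.g. padding outside the CT field of view) to -1000.
data[data < -1000] = -1000

nib.save(nib.Nifti1Image(data, img.affine, img.header),
         'prepared_abdominal_ct.nii.gz')
```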
### Editing the configuration file
Make a copy of the configuration file `~/niftynet/models/dense_vnet_abdominal_ct_model_zoo/config.ini` to a location of your choice.
You may need to change the `path_to_search` and `filename_contains` lines in the configuration file to point to the correct paths for your images. You can also change the `save_seg_dir` setting to change where the segmentations are saved.
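If you prefer to script the edit rather than use a text editor, a minimal sketch with Python's `configparser` is shown below; the section names (`ct`, `INFERENCE`) and the example paths are assumptions, so check your copy of `config.ini` for the actual section and key names.

```python
# Hedged sketch: update the copied configuration programmatically.
# Section names ('ct', 'INFERENCE') and values are illustrative assumptions;
# inspect your copy of config.ini for the real section and key names.
import configparser

config = configparser.ConfigParser()
config.read('edited_config.ini')                            # your copy of config.ini

config['ct']['path_to_search'] = '/data/my_abdominal_ct'    # directory with your images
config['ct']['filename_contains'] = 'nii'                   # substring matching your files
config['INFERENCE']['save_seg_dir'] = './my_segmentations'  # where outputs are written

with open('edited_config.ini', 'w') as f:
    config.write(f)
```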
### Generating segmentations
Generate segmentations with the command `net_segment inference -c edited_config.ini`, replacing `edited_config.ini` with the path to the new configuration file. Segmentations will be saved in the path specified by the `save_seg_dir` setting with names corresponding to your input file names, with a `_niftynet_out.nii.gz` suffix.
# Freehand Ultrasound Image Simulation with Spatially-Conditioned Generative Adversarial Networks
This page describes how to acquire and use the network described in
Yipeng Hu, Eli Gibson, Li-Lin Lee, Weidi Xie, Dean C. Barratt, Tom Vercauteren, J. Alison Noble
(2017). [Freehand Ultrasound Image Simulation with Spatially-Conditioned Generative Adversarial Networks](https://arxiv.org/abs/1707.05392). In MICCAI RAMBO 2017.
## Downloading model zoo file and conditioning data
The network weights and example data can be downloaded with the command
`net_download ultrasound_simulator_gan_model_zoo ultrasound_simulator_gan_model_zoo_data`. Replace `net_download` with `python net_download.py` if you cloned the NiftyNet repository.
Make sure that the model directory (`~/niftynet/models/` by default) is on the PYTHONPATH.
This network generates ultrasound images conditioned by a coordinate map. Some example coordinate maps are included in the model zoo data. Additional examples are available [here](https://www.dropbox.com/s/w0frdlxaie3mndg/test_data.tar.gz?dl=0).
## Generating simulated images for example data
Generate simulated images for the included example conditioning data with the command `net_gan inference -c ~/niftynet/models/ultrasound_simulator_gan_model_zoo/config.ini`. Replace `net_gan` with `python net_gan.py` if you cloned the NiftyNet repository. Replace `~/niftynet/` if you specified a custom download path in the `net_download` command.
## Generating simulated images for additional conditioning data
### Editing the configuration file
Make a copy of the configuration file `~/niftynet/models/ultrasound_simulator_gan_model_zoo/config.ini` to a location of your choice.
You may need to change the `path_to_search` and `filename_contains` lines in the configuration file to point to the correct paths for your conditioning data. You can also change the `save_seg_dir` setting to change where the simulated images are saved.
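You can edit the file in a text editor or script the change as in the sketch below; the section name `conditioning` and the paths are purely illustrative assumptions, so check your copy of `config.ini` for the actual section and key names.

```python
# Hedged sketch: point the copied configuration at your own conditioning maps.
# The section names ('conditioning', 'INFERENCE') and paths are assumptions;
# check your copy of config.ini for the real names.
import configparser

config = configparser.ConfigParser()
config.read('edited_config.ini')

config['conditioning']['path_to_search'] = '/data/my_coordinate_maps'
config['conditioning']['filename_contains'] = 'nii'
config['INFERENCE']['save_seg_dir'] = './my_simulated_us'

with open('edited_config.ini', 'w') as f:
    config.write(f)
```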
### Generating samples
Generate samples from the simulator with the command `net_gan inference -c edited_config.ini`, replacing `edited_config.ini` with the path to the new configuration file. Sets of simulated ultrasound images interpolated between two samples will be generated in the path specified by the `save_seg_dir` setting, with names of the form `k_id_niftynet_generated.nii.gz`, where `k` is the interpolation index (0-9) and `id` is the frame code from the input conditioning data filename.
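For example, one way to gather the ten interpolated frames for a single conditioning input is sketched below; the output directory and the frame code `frame001` are hypothetical.

```python
# Hedged sketch: collect the interpolated samples for one conditioning input.
# The output directory and the frame code 'frame001' are hypothetical.
import glob
import nibabel as nib

frames = sorted(glob.glob('./my_simulated_us/*_frame001_niftynet_generated.nii.gz'))
volumes = [nib.load(f).get_fdata() for f in frames]  # one array per interpolation step
```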