Commit b3b6c369 by Wenqi Li

Merge branch '69-push-to-pip-repository-after-commit-to-master-off-dev-restructure-hierarchy' into 'dev'

Resolve "Publish NiftyNet v0.1 on Python Package Index (PyPI)"

Closes #69 and #125

See merge request !42
parents 488b9f29 5a8e91a3
Pipeline #7865 passed with stages in 23 minutes 6 seconds
@@ -37,28 +37,28 @@ testjob:
# run python code with coverage wrapper
- coverage erase
- coverage run -a --source . net_segmentation.py train -c config/highres3dnet_config.ini --batch_size 1 --image_size 32 --label_size 32 --queue_length 5 --num_threads 2
- coverage run -a --source . net_segmentation.py inference -c config/highres3dnet_config.ini --batch_size 8 --image_size 64 --label_size 64 --queue_length 32
- coverage run -a --source . net_segment.py train -c config/highres3dnet_config.ini --batch_size 1 --image_size 32 --label_size 32 --queue_length 5 --num_threads 2
- coverage run -a --source . net_segment.py inference -c config/highres3dnet_config.ini --batch_size 8 --image_size 64 --label_size 64 --queue_length 32
- coverage run -a --source . net_segmentation.py train -c config/scalenet_config.ini --batch_size 1 --image_size 32 --label_size 32 --queue_length 5 --num_threads 2
- coverage run -a --source . net_segmentation.py inference -c config/scalenet_config.ini --batch_size 16 --image_size 64 --label_size 64 --queue_length 32
- coverage run -a --source . net_segment.py train -c config/scalenet_config.ini --batch_size 1 --image_size 32 --label_size 32 --queue_length 5 --num_threads 2
- coverage run -a --source . net_segment.py inference -c config/scalenet_config.ini --batch_size 16 --image_size 64 --label_size 64 --queue_length 32
- coverage run -a --source . net_segmentation.py train -c config/vnet_config.ini --batch_size 1 --image_size 32 --label_size 32 --queue_length 5 --num_threads 2 --activation_function relu
- coverage run -a --source . net_segmentation.py inference -c config/vnet_config.ini --batch_size 16 --image_size 64 --label_size 64 --queue_length 32 --activation_function relu
- coverage run -a --source . net_segment.py train -c config/vnet_config.ini --batch_size 1 --image_size 32 --label_size 32 --queue_length 5 --num_threads 2 --activation_function relu
- coverage run -a --source . net_segment.py inference -c config/vnet_config.ini --batch_size 16 --image_size 64 --label_size 64 --queue_length 32 --activation_function relu
# need a large GPU to run
#- coverage run -a --source . net_segmentation.py train -c config/unet_config.ini --batch_size 1 --image_size 96 --label_size 96 --queue_length 5 --num_threads 2
#- coverage run -a --source . net_segmentation.py inference -c config/unet_config.ini --batch_size 1 --image_size 96 --label_size 96 --queue_length 5
#- coverage run -a --source . net_segment.py train -c config/unet_config.ini --batch_size 1 --image_size 96 --label_size 96 --queue_length 5 --num_threads 2
#- coverage run -a --source . net_segment.py inference -c config/unet_config.ini --batch_size 1 --image_size 96 --label_size 96 --queue_length 5
#- coverage run -a --source . net_segmentation.py train -c config/deepmedic_config.ini --batch_size 8 --queue_length 16 --num_threads 2
#- coverage run -a --source . net_segmentation.py inference -c config/deepmedic_config.ini --batch_size 64 --queue_length 96
#- coverage run -a --source . net_segment.py train -c config/deepmedic_config.ini --batch_size 8 --queue_length 16 --num_threads 2
#- coverage run -a --source . net_segment.py inference -c config/deepmedic_config.ini --batch_size 64 --queue_length 96
- coverage run -a --source . net_segmentation.py train -c config/default_config.ini --image_size 42 --label_size 42 --batch_size 3 --queue_length 6
- coverage run -a --source . net_segmentation.py train -c config/default_config.ini --image_size 42 --label_size 42 --batch_size 3 --queue_length 6 --starting_iter 10 --max_iter 15
- coverage run -a --source . net_segmentation.py inference -c config/default_config.ini --image_size 84 --label_size 84 --batch_size 7 --queue_length 14
- coverage run -a --source . net_segment.py train -c config/default_config.ini --image_size 42 --label_size 42 --batch_size 3 --queue_length 6
- coverage run -a --source . net_segment.py train -c config/default_config.ini --image_size 42 --label_size 42 --batch_size 3 --queue_length 6 --starting_iter 10 --max_iter 15
- coverage run -a --source . net_segment.py inference -c config/default_config.ini --image_size 84 --label_size 84 --batch_size 7 --queue_length 14
- coverage run -a --source . net_segmentation.py train -c config/default_multimodal_config.ini --image_size 42 --label_size 42 --batch_size 3
- coverage run -a --source . net_segmentation.py inference -c config/default_multimodal_config.ini --image_size 84 --label_size 84 --batch_size 7
- coverage run -a --source . net_segment.py train -c config/default_multimodal_config.ini --image_size 42 --label_size 42 --batch_size 3
- coverage run -a --source . net_segment.py inference -c config/default_multimodal_config.ini --image_size 84 --label_size 84 --batch_size 7
- coverage run -a --source . -m tests.mean_variance_normalisation_test
- coverage run -a --source . -m tests.binary_masking_test
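The `-a` flag makes every `coverage run` above append to the same `.coverage` data file, so the training, inference, and unit-test commands all contribute to one combined report. As a side note — a sketch only, assuming the `coverage` Python API is available in the CI environment — the accumulated data can also be inspected from Python:

```python
# Sketch, assuming the 'coverage' Python API is installed alongside the CLI above:
# '-a' appends to one .coverage data file, so all runs feed a single report.
import coverage

cov = coverage.Coverage(data_file='.coverage', source=['.'])
cov.load()    # read the data accumulated by the 'coverage run -a' calls
cov.report()  # print the combined line-coverage summary
```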
@@ -103,9 +103,9 @@ testjob:
pip-installer:
stage: pip_test
only:
- 69-push-to-pip-repository-after-commit-to-master-off-dev
- 69-push-to-pip-repository-after-commit-to-master-off-dev-restructure-hierarchy
- 131-pip-bundle-does-not-work-with-cpu-only-tensorflow-drop-tensorflow-auto-install
- master
- dev
- dev-staging
- tags
script:
# source utils
@@ -154,8 +154,8 @@ pip-installer:
- python $package_importer
# test niftynet command
- ln -s /home/gitlab-runner/environments/niftynet/data/example_volumes ./example_volumes
- net_segmentation train -c $niftynet_dir/config/default_config.ini --net_name toynet --image_size 42 --label_size 42 --batch_size 1 --save_every_n 10
- net_segmentation inference -c $niftynet_dir/config/default_config.ini --net_name toynet --image_size 80 --label_size 80 --batch_size 8
- net_segment train -c $niftynet_dir/config/default_config.ini --net_name toynet --image_size 42 --label_size 42 --batch_size 1 --save_every_n 10
- net_segment inference -c $niftynet_dir/config/default_config.ini --net_name toynet --image_size 80 --label_size 80 --batch_size 8
# deactivate virtual environment
- deactivate
- cd $niftynet_dir
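The `$package_importer` script referenced above is not included in this diff. Purely as a hypothetical stand-in (not the actual script), an installed-package smoke test in this spirit could be as small as importing the package and a couple of the submodules mentioned elsewhere in this merge request:

```python
# Hypothetical stand-in for $package_importer -- the real script is not shown in this diff.
# It only checks that the pip-installed package and a few submodules import cleanly.
import niftynet
import niftynet.layer.base_net
import niftynet.network

print('niftynet imported from:', niftynet.__file__)
```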
@@ -188,8 +188,8 @@ pip-installer:
- python $package_importer
# test niftynet command
- ln -s /home/gitlab-runner/environments/niftynet/data/example_volumes ./example_volumes
- net_segmentation train -c $niftynet_dir/config/default_config.ini --net_name toynet --image_size 42 --label_size 42 --batch_size 1 --save_every_n 10
- net_segmentation inference -c $niftynet_dir/config/default_config.ini --net_name toynet --image_size 80 --label_size 80 --batch_size 8
- net_segment train -c $niftynet_dir/config/default_config.ini --net_name toynet --image_size 42 --label_size 42 --batch_size 1 --save_every_n 10
- net_segment inference -c $niftynet_dir/config/default_config.ini --net_name toynet --image_size 80 --label_size 80 --batch_size 8
# deactivate virtual environment
- deactivate
- cd $niftynet_dir
@@ -52,9 +52,9 @@
"metadata": {},
"source": [
"## Training a network from the command line\n",
"The simplest way to use NiftyNet is via the commandline net_segmentation.py script. Normally, this is done on the command line with a command like this from the NiftyNet root directory:\n",
"The simplest way to use NiftyNet is via the commandline net_segment.py script. Normally, this is done on the command line with a command like this from the NiftyNet root directory:\n",
"\n",
"```python net_segmentation.py train --conf demo/PROMISE12/promise12_demo_train_config.ini --image_size 32 --label_size 32 --max_iter 10```\n",
"```python net_segment.py train --conf demo/PROMISE12/promise12_demo_train_config.ini --image_size 32 --label_size 32 --max_iter 10```\n",
"\n",
"Notice that we use configuration file that is specific to this experiment. This file contains default settings. Also note that we can override these settings on the command line.\n",
"\n",
@@ -78,7 +78,7 @@
"source": [
"Now you have trained (a few iterations of) a deep learning network for medical image segmentation. If you have some time on your hands, you can finish training the network (by leaving off the max_iter argument) and try it out, by running the following command\n",
"\n",
"```python net_segmentation.py inference --conf demo/PROMISE12/promise12_demo_inference_config.ini --image_size 32 --label_size 32```\n",
"```python net_segment.py inference --conf demo/PROMISE12/promise12_demo_inference_config.ini --image_size 32 --label_size 32```\n",
"\n",
"or the following python code in the Notebook"
]
@@ -102,7 +102,7 @@
"source": [
"Otherwise, you can load up some pre-trained weights for the network:\n",
"\n",
"```python net_segmentation.py inference --conf demo/PROMISE12/promise12_demo_config.ini --model_dir demo/PROMISE12/pretrained```\n",
"```python net_segment.py inference --conf demo/PROMISE12/promise12_demo_config.ini --model_dir demo/PROMISE12/pretrained```\n",
"or the following python code in the Notebook"
]
},
@@ -318,7 +318,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"You can use helper functions to parse commandline parameters and automatically match patient data (which is what net_segmentation.py did)."
"You can use helper functions to parse commandline parameters and automatically match patient data (which is what net_segment.py did)."
]
},
{
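The notebook cells above that say "or the following python code in the Notebook" refer to code cells not shown in this diff. As a rough, hypothetical sketch only (not the notebook's real cell), the same training run could be driven from Python by reusing `niftynet.main()` — the function behind the `net_segment` console script, per the `entry_points` hunk at the end of this diff — assuming it reads its arguments from `sys.argv`:

```python
# Sketch only -- not the notebook's hidden code cell. Assumes niftynet.main()
# (the target of the net_segment console script) parses sys.argv itself.
import sys
import niftynet

sys.argv = [
    'net_segment', 'train',
    '--conf', 'demo/PROMISE12/promise12_demo_train_config.ini',
    '--image_size', '32', '--label_size', '32', '--max_iter', '10',
]
niftynet.main()
```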
@@ -28,7 +28,7 @@ To train a "toynet" specified in `network/toynet.py`:
cd NiftyNet/
wget -N https://www.dropbox.com/s/y7mdh4m9ptkibax/example_volumes.tar.gz
tar -xzvf example_volumes.tar.gz
net_segmentation train --net_name toynet \
net_segment train --net_name toynet \
--image_size 42 --label_size 42 --batch_size 1
```
(GPU computing is enabled by default; to train with CPU only please use `--num_gpus 0`)
@@ -36,7 +36,7 @@ net_segmentation train --net_name toynet \
After the training process, to do segmentation with a trained "toynet":
``` sh
cd NiftyNet/
net_segmentation inference --net_name toynet \
net_segment inference --net_name toynet \
--save_seg_dir ./seg_output \
--image_size 80 --label_size 80 --batch_size 8
```
@@ -51,11 +51,11 @@ Alternatively, to run with a customised config file:
``` sh
cd NiftyNet/
# training
net_segmentation train -c /path/to/customised_config
net_segment train -c /path/to/customised_config
# inference
net_segmentation inference -c /path/to/customised_config
net_segment inference -c /path/to/customised_config
```
where `/path/to/customised_config` sets all the parameters listed by running:
```sh
net_segmentation -h
net_segment -h
```
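As a hypothetical illustration of such a customised config — the `[settings]` section name below is a guess; check `config/default_config.ini` or `net_segment -h` for the real layout — the file only needs to set the parameters that would otherwise be passed on the command line:

```python
# Hypothetical sketch: write a minimal customised config using only parameters
# that appear elsewhere in this merge request. The '[settings]' section name is
# a guess -- see config/default_config.ini for the real section layout.
import configparser

config = configparser.ConfigParser()
config['settings'] = {
    'net_name': 'toynet',
    'image_size': '42',
    'label_size': '42',
    'batch_size': '1',
    'queue_length': '6',
    'num_threads': '2',
}
with open('customised_config.ini', 'w') as ini_file:
    config.write(ini_file)
```

The resulting file would then be passed as `net_segment train -c customised_config.ini` (and similarly for `inference`).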
@@ -2,7 +2,7 @@
# please make sure you installed all dependencies of NiftyNet.
# cd NiftyNet/; pip install -r requirements-gpu.txt
NIFTYNET=../../net_segmentation.py
NIFTYNET=../../net_segment.py
DIR="$( cd "$(dirname "$0")" ; pwd -P )"
cd "$DIR"
@@ -56,4 +56,4 @@ image_size = 57, label_size = 9, d_factor = 3
1. Create a `niftynet/network/new_net.py` inheriting `BaseNet` from `niftynet.layer.base_net`
1. Implement `layer_op()` function using the building blocks in `niftynet/layer/` or creating new layers
1. Import `niftynet.network.new_net` to the `NetFactory` class in `niftynet/__init__.py`
1. Train the network with `python net_segmentation.py train -c /path/to/customised_config`
1. Train the network with `python net_segment.py train -c /path/to/customised_config`
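To make the four steps above concrete, a minimal sketch of `niftynet/network/new_net.py` might look like the following. The constructor and `layer_op()` signatures are illustrative guesses rather than the exact `BaseNet` interface — compare against an existing network such as `niftynet/network/toynet.py` before copying:

```python
# Sketch of niftynet/network/new_net.py -- the signatures below are illustrative
# guesses, not the exact BaseNet interface; mirror niftynet/network/toynet.py.
from niftynet.layer.base_net import BaseNet


class NewNet(BaseNet):
    """Sketch of a new network; constructor arguments are illustrative guesses."""

    def __init__(self, num_classes, name='NewNet'):
        # The real BaseNet constructor may expect different or additional
        # arguments; follow an existing network for the exact call.
        super(NewNet, self).__init__(num_classes=num_classes, name=name)

    def layer_op(self, images, is_training=True):
        # Assemble the network from the building blocks in niftynet/layer/
        # (or newly written layers) and return the segmentation output.
        flow = images
        # ... stack layers here ...
        return flow
```

Once `NewNet` is registered in the `NetFactory` class in `niftynet/__init__.py`, it would presumably be selected with `--net_name`, like `toynet` in the commands above.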
@@ -66,7 +66,7 @@ Acknowledgements
This project is grateful for the support from the `Wellcome Trust`_, the `Engineering and Physical Sciences Research Council (EPSRC)`_, the `National Institute for Health Research (NIHR)`_, the `Department of Health (DoH)`_, `University College London (UCL)`_, the `Science and Engineering South Consortium (SES)`_, the `STFC Rutherford-Appleton Laboratory`_, and `NVIDIA`_.
.. _`TensorFlow`: https://www.tensorflow.org/
.. _`Wellcome EPSRC Centre for Interventional and Surgical Sciences`: http://www.ucl.ac.uk/surgical-interventional-sciences
.. _`Wellcome EPSRC Centre for Interventional and Surgical Sciences`: http://www.ucl.ac.uk/weiss
.. _`NiftyNet source code repository`: https://cmiclab.cs.ucl.ac.uk/CMIC/NiftyNet
.. _`Centre for Medical Image Computing`: http://cmic.cs.ucl.ac.uk/
.. _`Centre for Medical Image Computing (CMIC)`: http://cmic.cs.ucl.ac.uk/
@@ -2,7 +2,7 @@
import warnings
import time
warnings.simplefilter('always', DeprecationWarning)
warnings.warn('run_application.py is deprecated and will be removed; please use net_segmentation.py instead.', DeprecationWarning, stacklevel=2)
warnings.warn('run_application.py is deprecated and will be removed; please use net_segment.py instead.', DeprecationWarning, stacklevel=2)
warnings.simplefilter('ignore', DeprecationWarning)
time.sleep(3)
@@ -26,7 +26,7 @@ info_module.write('\n')
info_module.write('"""\n')
info_module.write('\n')
info_module.write('\n')
info_module.write('version = "{}"\n'.format(version_buf))
info_module.write('VERSION_DESCRIPTOR = "{}"\n'.format(version_buf))
info_module.close()
# Regex for checking PEP 440 conformity
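On the PEP 440 conformity check mentioned in the comment above: the versioning script uses its own regex (not shown in this diff), but the same test can be sketched with the `packaging` library — shown purely as an illustration, not as what the script actually does:

```python
# Illustration only: an alternative to a hand-written PEP 440 regex,
# using the 'packaging' library to validate the generated version string.
from packaging.version import Version, InvalidVersion


def is_pep440(version_buf):
    """Return True if version_buf is a PEP 440 compliant version string."""
    try:
        Version(version_buf)
        return True
    except InvalidVersion:
        return False


print(is_pep440('0.1'))             # True
print(is_pep440('b3b6c369-dirty'))  # False
```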
@@ -121,7 +121,7 @@ setup(
entry_points={
'console_scripts': [
'net_segmentation=niftynet:main',
'net_segment=niftynet:main',
],
},
)
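For context on the `console_scripts` rename above: pip generates a small `net_segment` executable that simply imports the package and calls `main()`. Roughly — the exact wrapper boilerplate varies across setuptools/pip versions:

```python
#!/usr/bin/env python
# Approximation of the wrapper generated for 'net_segment=niftynet:main';
# the real generated script differs slightly between setuptools/pip versions.
import sys

from niftynet import main

if __name__ == '__main__':
    sys.exit(main())
```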