CMIC / NiftyNet · Issues · #263

Closed
Opened Apr 20, 2018 by Wenqi Li (@wenqili), Maintainer

multi-gpu inference

Currently, model inference only runs on a single GPU.

Inference using multiple GPUs could be supported by splitting the list of images in dataset_to_infer into subsets and distributing each subset to a different GPU.
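The subset-splitting approach could be sketched roughly as below. This is only an illustration, not NiftyNet code: `run_inference_on_subset` is a hypothetical placeholder for building the inference application over one subset, and pinning each worker to a GPU via `CUDA_VISIBLE_DEVICES` is one possible isolation strategy.

```python
# Sketch: distribute subsets of dataset_to_infer across GPUs.
# All names here are illustrative; NiftyNet's actual inference
# entry points are not used.
import multiprocessing
import os


def partition(items, n_parts):
    """Split a list into n_parts near-equal subsets (round-robin)."""
    return [items[i::n_parts] for i in range(n_parts)]


def run_inference_on_subset(args):
    """Worker: pin one GPU, then infer over its subset of images."""
    gpu_id, subset = args
    # Must be set before any TensorFlow/CUDA initialisation in this process.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    # Placeholder for the real per-subset inference loop.
    return [(gpu_id, image) for image in subset]


def multi_gpu_infer(dataset_to_infer, num_gpus):
    """Fan subsets out to one worker process per GPU and gather results."""
    subsets = partition(dataset_to_infer, num_gpus)
    with multiprocessing.Pool(processes=num_gpus) as pool:
        per_gpu_results = pool.map(run_inference_on_subset, enumerate(subsets))
    # Flatten the per-GPU result lists.
    return [r for worker_results in per_gpu_results for r in worker_results]
```

One process per GPU keeps the TensorFlow sessions fully independent, so no per-window coordination is needed; the cost is that load balancing is only as good as the partition of the image list.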

(In image-window-based inference, the per-window inference tasks for each image could also be distributed across multiple GPUs, but that would be harder to implement and maintain.)

Reference: CMIC/NiftyNet#263