Improving inference efficiency

Opened Mar 27, 2018 by Wenqi Li (@wenqili), Maintainer

Some of the saved checkpoint files could be further optimised to improve inference speed and reduce the model footprint (via freeze_graph and strip_unused_nodes). This could be offered as a post-processing step for trained models in NiftyNet.

https://stackoverflow.com/questions/45075299/tensorflow-how-to-reduce-memory-footprint-for-inference-only-models

https://www.tensorflow.org/extend/tool_developers/#freezing

https://www.tensorflow.org/mobile/optimizing
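
For reference, a minimal sketch of what such a post-processing step could look like, assuming TensorFlow 1.x (NiftyNet's backend at the time). The checkpoint prefix and the input/output node names (`images`, `net_out`) are hypothetical placeholders; the real names depend on the network definition.

```python
# Sketch only: freeze a trained checkpoint and strip training-only nodes.
# Assumes TF 1.x; checkpoint path and node names below are placeholders.
import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph

CHECKPOINT = 'models/model.ckpt-10000'   # hypothetical checkpoint prefix
INPUT_NODES = ['images']                 # hypothetical input node name
OUTPUT_NODES = ['net_out']               # hypothetical output node name

with tf.Session() as sess:
    # Rebuild the graph from the .meta file and restore the trained weights.
    saver = tf.train.import_meta_graph(CHECKPOINT + '.meta')
    saver.restore(sess, CHECKPOINT)

    # Freeze: replace Variable nodes with Const nodes holding the weights,
    # keeping only what is needed to compute the output nodes.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, OUTPUT_NODES)

# Graph Transform Tool passes; strip_unused_nodes removes ops the outputs
# do not depend on (e.g. optimiser and other training-only subgraphs).
optimised = TransformGraph(
    frozen, INPUT_NODES, OUTPUT_NODES,
    ['strip_unused_nodes', 'fold_constants(ignore_errors=true)'])

with tf.gfile.GFile('models/frozen_inference.pb', 'wb') as f:
    f.write(optimised.SerializeToString())
```

The Graph Transform Tool linked above also provides further passes (e.g. fold_batch_norms, quantize_weights) that could shrink the footprint more, at some cost in generality.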

Reference: CMIC/NiftyNet#253