Setting up a Jupyter-Notebook Image for Deep-Learning

Hi folks,

Update 8.12.2018: I have updated the image a bit; it now contains CUDA 9.2 and TensorFlow 1.12. The rest is almost the same…

Today I would like to show you how to set up a Jupyter Notebook server with GPU support, using Docker and docker-compose.

The image we will be using contains TensorFlow 1.5 and PyTorch with GPU support, as well as Keras.

There is also a nice-to-have TensorBoard visualizer inside, which comes from this GitHub repo –> see here

The Docker image can also be downloaded from Docker Hub –> if you want, feel free to use this image: digitalanatomist/deeplearning-tensorboard:latest

–> new version digitalanatomist/dlm_ubuntu-16.04_cuda-9.2

Some additional packages are also inside.

If you don’t want to use the pre-built image, feel free to exclude or add anything you (don’t) need in the Dockerfile and build it yourself.

The image also makes use of the latest NVIDIA CUDA driver (9.2) and cuDNN (7).

I will provide all the files needed to set things up as well as the Dockerfile for the image. –> see here

Prerequisites:

nvidia-docker with GPU support up and running –> previous post
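Before going further, it is worth making sure the NVIDIA runtime actually works. A quick sanity check (assuming nvidia-docker version 2; the CUDA base image tag is just an example, pick one that matches your driver):

```shell
# Should print the same GPU table as nvidia-smi on the host.
# If this fails, fix your nvidia-docker setup first (see the previous post).
docker run --runtime=nvidia --rm nvidia/cuda:9.2-base nvidia-smi
```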

Ok now here we go:

First define all dependencies in the docker-compose.yml:

version: '3'

services:
  # tensorflow
  deeplearning:
    image: digitalanatomist/deeplearning-tensorboard
    #devices:            # --> remember: this is only needed for nvidia-docker version 1
    #  - /dev/nvidiactl
    #  - /dev/nvidia-uvm
    #  - /dev/nvidia0    # in general: /dev/nvidia# where # is the GPU card you want to use
    deploy:
      placement:
        constraints: [node.role == manager]
    ports:
      - "5000:5000"
      - "8888:8888"
      - "6006:6006"

    logging:
      driver: "json-file"

    networks:
      - network
    volumes:
      - "./notebooks:/notebooks" # --> my working directory; host-to-container folder mapping, use whatever you want
      #- "nvidia_driver_387.34:/usr/local/nvidia:ro" # --> only needed for nvidia-docker version 1
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility

## Driver volume for the CUDA version --> remember: this is only needed for nvidia-docker version 1
#volumes:
#  nvidia_driver_387.34:
#    external: true

networks:
  network:
    driver: overlay
    ipam:
      driver: default
      config:
        - subnet: 120.0.0.0/24

As you can see (if you looked at the previous post), this .yml file shows both ways to define everything: with nvidia-docker version 1 or version 2.
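One gotcha with the nvidia-docker version 2 variant: the compose v3 format has no `runtime` key, so for the NVIDIA_VISIBLE_DEVICES environment variables to have any effect, the nvidia runtime has to be the Docker daemon's default. A sketch of the relevant /etc/docker/daemon.json (the `path` assumes nvidia-docker2 installed its `nvidia-container-runtime` binary on your PATH):

```json
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```

Restart the Docker daemon after changing this file.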

You could stop here and run:

docker-compose up
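If you prefer to run it detached, you can grab the Jupyter token from the logs afterwards. Note that the deploy: section in the file above is only honoured by `docker stack deploy` in swarm mode; plain `docker-compose up` ignores it, which is fine for a single machine. A sketch (the service name matches the compose file above):

```shell
# Start the stack in the background
docker-compose up -d

# Follow the logs; Jupyter prints a URL containing the login token on startup
docker-compose logs -f deeplearning
```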

If you would like to build the image yourself, use the Dockerfile available at GitHub.

If you use the following command in the directory of the Dockerfile:
docker build -t repo/name:tag .

The image will be built and named accordingly (if no tag is provided, it will be tagged latest by default).
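For example, to build your own version and push it to Docker Hub (the tag 1.0 here is just an illustration; use your own repository name):

```shell
# Build from the directory containing the Dockerfile
docker build -t digitalanatomist/deeplearning-tensorboard:1.0 .

# Push the tagged image to Docker Hub (requires docker login)
docker push digitalanatomist/deeplearning-tensorboard:1.0
```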

That’s all for now. If you wish to see how to use such an image with JupyterHub as a Docker service, have a look at this article.

All the code is available here.
