
Running code using Docker

Walter Hugo Lopez Pinaya edited this page Jan 3, 2020 · 3 revisions

Docker uses containers to create virtual environments that isolate the repository's execution from the rest of the system. Scripts run inside this virtual environment, which can still share resources with the host machine (access directories, use the GPU, connect to the Internet, etc.). In this study, we based our Docker image on the official TensorFlow Docker images.

Docker execution requirements

  1. Install Docker on your local host machine.
  2. For GPU support on Linux, install NVIDIA Docker support.

Note: To run the docker command without sudo, create the docker group and add your user to it. For details, see Docker's post-installation steps for Linux.
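Before changing anything, it can help to check whether the current user is already set up. The sketch below only inspects group membership and prints the commands from the official post-installation steps when they are still needed (it does not modify the system itself):

```shell
# Check whether the current user is already in the `docker` group
# (and can therefore run docker without sudo); if not, print the
# commands from Docker's official post-installation steps.
if id -nG 2>/dev/null | tr ' ' '\n' | grep -qx docker; then
  echo "already in the docker group"
else
  echo "run: sudo groupadd docker && sudo usermod -aG docker \$USER"
  echo "then log out and back in (or run: newgrp docker)"
fi
```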

Build the Docker image

To build the Docker image, execute the following command in the project folder:

$ docker build -f ./Dockerfile -t tf .
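After the build finishes, you can confirm that an image tagged `tf` now exists. The check below is guarded so it degrades gracefully on a machine where Docker is not installed:

```shell
# List images tagged `tf` to confirm the build succeeded
# (guarded: prints a hint instead of failing if docker is absent).
if command -v docker >/dev/null 2>&1; then
  docker image ls tf
else
  echo "docker is not installed on this machine"
fi
```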

If you are not using GPU support, change the first line of the Dockerfile from

FROM tensorflow/tensorflow:latest-gpu-py3

to

FROM tensorflow/tensorflow:latest-py3
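Instead of editing the file by hand, the swap can be scripted with GNU sed. The sketch below operates on a throwaway copy (`Dockerfile.demo` is a stand-in name, not a file in this repository); in the real repository you would run only the sed line against `./Dockerfile`:

```shell
# Create a stand-in file so the sketch is self-contained; in the
# repository, run the sed command directly on ./Dockerfile instead.
echo 'FROM tensorflow/tensorflow:latest-gpu-py3' > Dockerfile.demo
# Swap the GPU base image for the CPU-only variant (GNU sed syntax).
sed -i 's/latest-gpu-py3/latest-py3/' Dockerfile.demo
cat Dockerfile.demo
```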

GPU support

Docker is the easiest way to run our TensorFlow scripts on a GPU since the host machine only requires the NVIDIA® driver (the NVIDIA® CUDA® Toolkit is not required).

Install the NVIDIA Container Toolkit to add NVIDIA® GPU support to Docker. nvidia-container-runtime is only available for Linux; see the nvidia-container-runtime platform support FAQ for details.

Check if a GPU is available:

$ lspci | grep -i nvidia
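An alternative host-side check is nvidia-smi, which ships with the NVIDIA driver and lists the GPUs it can see. The sketch is guarded so it degrades cleanly on a machine without the driver:

```shell
# If the NVIDIA driver is installed, nvidia-smi lists visible GPUs;
# otherwise fall back to an informative message instead of an error.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --list-gpus
else
  echo "no NVIDIA driver found on this host"
fi
```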

Verify your nvidia-docker installation:

$ docker run --gpus all --rm nvidia/cuda nvidia-smi
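With the image built and GPU support verified, a repository script can be run inside the container. This is a hypothetical end-to-end example: `train.py` is a placeholder name, not necessarily a file in this repository, and the command is guarded so it only runs once the `tf` image actually exists:

```shell
# Hypothetical example: run a script (placeholder name `train.py`)
# inside the `tf` image with GPU access and the project directory
# mounted at /project. Guarded: skips if docker or the image is absent.
if command -v docker >/dev/null 2>&1 && docker image inspect tf >/dev/null 2>&1; then
  docker run --gpus all --rm -v "$(pwd)":/project -w /project \
      tf python3 train.py
else
  echo "build the tf image first with: docker build -f ./Dockerfile -t tf ."
fi
```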