Running code using Docker
Docker uses containers to create virtual environments that isolate the repository's execution from the rest of the system. Scripts run inside this virtual environment, which can still share resources with the host machine (access directories, use the GPU, connect to the Internet, etc.). In this study, we based our Docker image on the official TensorFlow Docker images.
Docker execution requirements
- Install Docker on your local host machine.
- For GPU support on Linux, install NVIDIA Docker support.
Note: To run the docker command without sudo, create the docker group and add your user. For details, see the post-installation steps for Linux.
To build the Docker image, execute the following command in the project folder:
$ docker build -f ./Dockerfile -t tf .
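Once the image is built (tagged `tf` above), a container can be started with the project folder mounted so that scripts and results are shared with the host. This is a sketch: the mount path `/workspace` is an illustrative choice, not something fixed by the repository.

```shell
# Run the image interactively with GPU access, mounting the current
# project folder into the container (the /workspace path is an assumption)
docker run --gpus all -it --rm -v "$PWD":/workspace -w /workspace tf bash
```

Omit `--gpus all` when running the CPU-only variant of the image.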
If you are not using GPU support, change the first line of the Dockerfile from
FROM tensorflow/tensorflow:latest-gpu-py3
to
FROM tensorflow/tensorflow:latest-py3
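For reference, a minimal Dockerfile along these lines might look as follows. The extra Python packages are illustrative assumptions; the actual Dockerfile in this repository may install different dependencies.

```dockerfile
# GPU build; replace the tag with latest-py3 for CPU-only execution
FROM tensorflow/tensorflow:latest-gpu-py3

# Illustrative extra dependencies (assumption -- adjust to the repository's requirements)
RUN pip install --no-cache-dir numpy scipy

WORKDIR /workspace
```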
Docker is the easiest way to run our TensorFlow scripts on a GPU since the host machine only requires the NVIDIA® driver (the NVIDIA® CUDA® Toolkit is not required).
Install the NVIDIA Container Toolkit to add NVIDIA® GPU support to Docker. nvidia-container-runtime is only available for Linux. See the nvidia-container-runtime platform support FAQ for details.
Check if a GPU is available:
lspci | grep -i nvidia
Verify your nvidia-docker installation:
docker run --gpus all --rm nvidia/cuda nvidia-smi
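To additionally confirm that TensorFlow itself can see the GPU from inside a container, a quick check along these lines can be used. The image tag matches the one in the Dockerfile; note that `tf.test.is_gpu_available()` is the TensorFlow 1.x-style check and is deprecated in newer releases.

```shell
# Start a throwaway TensorFlow container and ask TensorFlow for the GPU
docker run --gpus all -it --rm tensorflow/tensorflow:latest-gpu-py3 \
    python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
```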