Connecting PyCharm to a TensorFlow Docker Container

This guide walks you through setting up PyCharm Professional and Docker so that you can develop TensorFlow applications in PyCharm while the code executes inside an isolated container. After completing the following steps, you will be able to write Python code in PyCharm and have it run in a container without any extra hassle.

Prerequisites: Docker and PyCharm Professional (the Community Edition is not sufficient) must be installed.

Start Docker and download the latest TensorFlow Docker image with:

docker pull tensorflow/tensorflow
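
To verify the image before configuring PyCharm, you can optionally run a quick import check in a throwaway container (the exact version printed depends on the image you pulled):

docker run --rm tensorflow/tensorflow python -c "import tensorflow as tf; print(tf.__version__)"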


Open PyCharm and create a new “Pure Python” project. Leave the interpreter as it is, i.e. “New environment using Virtualenv”.


Add a “main.py” file containing a small TensorFlow test script, for instance the one from the TF install guide:

import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
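
This snippet uses the TensorFlow 1.x API. If the image you pulled ships TensorFlow 2.x, tf.Session no longer exists; an equivalent minimal check, assuming a 2.x image, would be:

import tensorflow as tf
# TensorFlow 2.x executes eagerly by default, so no Session is needed.
hello = tf.constant('Hello, TensorFlow!')
print(hello.numpy())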


PyCharm will complain, saying “No module named tensorflow” (unless TensorFlow is set up on the host machine).


Open the PyCharm preferences and navigate to “Project: <project-name>” > “Project Interpreter”. Click on “Add…”


Select “Docker” and go with the image “tensorflow/tensorflow”.

The “No module named tensorflow” error should now be gone. PyCharm starts a Docker container in the background and uses its Python interpreter for code inspection.


Add a new Python run/debug configuration with the Python interpreter set to “Project Default (Remote Python 2.7.12 Docker (tensorflow/tensorflow:latest))”. Set “Script path” and “Working directory” as needed.


Hit the “Run” button and the console will output “Hello, TensorFlow!”. Debugging works as well.


Comments

    1. Hi Nicholas,

      Consuming GPUs in containers is currently only supported on Linux, because it relies on nvidia-docker.

      Maybe I can help you out here with a couple of steps:

      0) Install the following:
      – GNU/Linux x86_64 with kernel version > 3.10
      – Docker >= 1.12
      – NVIDIA GPU with Architecture > Fermi (2.1)
      – NVIDIA drivers ~= 361.93 (untested on older versions)

      From: https://github.com/NVIDIA/nvidia-docker/wiki/Installation-(version-2.0)#prerequisites

      1) Make sure that you have installed nvidia-docker; it is installed as a separate step after installing Docker itself. Since nvidia-docker integrates via hooks, make sure that the Docker version and the nvidia-docker runtime version match. (https://github.com/NVIDIA/nvidia-docker/wiki/Frequently-Asked-Questions#which-docker-packages-are-supported)

      You can test it with the following command:

      nvidia-docker version

      2) Make sure the current user is in the ‘docker’ group; otherwise you’ll need sudo for every Docker command, which can be avoided.
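
      For example, on a typical Linux setup you can add the current user to the group with:

      sudo usermod -aG docker $USER

      (Log out and back in afterwards for the group change to take effect.)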

      3) Check that the nvidia runtime is registered in /etc/docker/daemon.json (restart the Docker daemon after editing this file):

      {
          "runtimes": {
              "nvidia": {
                  "path": "/usr/bin/nvidia-container-runtime",
                  "runtimeArgs": []
              }
          }
      }

      From https://github.com/nvidia/nvidia-container-runtime#docker-engine-setup

      4) You should now be able to access the GPU from the container, but keep in mind that TensorFlow still needs the compiled CUDA libraries. Your error is thrown because it cannot find them.
      Your Dockerfile should include at least these lines:

      FROM tensorflow/tensorflow:latest-gpu-py3
      
      # add CUDA path (use ENV instead of RUN export so the variable persists in the image)
      ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64/

      This might solve the error.

      5) Afterwards, start the container as follows, appending at the end the name of the image you built:

      docker run -d -p 6006:6006 --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0
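
      Once the container is running, you can verify from inside it that TensorFlow actually sees the GPU; assuming a TensorFlow 1.x GPU image such as the one above, a minimal check is:

      import tensorflow as tf
      # Prints True if TensorFlow can access a CUDA-enabled GPU inside the container.
      print(tf.test.is_gpu_available())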

      I hope that helps!
