Officially, ROS Kinetic runs with Python 2.7. In this tutorial, you will set up a Docker container that runs ROS Kinetic with Python 3. And because that is not enough, it will also support TensorFlow with GPU.

Using tmux, you will be able to train your deep learning models in the background of your containers without having to stay attached to them. This means you can spawn multiple containers, each running with a different set of parameters, without having to worry about the host's underlying OS and CUDA environment setup.

Prerequisites

A Linux server with a GPU, with Docker and nvidia-docker installed on it.

Note: I created a shell script that sets up the whole environment of this post, albeit with TensorFlow CPU instead of GPU. Have a look at the next blog post here.

TL;DR: pull & build

A Dockerfile, docker-compose.yml, entrypoint.sh and a dummy ROS node can be pulled from my Bitbucket repo https://bitbucket.org/reinka/blog/src/master/ via

git clone https://reinka@bitbucket.org/reinka/blog.git

The directory docker_ros_tf contains all files of this tutorial.

$ cd blog/docker_ros_tf
# make scripts executable
blog/docker_ros_tf $ chmod +x entrypoint.sh scripts/talker.py
# build images and container
blog/docker_ros_tf $ docker-compose up --build -d
# enter the container
blog/docker_ros_tf $ docker exec -it docker_ros_tf_ros_1 bash

You should now be inside the container. Next, we build the dummy ROS package that was created by entrypoint.sh:

root@<some_hex_number>:/notebooks $ ll
total 420
drwxr-xr-x 1 root root   4096 Mar  3 17:32 ./
drwxr-xr-x 1 root root   4096 Mar  3 22:35 ../
-rw-rw-r-- 1 root root  25033 Nov  5 19:38 1_hello_tensorflow.ipynb
-rw-rw-r-- 1 root root 164559 Nov  5 19:38 2_getting_started.ipynb
-rw-rw-r-- 1 root root 209951 Nov  5 19:38 3_mnist_from_scratch.ipynb
-rw-rw-r-- 1 root root    119 Nov  5 19:38 BUILD
-rw-rw-r-- 1 root root    586 Nov  5 19:38 LICENSE
drwxr-xr-x 1 root root   4096 Mar  3 17:32 workspace/

# cd into the workspace and build the dummy ROS node
root@<some_hex_number>:/notebooks $ cd workspace && catkin_make

Finally, we start roscore and the talker.py node, each inside its own tmux session:

root@<some_hex_number>:/notebooks/workspace $ tmux new-session -s roscore

Inside the newly opened terminal, start roscore:

root@<some_hex_num>:/notebooks/workspace# roscore
... logging to /root/.ros/log/5a60d514-3e05-11e9-943b-0242ac150002/roslaunch-d43f1a78b72b-363.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://d43f1a78b72b:41279/
ros_comm version 1.12.14


SUMMARY
========

PARAMETERS
 * /rosdistro: kinetic
 * /rosversion: 1.12.14

NODES

auto-starting new master
process[master]: started with pid [373]
ROS_MASTER_URI=http://d43f1a78b72b:11311/

setting /run_id to 5a60d514-3e05-11e9-943b-0242ac150002
process[rosout-1]: started with pid [386]
started core service [/rosout]

Press Ctrl-B followed by D to detach from the tmux session. You can reattach to it later with tmux attach -t roscore.

Now we do the same thing to start the ROS node:

root@<some_hex_num>:/notebooks/workspace $ tmux new-session -s talker

# source devel/setup.bash first
root@<some_hex_num>:/notebooks/workspace $ source devel/setup.bash
# start the node
root@<some_hex_num>:/notebooks/workspace $ rosrun tutorial talker.py
2019-03-03 22:50:19.469130: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-03-03 22:50:19.668132: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce GTX 1060 Ti 6GB major: 6 minor: ... # some more output following
[INFO] [1551653420.010619]: hello world [-0.00689075]
[INFO] [1551653420.026317]: hello world [0.5836483]
[INFO] [1551653420.124730]: hello world [-0.99160147]
[INFO] [1551653420.224980]: hello world [0.89415735]
[INFO] [1551653420.325174]: hello world [1.2941247]
[INFO] [1551653420.425319]: hello world [-1.0054873]
[INFO] [1551653420.527178]: hello world [0.0699743]

That's it. You now have a Docker container running ROS with Python 3 and TensorFlow GPU.

Explanation

The Dockerfile contains all commands for:

  1. setting up TensorFlow GPU,
  2. creating a workspace,
  3. installing ROS Kinetic and some Python packages.

The first point is handled by making our image inherit from tensorflow/tensorflow:latest-gpu-py3.
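As a rough sketch, the top of such a Dockerfile could look like this (the workspace path matches the /notebooks/workspace directory seen earlier; the exact package selection is illustrative, not necessarily the one from the repo):

```dockerfile
# Point 1: inherit TensorFlow GPU with Python 3.
FROM tensorflow/tensorflow:latest-gpu-py3

# Point 2: create a catkin workspace.
RUN mkdir -p /notebooks/workspace/src

# Point 3: install ROS Kinetic on top (sketch only; the real
# Dockerfile also adds the ROS apt repository and its key first).
RUN apt-get update && apt-get install -y ros-kinetic-ros-base
```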

The docker-compose.yml file is responsible for bind mounting entrypoint.sh and the scripts directory into the container. Bind mounting allows us to change those files on the host machine while having all changes reflected in the container. An alternative would be to declare COPY commands in the Dockerfile; however, any changes to these files would then have to happen inside the Docker container and would be isolated from the original files on the host machine. I use COPY for static files that do not need to change, and bind mount those files that are under development.
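In docker-compose.yml, such bind mounts might look roughly like this (the container-side target paths are assumptions for illustration):

```yaml
services:
  ros:
    volumes:
      # changes on the host are reflected live inside the container
      - ./entrypoint.sh:/entrypoint.sh
      - ./scripts:/notebooks/workspace/src/tutorial/scripts
```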

To avoid having to set up NVIDIA CUDA manually, we use the NVIDIA Container Runtime. This is specified inside the docker-compose.yml.

For the runtime: nvidia key to work inside the YAML file, we need it to be version: '2.3'. tty: true makes sure the container does not get shut down immediately after the docker-compose up command.
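Put together, the relevant keys form a compose file sketch like the following (service name and build context are illustrative; version, runtime and tty are the parts the text refers to):

```yaml
version: '2.3'        # minimum file version that supports the runtime key
services:
  ros:
    build: .
    runtime: nvidia   # delegates the CUDA setup to the NVIDIA Container Runtime
    tty: true         # keeps the container alive after `docker-compose up`
```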

The entrypoint.sh gets executed whenever the container is started. It checks whether the dummy ROS package tutorial already exists and, if not, creates it. That means the package is only created on the first start-up.
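The first-start-up check can be sketched like this (the workspace path and the plain mkdir stand in for the real ones in entrypoint.sh, which works in /notebooks/workspace and calls the catkin tooling):

```shell
#!/bin/bash
# Sketch of the entrypoint logic: create the dummy package only once.
# /tmp/demo_ws stands in for the real workspace, /notebooks/workspace.
WS=/tmp/demo_ws
PKG="$WS/src/tutorial"

mkdir -p "$WS/src"
if [ ! -d "$PKG" ]; then
    echo "creating package tutorial"
    mkdir -p "$PKG"    # the real script would call catkin_create_pkg here
else
    echo "package tutorial already exists"
fi
```

Running it a second time takes the else branch, so an existing package is never touched.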

Tagged with:
TensorFlow Docker ROS