Connecting to TensorBoard of your Google Cloud VM via port-forwarding
Sept. 7, 2018
In the last post we learned how to write a bash script that creates a new gcloud Deep Learning VM based on Google Cloud's pre-configured Deep Learning images. As a last step, we connected to the VM's JupyterLab.
In this article, we will learn how to run TensorBoard on our VM and open it in our local PC's browser. We will do this again via port-forwarding.
Connect to your gcloud instance and start TensorBoard
Continuing from the last post, we created an SSH tunnel that forwards local port 8080 to port 8080 on the VM, like so:
$ gcloud compute ssh tf-n1-highmem-2-k80-count-1 --zone=us-central1-c -- -L 8080:localhost:8080
Enter passphrase for key '/Users/apoehlmann/.ssh/google_compute_engine':
Warning: untrusted X11 forwarding setup failed: xauth key data not generated
Welcome to the Google Deep Learning VM
Based on: Debian GNU/Linux 9.5 (stretch) (GNU/Linux 4.9.0-8-amd64 x86_64)
* Google Deep Learning Platform StackOverflow: https://stackoverflow.com/questions/tagged/google-dl-platform
* Google Cloud Documentation: https://cloud.google.com/deep-learning-vm
* Google Group: https://groups.google.com/forum/#!forum/google-dl-platform
TensorFlow comes pre-installed with this image. To install TensorFlow binaries in a virtualenv (or conda env),
please use the binaries that are pre-built for this image. You can find the binaries at
Note that public TensorFlow binaries may not work with this image.
Linux tf-n1-highmem-2-k80-count-1 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u4 (2018-08-21) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
We can now open JupyterLab in our browser (http://localhost:8080/), create a new notebook and create a dummy graph for TensorBoard:
import tensorflow as tf

a = tf.constant(2, name='a')
b = tf.constant(3, name='b')
x = tf.add(a, b, name='add')

# if you prefer creating your writer using the default graph:
# writer = tf.summary.FileWriter('./graphs', tf.get_default_graph())

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    # creating your writer with the session graph
    writer = tf.summary.FileWriter('./graphs', sess.graph)
    print(sess.run(x))  # run the graph so the device placement gets logged
writer.close()  # flush the event file to disk
Here we place the TensorBoard log folder, graphs, inside the directory JupyterLab is running from: /opt/deeplearning/workspace/tutorials
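tf.summary.FileWriter serializes the graph into an event file named events.out.tfevents.&lt;timestamp&gt;.&lt;hostname&gt; inside graphs, and those files are what TensorBoard scans for. A quick, TensorFlow-independent way to confirm the writer actually produced output (a small sketch; the helper name find_event_files is mine):

```python
import glob
import os

def find_event_files(logdir):
    """Return all TensorBoard event files under logdir, searched recursively."""
    pattern = os.path.join(logdir, "**", "events.out.tfevents.*")
    return sorted(glob.glob(pattern, recursive=True))
```

Calling find_event_files('./graphs') in the notebook after running the cell above should list the freshly written event file.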
After running the code / notebook cell, we can now start TensorBoard via:
apoehlmann@tf-n1-highmem-2-k80-count-1:~$ cd /opt/deeplearning/workspace/tutorials/
apoehlmann@tf-n1-highmem-2-k80-count-1:/opt/deeplearning/workspace/tutorials$ tensorboard --logdir="./graphs/" --port 6006
/usr/lib/python3/dist-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
TensorBoard 1.10.0 at http://tf-n1-highmem-2-k80-count-1:6006 (Press CTRL+C to quit)
Note: Do not forget to cd to the JupyterLab directory first. Otherwise, you have to specify the absolute path instead of the relative path "./graphs/".
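Equivalently, you can start TensorBoard from any directory by passing the absolute path (assuming the same logdir as above):

```shell
# Absolute logdir, so no cd is needed first
tensorboard --logdir="/opt/deeplearning/workspace/tutorials/graphs/" --port 6006
```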
Port-forwarding TensorBoard to your local PC
So now we have TensorBoard running on the VM's port 6006. In order to open TensorBoard within our local PC's browser, we will create another SSH tunnel and forward the remote port to our local PC's port 6006. For this, open another Terminal and type
$ gcloud compute ssh tf-n1-highmem-2-k80-count-1 --zone=us-central1-c -- -NfL 6006:localhost:6006
Enter passphrase for key '/Users/administrator/.ssh/google_compute_engine':
The actual SSH command needs to be separated from the gcloud flags by "--". This time we pass -NfL: -N tells SSH not to execute a remote command, -f sends the SSH process to the background, and -L sets up the port forwarding as before. So unlike the previous command, where we only specified the -L flag, we will not be dropped into the VM's shell but instead stay within our local PC's terminal. TensorBoard is now available in your local browser at http://localhost:6006.
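Because the -Nf tunnel prints nothing on success, a small check that the forwarded port is actually accepting connections can save some head-scratching (a sketch; the function name port_open is mine, and 6006 is simply the port forwarded above):

```python
import socket

def port_open(host="localhost", port=6006, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds,
    i.e. the SSH tunnel (or any other listener) is up there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A True result means the tunnel is up and TensorBoard should be reachable from the local browser.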