How To Use TensorBoard in TensorFlow Projects

Python 3, Anaconda, TensorFlow



  • Open the Windows command prompt by typing “cmd” in the search bar or Start menu and pressing Enter.
  • Type python to check whether Python is installed on your PC. If it is installed, you will get a message similar to the following, indicating the version and other details:
    “Python 3.7.2rc1 (tags/v3.7.2rc1:75a402a217, Dec 11 2018, 23:05:39) [MSC v.1916 64 bit (AMD64)] on win32
    Type “help”, “copyright”, “credits” or “license” for more information.”

If it is not installed, download the latest stable Python version for Windows from the official website. Install it and repeat the step above to check that it was installed successfully.

  • Install TensorFlow by typing
    pip3 install --upgrade tensorflow
    This will take some time. Be patient and wait for the installation to complete.
  • Verify the installation by starting Python and typing
    import tensorflow as tf
    If TensorFlow is correctly installed, the command will return no output, indicating no errors. If you encounter an error, follow the process above and try reinstalling TensorFlow.
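As a further sanity check, you can print the installed TensorFlow version from the Python prompt. A minimal sketch:

```python
import tensorflow as tf

# Print the installed TensorFlow version; if the import succeeded,
# this confirms the package is usable.
print(tf.__version__)
```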


  • Go to the Anaconda website and download the Windows installation package. Click on the downloaded file and follow the prompts to complete the installation process.
  • A window will pop up with a “Welcome to Anaconda” setup message. Click “Next”, then accept the terms of the agreement by clicking “I Agree”.
  • You will be asked to choose the installation type: for all users or for just you. Choose your preferred option and click “Next”.
  • Install it in the default directory or choose another, and click “Next”.
  • A window for “Advanced Options” will pop up; check the second option, “Register Anaconda as my default Python 3”.
  • Click install to start the installation process. Once the process is completed, you will get a message “Installation Complete. Setup was completed successfully”.
  • Click “Next” and then “Finish”.
  • Go to the Windows start menu and type “anaconda prompt”. Click on Anaconda Prompt to open the program.
  • Type “conda info” on the Anaconda Prompt to get information about the installed package.
  • Create a virtual environment to make an isolated location for TensorFlow by typing “conda create -n myenv”. Note that “myenv” is the name of the virtual environment and can be replaced by any name you prefer. Type “y” for “yes” and press the Enter key on your keyboard.
  • Activate the virtual environment by typing “activate myenv” and then pressing the Enter key.
  • Install TensorFlow by typing “conda install tensorflow”. A list of packages will be shown, including tensorflow. Type “y” and then press the Enter key. Wait for the installation to complete.
  • Verify the installation by typing the code below and then pressing the Enter key

import tensorflow as tf
If TensorFlow is correctly installed, the command will return no output, indicating no errors. If you encounter an error, follow the process above and try reinstalling TensorFlow.


TensorBoard can be installed in two ways:
• Install using pip by typing the code below on the command line and then pressing the Enter key:
pip install tensorboard
• Install using conda by typing the code below on the command line and then pressing the Enter key:
conda install tensorboard


In this example, we will use tf.keras to classify images from the MNIST dataset. The MNIST dataset contains images of handwritten digits (0–9) and is available within the Keras library. tf.keras is a high-level API for building and training models in TensorFlow; it makes TensorFlow easier to use without sacrificing flexibility and performance. In Keras, you assemble layers to build models. A model is (usually) a graph of layers. The most common type of model is a stack of layers: the tf.keras.Sequential model. MNIST is commonly used to verify that an algorithm is working correctly.
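As a minimal illustration of a Sequential stack of layers (the layer sizes below are arbitrary and not part of the MNIST example):

```python
import tensorflow as tf

# A Sequential model is a linear stack of layers; sizes here are illustrative.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax'),
])
print(model.output_shape)  # (None, 3)
```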

  1. Install the Keras library by typing the code below (on the command line) and then pressing the Enter key. The ‘conda’ command is available in Anaconda: on Windows, click the Start menu, type ‘anaconda prompt’, and click on it to open it, then run the command below.
    conda install keras (Anaconda) or pip install keras (Windows cmd)
  2. Import the tensorflow library by typing the code below and then pressing the Enter key. If TensorFlow is not installed, install it by following one of the two methods described above.
    import tensorflow as tf
  3. Load the training and test datasets from MNIST by typing the code below and then pressing the Enter key
mnist = tf.keras.datasets.mnist
  4. Create a sequential model by typing the code below and then pressing the Enter key. tf.keras.models.Sequential sets up the model for training; you configure its learning process by calling the compile method.
  5. Train the model using the method by typing the code below and then pressing the Enter key.
The method takes three important arguments:

  • epochs: Training is structured into epochs. An epoch is one iteration over the entire input data (this is done in smaller batches).
  • batch_size: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.
  • validation_data: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument (a tuple of inputs and labels) allows the model to display the loss and metrics in inference mode for the passed data at the end of each epoch.
    When training with Keras’s, adding the tf.keras.callbacks.TensorBoard callback ensures that logs are created and stored. Furthermore, enable histogram computation every epoch with histogram_freq=1 (this is off by default). Place the logs in a timestamped subdirectory to permit easy selection of different training runs.
  6. Enable histogram computation, which is disabled by default, by typing the code below and then pressing the Enter key

Example code to create a simple neural network using the MNIST dataset:

import tensorflow as tf
import datetime

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def create_model():
  return tf.keras.models.Sequential([
      tf.keras.layers.Flatten(input_shape=(28, 28)),
      tf.keras.layers.Dense(512, activation='relu'),
      tf.keras.layers.Dropout(0.2),
      tf.keras.layers.Dense(10, activation='softmax')])

model = create_model()
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

log_dir = "logs/fit/" +"%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1), y=y_train, epochs=5, validation_data=(x_test, y_test), callbacks=[tensorboard_callback])
  7. Start TensorBoard on the local server using the following commands.
    Navigate to the directory containing your log files, type the code below, and press the Enter key.
tensorboard --logdir=/path/to/logs/files

Another way to start TensorBoard from the command line:

tensorboard --logdir logs/fit

In a Jupyter notebook, run the same command with the “%” prefix:

%tensorboard --logdir logs/fit

A brief outline of the tabs in the top navigation bar of TensorBoard:

  • The Scalars dashboard displays the loss and metric changes that occur with each epoch. It can also be used to track training speed, learning rates, and other scalar values.
  • The Graphs dashboard aids in visualizing the model. In this instance, the Keras graph of layers is displayed, which helps to ensure the model is built properly.
  • The Distributions and Histograms dashboards display the distribution of a tensor over time. This is helpful for visualizing the weights and biases and verifying that they are changing in an expected manner.
  • In the Scalars dashboard, a user can visualize changes with every epoch. The graph shows the epoch accuracy (epoch_acc) and epoch loss (epoch_loss), which are the training accuracy and training loss, while epoch_val_acc and epoch_val_loss are the validation accuracy and validation loss. The lighter lines show exact accuracy or loss, while the darker lines show smoothed values.
  • The TensorBoard Graphs section provides graphical representations for analysing the model. To create a graph in TensorFlow 1.x, create a session and a tf.summary.FileWriter object. The writer object needs to be passed the log location and sess.graph as arguments.
writer = tf.summary.FileWriter(STORE_PATH, sess.graph)
  • The tf.placeholder() and tf.Variable() methods are used for placeholders and variables, respectively, in TensorFlow. In the graph, rounded rectangles represent namespaces, ovals display mathematical operations, and constants are represented by small circles.
  • TensorBoard uses dotted ovals or rounded rectangles with dotted lines to reduce clutter in the displayed graphs. These are nodes that are connected to numerous other nodes or to all available nodes, so they are displayed as dotted in the graph, and their details can be viewed in the upper right corner. The upper right corner also provides links to gradients, gradient descent, or init nodes.
  • The edges in the graph show the number of tensors going into and coming out of each node. This aids in recognizing the input and output dimensions of each node, which helps in debugging any future problems with the model.
  • The Distributions and Histograms dashboards display the tensor distributions, including the model weights and biases, showing their progress over time for each epoch. There are two viewing options: offset and overlay. The Distributions page displays the statistical distributions, and the graph shows the means and standard deviations.


Other training methods provided in TensorFlow include tf.GradientTape(), with tf.summary used to log or display the required information.

  1. To use the same dataset as above, first convert it to a to take advantage of TensorFlow’s batching capabilities:

train_dataset =, y_train))
test_dataset =, y_test))
train_dataset = train_dataset.shuffle(60000).batch(64)
test_dataset = test_dataset.batch(64)
  2. To log loss and optimizer metrics to TensorBoard, add the following code:

loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
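To see what SparseCategoricalCrossentropy computes, here is a small sketch with made-up label and probability values:

```python
import math
import tensorflow as tf

# The loss compares an integer label against predicted class probabilities.
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
y_true = [0]                   # true class index
y_pred = [[0.9, 0.05, 0.05]]   # predicted probabilities over 3 classes
loss_value = float(loss_object(y_true, y_pred))
print(loss_value)  # -log(0.9), about 0.105
```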
  3. Create stateful metrics that can be used to accumulate values during training and be logged at any point. Keras metrics are functions used to measure the performance of a deep learning model. In the standard Keras workflow, metrics are passed during the compiling stage, and you can pass several metrics by comma-separating them; here, we define the metric objects directly.
    Define our metrics

train_loss = tf.keras.metrics.Mean('train_loss', dtype=tf.float32)
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('train_accuracy')
test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('test_accuracy')
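These metric objects are stateful: each call to update_state() folds new values into a running aggregate. A small sketch with made-up values:

```python
import tensorflow as tf

# A stateful metric accumulates values across calls; result() reports the aggregate.
m = tf.keras.metrics.Mean('demo_loss', dtype=tf.float32)
m.update_state(2.0)  # e.g. the loss of a first batch
m.update_state(4.0)  # and of a second batch
mean_so_far = float(m.result())
print(mean_so_far)  # 3.0
```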

Define the training and test functions:

def train_step(model, optimizer, x_train, y_train):
  with tf.GradientTape() as tape:
    predictions = model(x_train, training=True)
    loss = loss_object(y_train, predictions)
  grads = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))

  train_loss(loss)
  train_accuracy(y_train, predictions)

def test_step(model, x_test, y_test):
  predictions = model(x_test)
  loss = loss_object(y_test, predictions)

  test_loss(loss)
  test_accuracy(y_test, predictions)

Common functions in tf.keras.metrics include:

  • tf.keras.metrics.Mean: Computes the (weighted) mean of the given values.
  • tf.keras.metrics.AUC: Computes the approximate AUC (area under the curve) via a Riemann sum.
  • tf.keras.metrics.SparseCategoricalAccuracy: Calculates how often predictions match integer labels.
  • tf.keras.metrics.Accuracy: Calculates how often predictions equal labels.
  • tf.keras.metrics.BinaryAccuracy: Calculates how often predictions match binary labels.
  • tf.keras.metrics.BinaryCrossentropy: Computes the cross-entropy metric between the labels and predictions.
  • tf.keras.metrics.CategoricalAccuracy: Calculates how often predictions match one-hot labels.
  • tf.keras.metrics.CategoricalCrossentropy: Computes the cross-entropy metric between the labels and predictions.
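For instance, SparseCategoricalAccuracy compares the argmax of the predictions against integer labels. The numbers below are illustrative:

```python
import tensorflow as tf

# Three samples over four classes; two of the three argmax predictions match.
acc = tf.keras.metrics.SparseCategoricalAccuracy()
y_true = [1, 2, 0]
y_pred = [[0.10, 0.80, 0.05, 0.05],  # argmax 1: correct
          [0.20, 0.20, 0.50, 0.10],  # argmax 2: correct
          [0.10, 0.60, 0.20, 0.10]]  # argmax 1: wrong (label is 0)
acc.update_state(y_true, y_pred)
accuracy = float(acc.result())
print(accuracy)  # 2/3
```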
  4. Set up summary writers to write the summaries to disk, each in a different logs directory:

current_time ="%Y%m%d-%H%M%S")
train_log_dir = 'logs/gradient_tape/' + current_time + '/train'
test_log_dir = 'logs/gradient_tape/' + current_time + '/test'
train_summary_writer = tf.summary.create_file_writer(train_log_dir)
test_summary_writer = tf.summary.create_file_writer(test_log_dir)
  5. Begin training. Use tf.summary.scalar() to log metrics (loss and accuracy) during training/testing, within the scope of the summary writers, so the summaries are written to disk. This method gives the user control over which metrics to log and how often to log them.

model = create_model()  # reset our model

EPOCHS = 5

for epoch in range(EPOCHS):
  for (x_train, y_train) in train_dataset:
    train_step(model, optimizer, x_train, y_train)
  with train_summary_writer.as_default():
    tf.summary.scalar('loss', train_loss.result(), step=epoch)
    tf.summary.scalar('accuracy', train_accuracy.result(), step=epoch)

  for (x_test, y_test) in test_dataset:
    test_step(model, x_test, y_test)
  with test_summary_writer.as_default():
    tf.summary.scalar('loss', test_loss.result(), step=epoch)
    tf.summary.scalar('accuracy', test_accuracy.result(), step=epoch)

  template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
  print(template.format(epoch + 1,
                        train_loss.result(),
                        train_accuracy.result() * 100,
                        test_loss.result(),
                        test_accuracy.result() * 100))

  # Reset metrics every epoch
  train_loss.reset_states()
  test_loss.reset_states()
  train_accuracy.reset_states()
  test_accuracy.reset_states()

Other tf.summary functions enable logging of other types of data.
The tf.summary module offers APIs for writing summary data. 
This data can be visualized in TensorBoard.
An example usage:
writer = tf.summary.create_file_writer("/tmp/mylogs")
with writer.as_default():
  for step in range(100):
    # other model code would go here
    tf.summary.scalar("my_metric", 0.5, step=step)
    writer.flush()

Example usage with tf.function graph execution:
writer = tf.summary.create_file_writer("/tmp/mylogs")

@tf.function
def my_func(step):
  # other model code would go here
  with writer.as_default():
    tf.summary.scalar("my_metric", 0.5, step=step)

for step in tf.range(100, dtype=tf.int64):
  my_func(step)
  writer.flush()

Other tf.summary functions include:

  •…): Write an audio summary.
  • tf.summary.create_file_writer(…): Creates a summary file writer for the given log directory.
  • tf.summary.create_noop_writer(…): Returns a summary writer that does nothing.
  • tf.summary.flush(…): Forces summary writer to send any buffered data to storage.
  • tf.summary.histogram(…): Write a histogram summary.
  • tf.summary.image(…): Write an image summary.
  • tf.summary.record_if(…): Sets summary recording on or off per the provided boolean value.
  • tf.summary.scalar(…): Write a scalar summary.
  • tf.summary.should_record_summaries(…): Returns boolean Tensor which is true if summaries should be recorded.
  • tf.summary.text(…): Write a text summary.
  • tf.summary.trace_export(…): Stops and exports the active trace as a Summary and/or profile file.
  • tf.summary.trace_off(…): Stops the current trace and discards any collected information.
  • tf.summary.trace_on(…): Starts a trace to record computation graphs and profiling information.
  • tf.summary.write(…): Writes a generic summary to the default SummaryWriter if one exists.
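As an example of one of these functions, tf.summary.histogram can log a tensor’s distribution at each step; the log directory below is a throwaway temporary directory:

```python
import os
import tempfile

import numpy as np
import tensorflow as tf

# Write a histogram summary for a few steps, then confirm an event file exists.
logdir = tempfile.mkdtemp()
writer = tf.summary.create_file_writer(logdir)
with writer.as_default():
    for step in range(3):
        tf.summary.histogram("weights", np.random.randn(100), step=step)
writer.flush()
event_files = [f for f in os.listdir(logdir) if "tfevents" in f]
print(len(event_files))
```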
  6. Open TensorBoard again to monitor training while it progresses.
    %tensorboard --logdir logs/gradient_tape


TensorBoard helps learners visualize model training by writing summaries such as scalars, histograms, or images, which helps improve model accuracy and makes debugging easier.

TensorBoard aids learners in understanding the background processes taking place in the TensorFlow architecture with the help of graphs and histograms.
TensorBoard also provides a hosting platform where learners can host their machine learning projects for free. is a free public service that allows learners to upload their TensorBoard logs and get a permalink that they can use to share their experiments in academic papers, blog posts, social media, etc. This enables better reproducibility and collaboration.

To use, run the following command:

!tensorboard dev upload \
  --logdir logs/fit \
  --name "(optional) My latest experiment" \
  --description "(optional) Simple comparison of several hyperparameters"
Note: This invocation uses the exclamation prefix (!) to invoke the shell rather than the percent prefix (%) to invoke the Colab magic. When invoking this command from the command line, there is no need for either prefix.
