1.      INTRODUCTION TO TENSORFLOW ARCHITECTURE

TensorFlow is an open-source machine learning framework used to train and deploy ML models for deep learning tasks such as image recognition and text classification.

tf.keras is a high-level API to build and train models in TensorFlow. tf.keras makes TensorFlow easier to use without sacrificing flexibility and performance. In Keras, layers are assembled to build models. A model is a graph of layers. The most common type of model is a stack of layers: the tf.keras.Sequential model.

TensorFlow can be installed on Windows in one of the two ways described below:

I. Installation via pip.

a) Open the Windows command prompt by typing “cmd” in the search bar or start menu and pressing Enter.

b) Type python to check whether Python is installed on your PC. If it is installed, you will get a message similar to the following, indicating the version and other parameters:

Python 3.7.2rc1 (tags/v3.7.2rc1:75a402a217, Dec 11 2018, 23:05:39) [MSC v.1916 64 bit (AMD64)] on win32

Type "help", "copyright", "credits" or "license" for more information.

If it is not installed, download the latest stable Python version for Windows from the python.org website. Install it and repeat the step above to check that it is installed successfully.

c) Install TensorFlow by typing:

pip3 install --upgrade tensorflow

This will take some time; wait for the installation to complete.

d) Verify the installation by starting Python and typing:

import tensorflow as tf

If TensorFlow is correctly installed, the command will return no output, indicating no errors. If you encounter an error, follow the process above and try reinstalling TensorFlow.
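To confirm which release was installed, you can also print the version string from the Python prompt:

import tensorflow as tf
print(tf.__version__)  # prints the installed TensorFlow version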

II. Installation via Anaconda.

a) Go to Anaconda.com and download the Windows installation package. Click the downloaded file and follow the prompts to complete the installation process.

b) A window will pop up with a “Welcome to Anaconda” setup message. Click “Next”. To accept the terms of the agreement, click “I Agree”.

c) You will be asked to choose whether to install for all users or just for you. Choose your preferred option and click “Next”.

d) Install it in the default directory or choose another, then click “Next”.

e) An “Advanced Options” window will appear; check the second option, “Register Anaconda as my default Python 3”.

f) Click “Install” to start the installation process. Once the process is complete, you will get the message “Installation Complete. Setup was completed successfully”.

g) Click “Next” and then “Finish”.

h) Go to the Windows start menu and type “anaconda prompt”. Click on Anaconda Prompt to open the program.

i) Type “conda info” in the Anaconda Prompt to get information about the installed package.

j) Create a virtual environment to give TensorFlow an isolated location by typing “conda create -n my_env”. Note that “my_env” is the name of the virtual environment and can be replaced by any name you prefer. Type “y” for “yes” and press the Enter key on your keyboard.

k) Activate the virtual environment by typing “activate my_env”.

l) Install TensorFlow by typing “conda install tensorflow”. A list of packages will be shown including tensorflow. Type “y” and then press the enter key. Wait for the installation to complete.

m) Verify the installation by starting Python and typing:

import tensorflow as tf

If TensorFlow is correctly installed, the command will return no output, indicating no errors. If you encounter an error, follow the process above and try reinstalling TensorFlow.

2. BUILDING A SIMPLE NEURAL NETWORK WITH TENSORFLOW.

The following code illustrates how to build a simple, fully-connected network (i.e. multi-layer perceptron) using the tf.keras.Sequential model.

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential()

Add a densely-connected layer with 64 units to the model:

model.add(layers.Dense(64, activation='relu'))

Add another densely-connected layer with 64 units to the model

model.add(layers.Dense(64, activation='relu'))

Add a softmax layer with 10 output units:

model.add(layers.Dense(10, activation='softmax'))

  • To set up training after the model is constructed, configure its learning process by calling the compile method:
model.compile(optimizer=tf.keras.optimizers.Adam(0.01),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
  • The full script to build a simple, fully-connected network is:
model = tf.keras.Sequential([

# Add a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),

# Add another densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu'),

# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])

model.compile(optimizer=tf.keras.optimizers.Adam(0.01),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
  • The tf.keras.Model.compile takes three important arguments:
  1. optimizer: This object specifies the training procedure. Pass it optimizer instances from the tf.keras.optimizers module, such as tf.keras.optimizers.Adam or tf.keras.optimizers.SGD. If you just want to use the default parameters, you can also specify optimizers via strings, such as 'adam' or 'sgd'.
  2. loss: The function to minimize during optimization. Common choices include mean square error (mse), categorical_crossentropy, and binary_crossentropy. Loss functions are specified by name or by passing a callable object from the tf.keras.losses module.
  3. metrics: Used to monitor training. These are string names or callables from the tf.keras.metrics module.
  • Additionally, to make the model train and evaluate eagerly, you can pass run_eagerly=True as a parameter to compile.
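For instance, a similar configuration can be written with string shortcuts for the optimizer and loss (which then use their default parameters) and with eager training enabled. This is a minimal sketch reusing the model built above:

# String names for the optimizer and loss, plus eager execution of training steps
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'],
              run_eagerly=True)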
  • For small datasets, use in-memory NumPy arrays to train and evaluate a model. The model is "fit" to the training data using the fit method:
import numpy as np

# Create a random array for data
data = np.random.random((1000, 32))

# Create a random array for labels used to train the model
labels = np.random.random((1000, 10))

model.fit(data, labels, epochs=10, batch_size=32)
  • tf.keras.Model.fit takes three important arguments:
  1. epochs: Training is structured into epochs. An epoch is one iteration over the entire input data (this is done in smaller batches).
  2. batch_size: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.
  3. validation_data: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument—a tuple of inputs and labels—allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch.

Here's an example using validation_data:

import numpy as np
data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))
val_data = np.random.random((100, 32))
val_labels = np.random.random((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
          validation_data=(val_data, val_labels))
  • Train from tf.data Datasets. Use the Datasets API to scale to large datasets or multi-device training. Pass a tf.data.Dataset instance to the fit method:
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
model.fit(dataset, epochs=10)
  • Datasets can also be used for validation:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32)
model.fit(dataset, epochs=10,
          validation_data=val_dataset)
  • The tf.keras.Model.evaluate and tf.keras.Model.predict methods can use NumPy data and a tf.data.Dataset.
  • Using NumPy arrays:
data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))
model.evaluate(data, labels, batch_size=32)
  • Using a Dataset
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
model.evaluate(dataset)
  • And here's how to predict the output of the last layer in inference mode for the data provided, as a NumPy array:
result = model.predict(data, batch_size=32)
print(result.shape)

3. IMAGE CLASSIFICATION

· A convolutional neural network (CNN) is a neural network in which at least one layer is a convolutional layer. A typical convolutional neural network consists of some combination of the following layers: convolutional layers, pooling layers, and dense layers (Source: developers.google.com).

· Convolutional neural networks (CNN) are used mainly in image recognition (Source: developers.google.com).

· A convolutional layer is a layer of a deep neural network in which a convolutional filter passes along an input matrix (Source: developers.google.com).

· A convolutional filter is a matrix having the same rank as the input matrix, but a smaller shape. For example, given a 28x28 input matrix, the filter could be any 2D matrix smaller than 28x28 (Source: developers.google.com).

· tf.layers.conv2d is a 2D convolution layer (e.g. spatial convolution over images). This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If useBias is true, a bias vector is created and added to the outputs. If activation is not null, it is applied to the outputs as well. When using this layer as the first layer in a model, provide the keyword argument inputShape (an array of integers, not including the sample axis), e.g. inputShape=[128, 128, 3] for 128x128 RGB pictures in dataFormat='channelsLast'. (Source: js.tensorflow.org)
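The description above refers to the TensorFlow.js layer; in the Python Keras API the equivalent layer is tf.keras.layers.Conv2D. A minimal sketch for 128x128 RGB images (the filter count of 32 and the 3x3 kernel size are example values):

from tensorflow.keras import layers

# First layer of a model: 32 convolutional filters of size 3x3 applied to
# 128x128 RGB images (channels-last format).
conv = layers.Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3))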

· For example, consider the following:

Each convolutional operation involves a single 2x2 slice of the input matrix. For instance, suppose we use the 2x2 slice at the top-left of the input matrix. The values of that top-left slice of the 5x5 input matrix are multiplied by the corresponding values in the 2x2 convolutional filter, and the products are summed:

(1 x 1) = row 1, column 1 of the input slice multiplied by row 1, column 1 of the filter

(10 x 0) = row 1, column 2 of the input slice multiplied by row 1, column 2 of the filter

(2 x 0) = row 2, column 1 of the input slice multiplied by row 2, column 1 of the filter

(3 x 1) = row 2, column 2 of the input slice multiplied by row 2, column 2 of the filter

The output value for this slice is therefore 1 + 0 + 0 + 3 = 4.
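The same slice-wise operation can be written as a short NumPy sketch. The slice values (1, 10, 2, 3) come from the worked example above; the 2x2 filter values [[1, 0], [0, 1]] are implied by the products listed:

import numpy as np

# Top-left 2x2 slice of the 5x5 input matrix and the 2x2 convolutional filter.
top_left_slice = np.array([[1, 10],
                           [2,  3]])
conv_filter = np.array([[1, 0],
                        [0, 1]])

# A convolutional operation on one slice: element-wise multiply, then sum.
output_value = np.sum(top_left_slice * conv_filter)
print(output_value)  # (1*1) + (10*0) + (2*0) + (3*1) = 4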

3.1 MNIST CLASSIFIER

· The MNIST dataset contains images of handwritten digits (0, 1, 2, etc.). This tutorial uses Fashion MNIST for variety, and because it is a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They are good starting points to test and debug code. You can access Fashion MNIST directly from TensorFlow (Source: Tensorflow.org).

· The Fashion MNIST dataset contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels). Fashion MNIST is intended as a drop-in replacement for the classic MNIST dataset—often used as the "Hello, World" of machine learning programs for computer vision (Source: Tensorflow.org).

3.2 Training a neural network model to classify images of clothing

In this example, we will use tf.keras to classify images from the Fashion MNIST dataset (Source: tensorflow.org).

1. First, import TensorFlow and tf.keras, plus the additional libraries NumPy and Matplotlib:

import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)

2. Import the Fashion MNIST dataset

fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

Loading the Fashion MNIST dataset returns four NumPy arrays:

The train_images and train_labels arrays are used to train the model to understand the different features/labels of the various categories of clothing.

The test_images and test_labels arrays are used to test the model after it has been trained on the different features/labels of the various categories of clothing.

The images in Fashion MNIST are 28x28 NumPy arrays, with pixel values ranging from 0 to 255. The labels are an array of integers, ranging from 0 to 9. These correspond to the class of clothing the image represents, in the order given by the class_names list defined in the next step.

3. Define the class names. They are not included in the dataset, so store them here to use later when plotting the images:

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

4. Explore the data by running the code below, which returns (60000, 28, 28). The result shows that the training set has 60,000 images, each represented as 28 x 28 pixels.

train_images.shape

5. Explore the number of labels by running the code below, which returns 60000. The result shows that the training set has 60,000 labels.

len(train_labels)

6. Each label is an integer between 0 and 9. Run the following code to see this:

train_labels

7. Explore the test images by running the code below, which returns (10000, 28, 28). The result shows that the test set has 10,000 images, each represented as 28 x 28 pixels.

test_images.shape

8. Explore the test labels by running the code below, which returns 10000. It shows that the test set has 10,000 labels.

len(test_labels)

9. Preprocess the data. First, inspect the distribution of the pixel values by running the code below:

plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()

If you examine the first image in the training set, you will see that the pixel values fall in the range 0 to 255.

The pixel values have to be scaled to the range 0 to 1 before being used in model training. To scale the images, divide the values by 255.0:

train_images = train_images / 255.0
test_images = test_images / 255.0

10. Validate that the data is in the right format. To do this run the code below to display the first 25 images from the training set and display the class name under each image.

plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[train_labels[i]])
plt.show()
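Note that the training steps below assume the model has already been built and compiled; the model definition is not reproduced in this section. A minimal sketch in line with the standard Fashion MNIST tutorial (the 128-unit hidden layer and the Adam optimizer are assumptions) is:

# Assumed model: flatten the 28x28 images, one hidden layer, 10 output logits.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10)
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

The output layer returns logits rather than probabilities, which is why a softmax layer is attached later in step 13.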

11. Train the model. To begin training the model, run the model.fit method which "fits" the model to the training data:

model.fit(train_images, train_labels, epochs=10)

As the model is trained, the loss and accuracy metrics will be displayed. In this example the model reaches an accuracy of about 0.91 (or 91%) on the training data.

12. Evaluate the accuracy by testing how the model performs on the test dataset:

test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)

The results show that the model has an accuracy of about 88%, which is lower than the 91% on the training data. This gap between training and test accuracy is an indication of ‘overfitting’. Overfitting is discussed in more detail in the additional information section of the tutorial.

13. Use the model to make predictions. Attach a softmax layer to convert the logits to probabilities, which are easier to interpret, by running the code below:

probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)

14.  Take a look at the first prediction by running the code below:

predictions[0]

The prediction is an array of 10 numbers representing the model’s confidence that the image corresponds to each of the 10 categories (labels) of clothing.

15. You can see which label has the highest confidence value by running the code below, which returns 9, indicating that the predicted class is an ankle boot:

np.argmax(predictions[0])

16. Examine the test label to confirm that the prediction is correct by running the code below, which also returns 9, indicating that the image is an ankle boot:

test_labels[0]

17. Define helper functions to graph the full set of 10 class predictions by running the code below:

def plot_image(i, predictions_array, true_label, img):
    true_label, img = true_label[i], img[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    plt.imshow(img, cmap=plt.cm.binary)
    predicted_label = np.argmax(predictions_array)
    if predicted_label == true_label:
        color = 'blue'
    else:
        color = 'red'
    plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
            100*np.max(predictions_array),
            class_names[true_label]),
            color=color)
            
def plot_value_array(i, predictions_array, true_label):
    true_label = true_label[i]
    plt.grid(False)
    plt.xticks(range(10))
    plt.yticks([])
    thisplot = plt.bar(range(10), predictions_array, color="#777777")
    plt.ylim([0, 1])
    predicted_label = np.argmax(predictions_array)
    thisplot[predicted_label].set_color('red')
    thisplot[true_label].set_color('blue')

18. Verify the prediction for the 0th image by plotting the image and its prediction array with the code below:

i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i],  test_labels)
plt.show()

Correct prediction labels are blue and incorrect prediction labels are red.

19. Verify the prediction for the 12th image by plotting the image and its prediction array with the code below:

i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i],  test_labels)
plt.show()

20.  Plot several images with their predictions by running the code below:

# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2*num_cols, 2*i+1)
    plot_image(i, predictions[i], test_labels, test_images)
    plt.subplot(num_rows, 2*num_cols, 2*i+2)
    plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()

21.  Finally, use the trained model to make a prediction about a single image by running the code below:

# Grab an image from the test dataset.
img = test_images[1]
print(img.shape)

22. Add the image to a batch where it is the only member, because tf.keras models are optimized to make predictions on a batch of examples at once. Run the code below, which returns (1, 28, 28), indicating a batch of one image of 28 by 28 pixels.

img = (np.expand_dims(img,0))
print(img.shape)

23. Predict the label for this image by running the code below, which returns an array of 10 numbers:

predictions_single = probability_model.predict(img)
print(predictions_single)

24. Run the code below to plot the prediction array for this image; the predicted class is ‘Pullover’:

plot_value_array(1, predictions_single[0], test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)

25. Run the code below to get the prediction for the single image in the batch; it returns 2, indicating that the image is a ‘Pullover’.

np.argmax(predictions_single[0])

3.3 Additional information

Datasets in TensorFlow

Consuming data efficiently is vital when training deep learning models. This section is a guide to the TensorFlow Dataset framework, which lets you build highly efficient input data pipelines.

The TensorFlow Dataset framework has two main components:

i. The Dataset

ii. An associated Iterator

The Dataset is essentially where the data resides. This data can be loaded in from a number of sources – existing tensors, NumPy arrays and NumPy files, the TFRecord format, and directly from text files. Once you’ve loaded the data into the Dataset object, you can string together various operations to apply to the data; these include operations such as:

•    batch() – this allows you to consume the data from your TensorFlow Dataset in batches

•    map() – this allows you to transform the data using lambda statements applied to each element

•    zip() – this allows you to zip together different Dataset objects into a new Dataset, in a similar way to the Python zip function

•    filter() – this allows you to remove problematic data-points in your data-set, again based on some lambda function

•    repeat() – this operation repeats the dataset a given number of times (or indefinitely when called with no argument), controlling how much data can be consumed from the Dataset before a tf.errors.OutOfRangeError is thrown

•    shuffle() – this operation randomly shuffles the data in the Dataset (a short sketch combining several of these operations follows this list)
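For instance, a minimal sketch chaining several of these operations (the transformations and buffer size are example values):

import numpy as np
import tensorflow as tf

x = np.arange(0, 10)

# Chain Dataset transformations: double each element with map(), keep values
# below 10 with filter(), shuffle the order, and consume the data in batches.
ds = tf.data.Dataset.from_tensor_slices(x)
ds = ds.map(lambda v: v * 2)
ds = ds.filter(lambda v: v < 10)
ds = ds.shuffle(buffer_size=10)
ds = ds.batch(3)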

Example Usage

1. Create a dataset from a NumPy range:

import numpy as np
import tensorflow as tf

x = np.arange(0, 10)

Note: numpy.arange() returns a NumPy array with evenly spaced elements over the given interval.

2.   Create a TensorFlow Dataset object straight from a NumPy array using the from_tensor_slices() method:

dx = tf.data.Dataset.from_tensor_slices(x)

Note: The object dx is now a TensorFlow Dataset object.

3.   Create an iterator that will extract data from this dataset.

The iterator is created using the method make_one_shot_iterator().  The iterator arising from this method can only be initialized and run once – it can’t be re-initialized.

To create a one-shot iterator run the code below

iterator = dx.make_one_shot_iterator()

To extract the next element run the code below

next_element = iterator.get_next()

4.   Run the operation in a session, as you would from training code, to extract elements from the dataset:

with tf.Session() as sess:
    for i in range(11):
        val = sess.run(next_element)
        print(val)

The result will be “0 1 2 3 4 5 6 7 8 9”, followed by a tf.errors.OutOfRangeError on the 11th call.

Note: This is because the code has extracted all the data slices from the dataset and is now out of range or “empty”.

5.   To make the dataset re-initializable (so the data can be extracted repeatedly), change the make_one_shot_iterator() line to make_initializable_iterator():

iterator = dx.make_initializable_iterator()
next_element = iterator.get_next()

with tf.Session() as sess:
    sess.run(iterator.initializer)
    for i in range(15):
        val = sess.run(next_element)
        print(val)
        if i % 9 == 0 and i > 0:
            sess.run(iterator.initializer)

The result will be “0 1 2 3 4 5 6 7 8 9 0 1 2 3 4”

Note the last two lines: the if statement ensures that when the iterator has run out of data (i.e. when i == 9), it is re-initialized by the iterator.initializer operation.

6.   Using batch()

dx = tf.data.Dataset.from_tensor_slices(x).batch(3)
iterator = dx.make_initializable_iterator()
next_element = iterator.get_next()

with tf.Session() as sess:
    sess.run(iterator.initializer)
    for i in range(15):
        val = sess.run(next_element)
        print(val)
        if (i + 1) % (10 // 3) == 0 and i > 0:
            sess.run(iterator.initializer)

The result will be “[0 1 2] [3 4 5] [6 7 8] [0 1 2] [3 4 5] [6 7 8]”

7.   Using zip()

This method can be used when pairing up input-output training/validation pairs of data (i.e. input images and matching labels for each image)

def simple_zip_example():
    x = np.arange(0, 10)
    y = np.arange(1, 11)
    
    #create dataset objects from the arrays
    dx = tf.data.Dataset.from_tensor_slices(x)
    dy = tf.data.Dataset.from_tensor_slices(y)
    
    # zip the two datasets together
    new_dataset = tf.data.Dataset.zip((dx, dy)).batch(3)
    iterator = new_dataset.make_initializable_iterator()
    
    # extract an element
    next_element = iterator.get_next()
    with tf.Session() as sess:
        sess.run(iterator.initializer)
        for i in range(15):
            val = sess.run(next_element)
            print(val)
            if (i + 1) % (10 // 3) == 0 and i > 0:
                sess.run(iterator.initializer)

Output:

(array([0, 1, 2]), array([1, 2, 3]))
(array([3, 4, 5]), array([4, 5, 6]))
(array([6, 7, 8]), array([7, 8, 9]))
(array([0, 1, 2]), array([1, 2, 3]))

8.   The re-initialization if statement on the last two lines is a bit cumbersome. Get rid of it by replacing the new_dataset creation line with the following:

new_dataset = tf.data.Dataset.zip((dx, dy)).repeat().batch(3)

Note: When the repeat() method is applied to the dataset with no argument, it means that the dataset can be repeated indefinitely without throwing an OutOfRangeError.
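With that change, the manual re-initialization is no longer needed; a minimal sketch of the simplified loop (reusing the dx and dy datasets from the zip() example) looks like this:

new_dataset = tf.data.Dataset.zip((dx, dy)).repeat().batch(3)
iterator = new_dataset.make_initializable_iterator()
next_element = iterator.get_next()

with tf.Session() as sess:
    sess.run(iterator.initializer)
    for i in range(15):
        # No OutOfRangeError: repeat() cycles through the data indefinitely.
        val = sess.run(next_element)
        print(val)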

Overfitting and strategies to avoid it

Overfitting is characterized by a model’s accuracy on new (validation or test) data peaking after a number of epochs and then stagnating or declining, even as its accuracy on the training data keeps improving. Such a model is described as ‘overfit’ to the training data.

Overfitting occurs when the model is trained for too long and therefore learns patterns that do not generalize to the test data. In other words, it relies on features that decrease its ability to accurately predict test data. Underfitting, by contrast, is when the model has not learned enough of the underlying patterns in the training data to make accurate predictions.

The most important way to avoid overfitting is to use complete training data that covers the full range of features and labels the model is expected to handle; the model will then generalize better.

Avoid using features or labels that have little influence on the predictions. This might change if additional information becomes available, which is usually handled by retraining the model with the new inputs. An overfit model maps the training data to its predictions almost perfectly, but it lacks the generalization power to predict unseen samples. This is typically seen when the model reaches an accuracy of over 98% or even 100% on the training data but performs poorly in real-life situations.

The next solution is regularization, which assumes that simpler models are better at predicting data the model has not encountered. Regularization places limits on the weights the model can learn. With its capacity constrained, the model is forced to focus on the most prominent patterns, which makes it generalize better.

One can use L1 and L2 regularization to limit the weights of the connections. L1 refers to “Least Absolute Deviations”, which minimizes the sum of the absolute differences between targeted and predicted values. L2 refers to “Least Square Error”, which minimizes the sum of the squared differences between targeted and predicted values.
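As a small illustration of these two definitions (the targeted and predicted values below are arbitrary example numbers):

import numpy as np

y_true = np.array([1.0, 2.0, 3.0])   # targeted values (example numbers)
y_pred = np.array([1.5, 1.5, 2.0])   # predicted values (example numbers)

l1 = np.sum(np.abs(y_true - y_pred))   # Least Absolute Deviations: 0.5 + 0.5 + 1.0 = 2.0
l2 = np.sum((y_true - y_pred) ** 2)    # Least Square Error: 0.25 + 0.25 + 1.0 = 1.5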

TensorFlow has functions that accept a regularizer argument for each created variable (as long as the function accepts weights as arguments), and the result is a corresponding regularization loss. The regularizer functions include l1_regularizer(), l2_regularizer(), and l1_l2_regularizer().
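In the Keras API, the same idea is expressed with the kernel_regularizer argument of a layer; a minimal sketch (the regularization factor of 0.001 is an example value):

from tensorflow.keras import layers, regularizers

# A densely-connected layer whose weights incur an L2 regularization loss;
# regularizers.l1() and regularizers.l1_l2() can be used in the same way.
layer = layers.Dense(64, activation='relu',
                     kernel_regularizer=regularizers.l2(0.001))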

Another solution to avoid overfitting is dropout, where at each training step every neuron in the network has a probability ‘p’ of being temporarily dropped for that step and possibly becoming active again at a later step. In other words, neurons are switched off at random during a training step, and a dropped neuron has the same chance as any other of being active in the next step. The hyperparameter ‘p’ is referred to as the dropout rate and is customarily set to 50%. Once training is completed, neurons are no longer dropped.

In TensorFlow, you can apply the tf.layers.dropout function to the input layer or to the output of hidden layers. The tf.layers.dropout function drops some neurons during training and divides the remaining outputs by the keep probability. tf.nn.dropout performs a similar function to tf.layers.dropout, but it does not automatically disable dropout when training is done.
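In the Keras API, dropout is added as a layer between other layers; a minimal sketch (the layer sizes are example values, and the 0.5 rate is the customary default mentioned above):

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=(32,)),
    # Each unit's output is dropped with probability 0.5 during training only.
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax')
])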

Other examples of TensorFlow Projects include:

  • Practice Project: Iris Classifier - a neural network that can classify the 3 types of Iris plants found in the Iris dataset. Example Use Case: Breast Cancer Classification.
  • Practice Project: MNIST Classifier - create a neural network that can classify the images of handwritten digits from the MNIST dataset. Example Use Case: Traffic Number Recognition.
  • Practice Project: Toxicity Classifier - which uses NLP (Natural Language Processing) to determine if a phrase is toxic in a number of categories and Mobilenet which can be used to detect content in images. Example Use Case: Sentiment Analysis.
  • Practice Project: Object Detection using a webcam - build a complete website that uses TensorFlow.js, capturing data from the webcam and re-training MobileNet to recognize Rock, Paper and Scissors gestures. Example Use Case: Facial Recognition/Object Detection.