Africa’s Talking Status Dialogue Flow Prediction Model with Keras: Part 3

Status prediction for Africa’s Talking client-based SMS API applications using deep learning in R.

Introduction

Keras is a high-level neural networks API developed with a focus on enabling fast experimentation. It is essentially a high-level wrapper that makes the use of other machine learning frameworks more convenient, providing a language for building neural networks as connections between general-purpose layers, with TensorFlow, Theano, or CNTK used as the backend. Keras has the following key features:

  • Allows the same code to run on CPU or GPU, seamlessly.
  • User-friendly API, which makes it easy to prototype deep learning models quickly.
  • Built-in support for convolutional networks (for computer vision), recurrent networks (for sequence processing), and any combination of both.
  • Supports arbitrary network architectures: multi-input or multi-output models, layer sharing, model sharing, etc. This means that Keras is appropriate for building essentially any deep learning model, from a memory network to a neural Turing machine.

The Keras API can be used from both Python and R. For R, however, we first have to set up a Python environment before we can use Keras to develop the desired model.

In this article, you’ll learn how to use Keras to build a simple deep learning network and gain hands-on experience with deep learning. Intermediate R knowledge is required as you go through the article, as is intermediate knowledge of installing and integrating Linux GUI packages. Let’s get started.

Setting Up Environment

To set up the TensorFlow environment for analysis, I will use the Keras package. The steps explained below are for an Arch Linux system with R version 4.1.1 and RStudio version 1.4.1106 installed.

1. Install Anaconda

Anaconda is the data science-focused distribution of Python. On a Windows system, one can use Python 3; on a Linux system, the Anaconda distribution is preferred. Before installation, first update your system to make sure everything is up to date:

sudo pacman -Syyu

Once the above process is done, proceed to the Anaconda guide and install the prerequisite GUI packages necessary for Anaconda on an Arch Linux system:

sudo pacman -S libxau libxi libxss libxtst libxcursor libxcomposite libxdamage libxfixes libxrandr libxrender mesa-libgl alsa-lib libglvnd

Proceed to the Anaconda download link for Arch Linux and download the file. If you have not changed the default download location, you should find the downloaded file in the Downloads folder of your PC. Note that this is a desktop setup.

Once the Anaconda file is downloaded:

  1. Right-click on the file -> Click on Properties
  2. Then click on Permissions -> Check “Allow executing file as a program”
  3. Now drag and drop the file into the terminal and press Enter until you are asked:
Do you accept the licence terms? [yes/no]
[no] >>>
Please answer 'yes' or 'no':'
>>>

Answer yes, then press Enter and wait for the installation to finish. Note: if the downloaded package file is not the latest, you might be asked to install the latest Anaconda. If so, type 'yes' and press Enter to continue.
Anaconda comes with JupyterLab, Spyder and Jupyter Notebook already installed.
The above should be done in a root terminal.

2. Install Keras, TensorFlow & Reticulate packages

Keras is an open-source neural network library written in Python and capable of running on top of TensorFlow. Keras and TensorFlow are libraries designed to help develop and improve Machine Learning (ML) models, that is, their performance in predicting patterns between variables. Keras was designed to enable simple and fast prototyping and experimentation with deep neural networks and focuses on being user-friendly, modular, and extensible. It supports convolutional networks (ConvNets/CNNs), deep learning algorithms that take in an input image, assign importance (learnable weights and biases) to various aspects/objects in the image, and differentiate one from the other. It also supports recurrent networks, a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence, allowing them to exhibit temporal dynamic behaviour. Keras runs seamlessly on both CPUs and GPUs, hence our ability to use it on a PC.

To download the packages in R, we will install them from their GitHub repositories using the devtools package:
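A sketch of the installation calls, assuming the development versions hosted under the rstudio organization on GitHub:

install.packages("devtools")                    # provides install_github()
devtools::install_github("rstudio/reticulate")
devtools::install_github("rstudio/tensorflow")
devtools::install_github("rstudio/keras")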

The reticulate package provides a comprehensive set of tools for ease of interaction between Python and R. The package allows for calling Python from R in a variety of ways including R Markdown, translation between R and Python objects and flexible binding to different versions of Python including virtual environments and Conda environments. Reticulate embeds a Python session within your R session, enabling seamless, high-performance interoperability. To bind the python environment we will:

  1. First select the python version to use in our analysis
  2. Then set keras as the virtual environment in which we will perform our analysis

By default, reticulate uses the version of Python found on your PATH. We have used the use_python() function, which enables us to specify an alternate version in case one has installed multiple Python versions; if the specified version cannot be found, an error is returned. The use_virtualenv() function enables you to specify a version of Python inside a virtual environment. For a user acquainted with Conda, the use_condaenv() function specifies a Conda environment.
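For example (the interpreter path below is an assumption; adjust it to where Anaconda installed Python on your system):

library(reticulate)
use_python("~/anaconda3/bin/python")   # specify an alternate interpreter (path assumed)
use_virtualenv("keras")                # set keras as the virtual environment for the analysis
# use_condaenv("keras")                # Conda users can bind a conda environment instead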

Thereafter, proceed to install keras:
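A minimal sketch of the default (CPU) installation, assuming a Conda-managed environment:

library(keras)
install_keras(method = "conda")   # creates the environment and installs Keras with a CPU TensorFlow backend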

For an installation of Keras over TensorFlow on a GPU system, one can run:

install_keras(method = c("conda"), conda = "auto", version = "default", tensorflow = "gpu") 

Thereafter, proceed to install tensorflow:
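An assumed snippet using the tensorflow package’s installer (the original code is not shown in this extract):

library(tensorflow)
install_tensorflow()                   # CPU build
# install_tensorflow(version = "gpu")  # GPU build on supported hardware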

Note: TensorFlow for a GPU system requires an NVIDIA graphics card with the CUDA 10.1 and cuDNN packages installed on the system. For more information on the installation of these packages, follow the link.

Extra packages

You can now proceed to load the extra packages required for our model development and analysis.
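The exact package list is not shown in this extract; based on the steps that follow, something like this is assumed:

library(keras)        # model building
library(caret)        # data partitioning
library(fastDummies)  # dummy variable creation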

With the environment ready, we can now proceed to the analysis.

Import & Modify Data

The data we would like to study comes from Africa’s Talking and concerns predicting the Status of each session in accessing a service, based on the number of dialogue boxes (Hops) and the Duration of each session. As we saw earlier, Hops and Duration have a very high correlation. Could the two also have an effect on Status?

To import the data:
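An assumed import call; file.choose() opens the file-picker dialogue described below:

data <- read.csv(file.choose(), header = TRUE)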

This prompts a dialogue box in the working directory, where we select the Africa’s Talking data containing the Status, Hops and Duration variables.
Using the fastDummies package, we will now convert the Status column into dummy variables of zeros and ones, since the machine learning model only accepts numerical values. In the process, we’ll remove the third dummy variable (Status equal to failed) to avoid the dummy variable trap.
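A sketch of the fastDummies step (the data frame name `data` is an assumption):

library(fastDummies)
data <- dummy_cols(data, select_columns = "Status")  # adds a 0/1 column per Status level
data <- data[, names(data) != "Status_failed"]       # drop one dummy to avoid the trap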

Now drop the Status column and store the refined data in a final variable:
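A one-line sketch under the names used above:

final <- data[, names(data) != "Status"]   # keep only numeric predictors and dummies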

Deep Learning with Keras Package

Partition Data

Using the caret library, we will create a split index for our classification model. We’ll then use the index to split the final data into a training set and a testing set.
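A sketch with caret’s createDataPartition (the 80/20 split proportion and seed are assumptions):

library(caret)
set.seed(123)                                                      # reproducible split
index <- createDataPartition(final$Status_success, p = 0.8, list = FALSE)
train <- final[index, ]
test  <- final[-index, ]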

One Hot Encoding

One-hot encoding redefines the dependent variable as a set of dummy variables. We’ll use the to_categorical function on the labels to ensure that the dependent variable is categorical; this is done on the Status_success variable in both the train and test sets. The remaining independent variables are prepared for Keras by scaling them with the scale function.
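A sketch under the variable names used in the text:

library(keras)
train_y <- to_categorical(train$Status_success)   # one-hot encode the labels
test_y  <- to_categorical(test$Status_success)

train_x <- scale(as.matrix(train[, c("Hops", "Duration")]))   # scale the predictors
test_x  <- scale(as.matrix(test[, c("Hops", "Duration")]))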

Create Model

In defining our model, we use keras_model_sequential() to initialize the model. We then define dense layers using the popular relu activation function and add drop-out layers to fight overfitting in our model. Finally, we add the output layer with the sigmoid activation function.
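A minimal sketch of the architecture described (the layer sizes and drop-out rates are assumptions):

model <- keras_model_sequential() %>%
  layer_dense(units = 8, activation = "relu", input_shape = ncol(train_x)) %>%
  layer_dropout(rate = 0.2) %>%                      # drop-out to fight overfitting
  layer_dense(units = 8, activation = "relu") %>%
  layer_dropout(rate = 0.2) %>%
  layer_dense(units = 2, activation = "sigmoid")     # one unit per Status class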

Thereafter, we compile the model using the categorical_crossentropy loss function, as we are solving a classification problem. We’ll use the adam optimizer for gradient descent and accuracy as the metric. We then fit our model to the training data; it will run for 200 epochs using a batch size of 32 and a 20% validation split.
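Assumed compile and fit calls matching the description:

model %>% compile(
  loss = "categorical_crossentropy",
  optimizer = "adam",
  metrics = "accuracy"
)

history <- model %>% fit(
  train_x, train_y,
  epochs = 200,
  batch_size = 32,
  validation_split = 0.2
)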

When we fit the data, we get the following results from TensorBoard:

Result

We note that:

  • The accuracy rises gradually and improves further past the 7th epoch. It seems to start tapering off at about epoch 176.
  • We also note a loss of about 11% for the validation split. The training and validation curves run in parallel, which is a good sign; if the validation loss were to increase, it would mean we are overfitting the model.
  • In the lower graph, we note that after about 10 iterations, the loss and accuracy stabilize.

Evaluate Model with Test Data

To evaluate the model with the test data:
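A minimal sketch, assuming the matrices prepared above:

model %>% evaluate(test_x, test_y)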

Output:

2/2 [==============================] - 0s 1ms/step - loss: 0.1175 - accuracy: 0.9756
     loss  accuracy 
0.1174604 0.9756098

The model we have created is 97% accurate in predicting the Status of test sessions. Hops and Duration therefore do affect whether a client succeeds in getting a service or gives up.
These are but two variables that affect the success of accessing a service; over time, one can try predicting Status using other variables.
