How To Build a Live Twitch Notifications System using Python, Africas Talking, Postgres and Heroku


During the height of the pandemic last year (2020), we went into mandatory lockdown and there wasn't much to do, so I had to find creative ways to stay entertained while cooped up inside. Being a gamer without a competent PC, watching game streams was a realistic option. During this period, I started watching streams on YouTube and Facebook Gaming (I know it's cringe) but finally settled on Twitch as my preferred platform.

Problem Statement

Most of my favourite streamers are in different countries from me and, at times, from each other. Keeping track of not just the day but the time they would go online proved a challenge, as I had to constantly follow schedules, learn the different timezones and work out how they converted to mine (EAT). More often than not, I would miss a stream, or remember and get online after it had started.

Now Twitch offers email notifications for when your favourite streamer goes online; however, I don't regularly check my email, and most times the email gets lost among a tonne of other non-important stuff. I needed a better way to be notified consistently when one of my favourite streamers goes online, ideally with some additional information. Text messages seemed to be a good fit for my use case.

Audiences and Objectives

This article is intended first and foremost for beginner and intermediate Python developers looking to learn more about interacting with the Twitch API and the Africas Talking Python SDK.
However, this shouldn't deter anyone looking to implement a quick automation or side project. The overall layout and flow can be replicated in your preferred stack.

This tutorial will endeavour to:

  • Show you how to register for a Twitch developer account and obtain the necessary credentials to access the Twitch API.
  • Make requests using 3rd party libraries to the API and unpack the returned data to get the data we need.
  • Periodically query the Twitch API to check the status of a list of channels.
  • Store the data using an ORM to a Postgres database.
  • Alert us via text message using Africas Talking once a channel is live, together with some additional info: viewer count, title and time.
  • Keep track of messages to prevent multiple similar notifications.
  • Deploy our solution to the cloud (Heroku) and configure it to run at specified intervals.

This project will make use of:

  • Twitch API – to check whether any of our favourite streamers are online.
  • Python – the language of choice to get the data, notify ourselves, and store the messages.
  • Africas Talking SMS API – to notify ourselves via a text message.
  • Postgres – our database of choice to hold our API data and sent messages.
  • Heroku – hosting and scheduling of our scripts and database.


To effectively follow along with this post and the following code, you will need the following prerequisites.

  • Python and pip (I am currently using 3.9.2; any version above 3.5 should work).
  • An Africas Talking account.
  • Twitch account.
  • The following instructions apply to a Unix system, i.e. Linux/Mac. They may work on Windows with a bit of tweaking:
  • Create a new directory and change into it.
  mkdir twitch-update
  cd twitch-update
  • Create a new virtual environment for the project or activate the previous one.
  • Using the Python package manager (pip), install the africastalking, python-dotenv, requests, sqlalchemy and psycopg2-binary libraries.
  • Save the installed libraries in a requirements.txt file.
  python -m venv .
  source bin/activate
  pip install africastalking python-dotenv requests sqlalchemy psycopg2-binary
  pip freeze > requirements.txt

There are a few alternative libraries for interfacing with the Twitch API: twitch-python, python-twitch-client, and TwitchIO. However, for this use case, adding a third-party library would necessitate reading through additional documentation; it's easier to stick to the default requests library + Twitch API workflow. Feel free to explore them further if you feel the need to.

As mentioned above, we are using PostgreSQL as our database of choice; hence we need a library to interface with the database. Psycopg2 is a good option, although there are others.

Although not necessary, we'll be using SQLAlchemy as our Object Relational Mapper (ORM). This allows us to use Python objects (classes, functions) to make transactions on the database instead of raw SQL.

Ensure you install the PostgreSQL database. Depending on which platform your code is on, you could do it natively on your system. Personally, I am using Docker, as it makes it easy to manage different software as containers and prevents my system from being cluttered.

This article is an excellent resource on how to get Postgresql and pgadmin4 installed as containers. Also, you could use a database administration tool like pgadmin4 or DBeaver for a UI to interface and manage the Postgres server.

Got all that? Let’s write some code. 

Alternatively, jump to the completed code on Github

Let's Get Verified

Twitch has a very robust dev platform that allows for the development of various products and services:

  1. Extensions – Live apps that interact during the stream as a panel or with chat.
  2. Game Insights – Mostly for game developers to get analytics data on their games.
  3. Twitch API – Collection of endpoints that expose data on clips, games, streams and users. This is what we’ll be using from now on.
  4. Twitch Game solutions – mostly for game developers; allows players to log in with Twitch in-game.

To get started with the dev platform, you need a Twitch account. Sign up for one or log in with your existing account, then go to the developer console.

You should be signed in and have a screen as shown below:

Select the console button in the top right of the screen. Once in the console as seen:

Click the Applications tab. Select Register Your Application. You should now be greeted with a form seen below:

Enter a unique application name (make sure it doesn't contain the word "twitch") and an OAuth redirect URL to return to after authentication (you can use "https://localhost" to get started for this tutorial). Select an application category (in this case, I chose Analytics Tool), confirm you're not a robot, then click the Create button.

If everything went well, you should now be redirected to the main console page and see your newly created application. Now click on Manage, and you should be presented with a page to manage your specific application's settings. Take note of the Client ID and Client Secret sections; we'll need these to authenticate our script with the API. Click the New Secret button to generate a new client secret.

Back to our code: Create a .env file and add our Twitch credentials.

Enter the following replacing the placeholders with the proper credentials.
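A minimal `.env` might look like this; the variable names match the ones we set later as Heroku config vars, and the values here are placeholders:

```
client_id=your_twitch_client_id
client_secret=your_twitch_client_secret
```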

This step is entirely optional; we could hard-code our credentials in the script, but that's bad security practice even in development. There is also an `env-example` file in the GitHub repo for further reference.

We could use a variety of tools to test our credentials and check the response. However, I settled on Insomnia as it’s an open-source Postman alternative.

Enough Talk, Show me the Data!

Now that we have the required Twitch credentials, we can make authenticated requests to their API to retrieve data. Create a new file inside our working folder; this file will hold all of our code to interact with both the Twitch API and Africas Talking.


The Twitch API has a slew of endpoints covering Analytics, Channels, Users, Streams, Chat and many others. For a detailed explanation of these endpoints, check the docs. We'll be making use of the streams endpoint. Also, note that at the time of writing, Twitch was deprecating its legacy v5 API (Kraken); hence, it's advisable to upgrade to the new Helix API.

Inside the file, we’ll import the necessary python packages. We’ll then attempt to make requests to the Twitch API and show the resultant data.

In the above snippet, we import the os module from the standard utility modules and the requests library, an extensively used package for making HTTP requests in Python. We also import the load_dotenv function from the dotenv module, which allows us to safely get our credentials from our .env file.

When we call load_dotenv(), it checks for an environment file inside the current folder; optionally, we can specify the path inside the brackets.
We define a client_id variable to hold our client id value, followed by a variable for the client secret value from Twitch.

We then define an endpoint variable to refer to the streams endpoint we want to make requests to. According to the twitch docs, we need to pass our credentials as headers while making requests. We create a headers variable which is a dictionary of our credentials.

Now we also need to pass along channel names we want to retrieve information about. For this, we need to specify them as parameters when making requests.
We create a param, a dictionary variable with a list of channels as the value. Twitch allows up to 100 names, allowing you to scale up depending on the number of channels you want to retrieve.
For this instance, I have included top channels that regularly stream for demonstration purposes. Feel free to include your favourite channels.

Finally, we make a request to the Twitch API, passing along all of our values as arguments. Requests has a built-in JSON decoder, which is handy as Twitch returns a JSON response; this removes the need for extra code to parse the response content. We then print the response in the console.
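Putting those steps together, a sketch of the request code might look like the following. The channel names and the `fetch_streams` wrapper are illustrative; the token-fetch step and header names follow the current Helix authentication flow from the Twitch docs, since Helix expects an OAuth app access token alongside the Client-ID header:

```python
import os

import requests

try:
    from dotenv import load_dotenv
    load_dotenv()  # read client_id / client_secret from the .env file
except ImportError:
    pass  # fall back to plain environment variables

client_id = os.getenv("client_id")
client_secret = os.getenv("client_secret")

# The Helix streams endpoint
endpoint = "https://api.twitch.tv/helix/streams"


def get_app_access_token(client_id, client_secret):
    """Exchange the client credentials for an OAuth app access token."""
    resp = requests.post(
        "https://id.twitch.tv/oauth2/token",
        params={
            "client_id": client_id,
            "client_secret": client_secret,
            "grant_type": "client_credentials",
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


def build_headers(client_id, token):
    # Credentials are passed as headers, per the Twitch docs
    return {"Client-ID": client_id, "Authorization": f"Bearer {token}"}


# Up to 100 channel names can be passed per request; these names are placeholders
params = {"user_login": ["channel_one", "channel_two", "channel_three"]}


def fetch_streams():
    token = get_app_access_token(client_id, client_secret)
    response = requests.get(endpoint, headers=build_headers(client_id, token), params=params)
    # requests has a built-in JSON decoder, and Twitch returns JSON
    return response.json()
```

Calling `print(fetch_streams())` should dump the raw JSON response to the console.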

Unpack The Data

After we get a JSON response, we need to parse it in order to retrieve the data we need; in this instance, that's whether the stream is active.

The JSON response is basically a dictionary with the key 'data' and the requisite information as a list of values. The code above takes the JSON response returned by Twitch, inserts the values of data into an empty list and assigns it the variable name streams.

We now need a way to check, from the data returned, whether any of the streamers are online. This is made easy by checking, for each entry, if the key 'type' has the value 'live'.
We define a lambda function is_active that takes a single argument, stream, and checks whether its type key has a value of live in the data returned.

The streams_active variable applies the filter function with our lambda against the JSON response, returning only the live streams.
In order to check if we have an active stream, we use a function that returns either True or False depending on whether our dictionary keys are populated.
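A small sketch of that unpacking logic is below. The sample response is trimmed to the fields we use, and the choice of `any()` as the True/False check is my assumption based on the description above:

```python
# A trimmed sample of the JSON shape Helix returns (real responses carry more fields)
response_json = {
    "data": [
        {"id": "1", "user_name": "channel_one", "type": "live", "viewer_count": 1500},
        {"id": "2", "user_name": "channel_two", "type": "", "viewer_count": 0},
    ]
}

# Unpack the list of stream dictionaries held under the 'data' key
streams = [stream for stream in response_json["data"]]

# A stream is active when its 'type' key holds 'live'
is_active = lambda stream: stream["type"] == "live"

# filter() applies the lambda to each stream, keeping only the live ones
streams_active = list(filter(is_active, streams))

# any() returns True if at least one dictionary made it through the filter
at_least_one_stream_active = any(streams_active)
print(at_least_one_stream_active)  # True for the sample above
```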

Let’s Store The Data

This section will deal with storing the data from the API and Messages as we send them. As mentioned earlier, we are persisting the data to a Postgres database across multiple executions of our script.

This gives us a reference point when notifying ourselves, which is important as we'll deploy the script to run intermittently throughout the day.
We could include the code in our current script; however, in the interest of separating concerns, we'll create a new file to hold all of our initial DB settings and connection.

To keep our code clean, update your environment variables with your Postgres settings: the host, the port where the DB is running, the database name, the database username and the password.
Your .env file should resemble the one below:
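A sketch of the full file, with placeholder values; the variable names mirror the ones we later set as Heroku config vars:

```
client_id=your_twitch_client_id
client_secret=your_twitch_client_secret
postgres_host=localhost
postgres_port=5432
postgres_db=your_database_name
postgres_username=your_database_user
postgres_password=your_database_password
```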

The above is again optional as you could have your credentials in your file as variables. However, since we’ll be pushing to GitHub, it is not advisable.

Database Config

Create a new file and inside it add the requisite libraries needed to interface with the DB.

SQLAlchemy is an Object Relational Mapper: it takes native Python objects, converts them to SQL and executes it against a specified database.
Using an ORM allows us to define our schema and possible relationships using Python code. The above snippet imports the required field types:
Integer, String, BigInteger and DateTime.

These tell SQLAlchemy, and by extension the database, what type of data will be stored in each table, thus enforcing a schema.
We also import Column and Table, which will aid in explicitly defining the structure of our database.
The create_engine function will create a connection to our database using a specified connection string. We then import the sessionmaker class to create session objects.
Sessions allow us to make several transactions, and the ORM will hold the state of the objects until we close the session or roll back the transaction.

We also import the declarative_base factory function that generates a Base class, which all mapped classes should inherit.

Among the data we'll be storing is the date and time each stream started, so we need the datetime module and import it. We also import the os module to access our .env file.
As previously mentioned, we are making use of environment variables, thus we import the load_dotenv function.
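The import block for this file might look like the sketch below. Note that on SQLAlchemy versions before 1.4, `declarative_base` lives in `sqlalchemy.ext.declarative` instead:

```python
import os
from datetime import datetime  # used for the started_at timestamps

from sqlalchemy import (
    BigInteger,
    Column,
    DateTime,
    Integer,
    String,
    Table,
    create_engine,
)
from sqlalchemy.orm import declarative_base, sessionmaker

try:
    from dotenv import load_dotenv
    load_dotenv()  # pull the Postgres settings out of the .env file
except ImportError:
    pass  # fall back to plain environment variables
```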

Classes for the Data

The code above initializes a Base class from the declarative_base function. We then create a Stream class, inheriting from Base, that will hold our stream data.

We then define the __tablename__ attribute and set it to stream, going on to define the various columns that will be created in the stream table.
For each variable we specify as a column, SQLAlchemy will create a column using the given name and field type.

The BigInteger type is for values above 2 billion; for most use cases, Integer would be sufficient, however the stream id returned by Twitch is quite large, which causes problems with the Integer type. Setting it as the primary_key prevents duplicate entries.

Field Types

The other fields are pretty straightforward:

  1. user_name for the channel name, which is a string.
  2. viewer_count for the current number of viewers, which is a number.
  3. user_id refers to the specific identification number generated by Twitch, hence it's a number.
  4. game_name refers to the specific game currently being played, hence a string.
  5. title of the current stream, thus a string.
  6. started_at refers to the time the stream was created; here we use the DateTime type.
  7. message_id is the id returned once a notification message is sent.

We then create a Message class to model the data relating to the text notification. Here we also specify a __tablename__ variable and set it to message.
Postgres requires each table to have a unique primary key, hence we create an id variable, set it as the primary_key and set its autoincrement value to true.
This ensures the primary key increases consistently with each insert.

The other fields are explained below:

  1. message_id refers to the id returned by Africas Talking when a message is successfully sent; it's set to a string as it's usually a mixture of alphanumeric values.
  2. message is the actual message from the script to us; naturally it is a string, even though it contains a combination of different types.
  3. time_created refers to the time the notification is sent, thus the DateTime type.
  4. stream_id is the id of the stream in the notification.
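Based on the field descriptions above, the two models might be sketched as follows (the imports are repeated here so the snippet stands alone):

```python
from sqlalchemy import BigInteger, Column, DateTime, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Stream(Base):
    __tablename__ = "stream"

    # Twitch stream ids exceed the 32-bit Integer range, hence BigInteger;
    # making it the primary key also prevents duplicate entries
    id = Column(BigInteger, primary_key=True)
    user_name = Column(String)
    user_id = Column(BigInteger)
    game_name = Column(String)
    title = Column(String)
    viewer_count = Column(Integer)
    started_at = Column(DateTime)
    message_id = Column(String)


class Message(Base):
    __tablename__ = "message"

    # Postgres needs a unique primary key; autoincrement keeps it growing per insert
    id = Column(Integer, primary_key=True, autoincrement=True)
    message_id = Column(String)
    message = Column(String)
    time_created = Column(DateTime)
    stream_id = Column(BigInteger)
```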

Save the Stream!

Here we’ll create a function to save the stream data into the database.

The code above defines the add_stream function, which accepts multiple arguments. A session refers to the session variable created by sessionmaker;
this allows us to group multiple transactions and execute them together or roll them back entirely. The rest of the arguments are defined above.

We then define the stream variable, which uses the session to query whether the stream_id argument exists in the database; by attaching the one_or_none method, we get back at most one value or None.
If the id exists, it's returned as the result; in case more than one value is found, an exception will be thrown. We return a message notifying the user that the stream exists.

If the value returned is None, we create a new Stream object, passing along the arguments, and assign it to the stream variable.
A try/except block is added to attempt to add the stream to the current session; in case of an error, we show it in the console and roll back the session.
We finally persist our session to the database by calling session.commit().
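A sketch of that function is below; the trimmed-down Stream model is repeated so the snippet stands alone, and the argument names are assumptions based on the fields described earlier:

```python
from datetime import datetime

from sqlalchemy import BigInteger, Column, DateTime, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Stream(Base):
    # Trimmed-down version of the model described earlier
    __tablename__ = "stream"
    id = Column(BigInteger, primary_key=True)
    user_name = Column(String)
    user_id = Column(BigInteger)
    game_name = Column(String)
    title = Column(String)
    viewer_count = Column(Integer)
    started_at = Column(DateTime)


def add_stream(session, stream_id, user_name, user_id, game_name,
               title, viewer_count, started_at):
    # one_or_none() gives back the matching row, None, or raises if several match
    stream = session.query(Stream).filter(Stream.id == stream_id).one_or_none()
    if stream is not None:
        print(f"Stream {stream_id} already exists")
        return
    stream = Stream(
        id=stream_id,
        user_name=user_name,
        user_id=user_id,
        game_name=game_name,
        title=title,
        viewer_count=viewer_count,
        started_at=started_at,
    )
    try:
        session.add(stream)
    except Exception as error:
        print(error)
        session.rollback()
    # Persist the session to the database
    session.commit()
```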

Configure Everything!

This section will cover the Postgres database configuration, with SQLAlchemy + psycopg2 as the DBAPI.

We start off by defining the main function. Inside the function, we get our environment variables to connect to our database.
Remember to cross-reference your .env file and make sure the names are correct. We define the engine variable using the create_engine function;
this takes a connection string in the format: dialect+driver://username:password@host:port/database.
Here, dialect refers to postgresql, the driver is psycopg2, and then we pass along the database configuration parameters. Read more about database URLs.

We call the metadata object from the Base class, and from it we use the create_all method to create all the tables defined above.
We then create a Session object from the sessionmaker factory, passing in the engine variable.
This gives us a configurable session from our connection, which we further use to add, query and delete items from the DB.
We assign the Session object to the session variable and finally return the session as the function's output.
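A sketch of that configuration step is below; the `build_db_url` helper is my own addition to keep the connection-string logic testable, and in the real file the models above would hang off the same Base:

```python
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()  # in the real file, the Stream and Message models use this Base


def build_db_url(username, password, host, port, database):
    # Connection string format: dialect+driver://username:password@host:port/database
    return f"postgresql+psycopg2://{username}:{password}@{host}:{port}/{database}"


def main():
    url = build_db_url(
        os.getenv("postgres_username"),
        os.getenv("postgres_password"),
        os.getenv("postgres_host"),
        os.getenv("postgres_port"),
        os.getenv("postgres_db"),
    )
    engine = create_engine(url)
    # Create every table defined on Base that doesn't exist yet
    Base.metadata.create_all(engine)
    # sessionmaker gives us a configurable Session factory bound to our engine
    Session = sessionmaker(bind=engine)
    return Session()
```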

Notification Gang

This section will bring together everything to send text message notifications.

We begin by importing the Africas Talking python sdk; this gives us access to the classes and methods that allow us to communicate with their API with minimal boilerplate. We then import our predefined functions and classes from the database file; these will be used at different stages of our script execution.

Previously, we defined the at_least_one_stream_active variable, which returns True if our dictionary keys are populated and False otherwise, including when
the dictionary is empty. Here, our if statement checks whether the variable is True, then assigns an empty message list; this will hold the content of our message.

We then loop through the streams variable, which is a list of dictionaries. The timestamp returned by the API is in a different format than we want to work with, which necessitates converting it. The datetime module provides a helpful strptime() function; we pass our current timestamp and our preferred format as arguments and get back a converted timestamp.
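The conversion can be sketched like this; the sample timestamp follows the RFC 3339 style (UTC) that the Helix API uses for started_at:

```python
from datetime import datetime

# Twitch returns started_at timestamps like '2021-03-01T16:45:23Z' (UTC)
started_at = "2021-03-01T16:45:23Z"

# Convert the string into a datetime object we can store and compare
converted = datetime.strptime(started_at, "%Y-%m-%dT%H:%M:%SZ")
print(converted)  # 2021-03-01 16:45:23
```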

Checks please!

After we get our correct timestamp, we append the data from each stream to the empty message list. We then call the add_stream function, passing along the session returned from the main function and the other relevant parameters.
The function will first check if the specified stream_id exists in the database; if not, it will add it and the resultant data, and if it does, the entire session will be rolled back.

Inside our if block, we define a stream_notification function that takes our session and message list as arguments. We execute a query using the session, the Message class and the live_message argument. Essentially, this queries the stream_id from the message table against the stream id of the current message; if there is a match, it returns a field, else None is returned.

We assign the result to the m1 variable. We then add an if statement that checks whether the result is None; if so, we run some checks on the live_message and add a try/except block that attempts to send our notification message.

Messages for all

In the try/except block, we attempt to send our message using the send() function, passing our message as a list and our mobile_number as arguments.
We define an empty message_id variable that we then assign inside the for loop from the sent message.

For each value in our live_message argument, we join the items inside into a string and call the add_message function, passing the relevant values. These include the session, the message id as a string, the message sent, the current date and time, and the stream id. In case of an error, it's printed to the console.

If the query returned a value, we print a string letting the user know the message has already been sent. We then call our function, passing along our session from the main function and the message list. By abstracting the logic into a function,
we make the code more portable, easier to debug and more readable.
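The sending step can be sketched as below. The message format and helper names are my own; the initialize/SMS.send calls and the response shape follow the Africas Talking Python SDK docs:

```python
import os


def build_live_message(user_name, title, viewer_count, started_at):
    # Hypothetical message format; adjust the wording to taste
    return f"{user_name} is live: {title} ({viewer_count} viewers), started at {started_at}"


def send_notification(message, mobile_number):
    # Imported lazily so the pure helper above can be used without the SDK installed
    import africastalking

    africastalking.initialize(os.getenv("at_username"), os.getenv("at_api_key"))
    sms = africastalking.SMS
    try:
        response = sms.send(message, [mobile_number])
        # Keep the messageId so repeat notifications for the same stream can be suppressed
        return response["SMSMessageData"]["Recipients"][0]["messageId"]
    except Exception as error:
        print(f"Something went wrong: {error}")
        return None
```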

Deploy The Code

This section will focus on deploying our code to the cloud and setting up a scheduler to run it at set intervals.
There are a lot of good cloud vendors we could use; however, Heroku takes the cake for its ease of deployment and a free tier that will be enough for demo purposes.
It is easy enough to choose your preferred cloud provider instead, e.g. Digital Ocean, AWS, GCP etc. Create an account or sign in on Heroku.

After creating an account and logging in, I recommend you install the Heroku CLI for an easier time during deployment.
Now let's begin deployment: open your terminal and run heroku create --app twitch-update. If you go to your app dashboard, you'll see your new app. In case you receive an error, modify your app name to be unique. Read more here.

Alternatively use the browser to create the app.

We need to create a runtime.txt file to tell Heroku which version of python we want it to run. I set mine to 3.9.2 to replicate my development environment.

  echo "python-3.9.2" > runtime.txt

We now need to initialize a git repo and push the code on Heroku:

  git init
  git branch -M main
  git add .
  git commit -am 'Deploy twitch notification script'
  git push heroku main

In case you get an error running the above commands, change your app name, as it's usually required to be unique. Creating the app via the CLI also adds a heroku remote to git.
Once we’re done here, let’s open up our Heroku dashboard page for your newly created Heroku application.

Your app is now on Heroku, but it is not doing anything yet. Since this little script can't accept HTTP requests, going to the app's URL won't do anything, but that should not be a problem.

Setup free tier managed PostgreSQL service on Heroku

Take note that the free tier only has a limit of 10,000 rows at the time of writing this.

This step is fairly simple, simply go to the ‘Resources’ tab on your Heroku dashboard and look for ‘Heroku Postgres’, select the free tier (or whichever tier you deem fit).

To find your database credentials, simply click on your ‘Heroku Postgres’ add-on → select ‘Settings’ → ‘View Credentials’

Finally, add your credentials to the config vars that Heroku will use during runtime. This is similar to how we’ve been storing our credentials in a .env file. You could either set them via the Heroku console in the browser or terminal using the Heroku CLI. Make sure you change the values to your actual credentials.

  # modify the values to your own
  heroku config:set at_api_key=api_key_here
  heroku config:set at_username=Username_here
  heroku config:set client_id=client_id_here
  heroku config:set client_secret=client_secret_here
  heroku config:set postgres_host=postgres_host
  heroku config:set postgres_port=postgres_port
  heroku config:set postgres_username=db_Username_here
  heroku config:set postgres_password=db_password_here
  heroku config:set postgres_db=db_name
  heroku config:set mobile_number=+2547xxxxxxxxx

Alternatively, you can add your configuration variables in the browser via: ‘Settings’ → ‘Reveal Config Vars‘. This will allow Heroku to get and set the required environment configuration for our script to run. As shown below:

configuration updates

Now let’s update our script to use the config vars. This step can be done in a separate branch in order to keep code in the main branch intact.

git checkout -b heroku_deployment

After making your changes, commit and push to Heroku.

# push your changes to heroku 
git commit -am "Updates to the environment files"
git push -f heroku heroku_deployment:main

By the end of this, if you were to visit your Heroku dashboard activity feed, you should see your application there with the latest activity indicating that your app has been deployed.
If you try to run heroku run python on your local terminal, you should see that it will attempt to run the script on your Heroku server.

If everything ran as expected, you should see output in your terminal. In the database add-on, we can run a simple query to check our database tables.

Heroku Scheduler

This section of the article shows you how you can run our script periodically.
Though Heroku offers several schedulers that could run your application periodically, I personally prefer ‘Heroku Scheduler’ as it has a free tier, and it is super simple to use.

To use the free tier of this add-on, Heroku requires you to add a payment method to your account. To add the scheduler: go to the ‘Resources’ tab on your Heroku dashboard and look for ‘Heroku Scheduler’.

  1. Configuration

Inside your newly added ‘Heroku Scheduler’ add-on, simply select ‘Add Job’ in the top right corner, and you should see the screen as shown in the picture below.

To run the python command periodically, simply select a time interval and save the job.

  1. How do I schedule a daily job?
    Simply configure the ‘Heroku Scheduler’ to run our Python script at a specified time; in our case, it runs every hour at 10 minutes past. Then it should run our command.

You could check the logs to see if there are any errors, as well as check that the script is running as expected.


From the start, this article was meant to offer a DIY solution for receiving text notifications when one of your Twitch streamers starts a stream.
The workflow is something like: Twitch API -> Python Script -> Postgres -> Africas Talking -> Us. We have achieved our objectives; further, we have deployed our script to the cloud to truly automate our solution. Incorporating a database also means we can do data analysis down the line if we desire to do so.

I hope you liked this write-up and that it inspires you to extend it further or sparks further interest in developing solutions. If you have any questions or comments, let me know in the comments below or on Twitter.

Keep coding!

