AI Building Blocks: How TanH Works

The TanH function is commonly used as an activation function in neural nets. This post explains what the TanH function is, how it works, and what it looks like.

The TanH function I’ll be discussing is a mathematical equation, shown here:
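In standard notation, with e as Euler's number, the hyperbolic tangent is:

$$\tanh(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}}$$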

The TanH function squashes an input, z, to an output between -1 and 1. As z decreases, the output approaches -1; as z increases, the output approaches 1. Here's the TanH function in Python code:
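The original listing isn't reproduced here, so this is a minimal sketch of a tanh() implementation using NumPy (the name tanh() matches the test code later in the post):

```python
import numpy as np

def tanh(z):
    # (e**z - e**-z) / (e**z + e**-z), squashing z to the range (-1, 1)
    return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))
```

Because np.exp(z) computes e raised to the power z, this mirrors the mathematical definition directly.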

This example uses the NumPy exp() function for e in the equation, which stands for Euler's number, an irrational number approximately equal to 2.718281.
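You can confirm the value of e in a quick one-liner:

```python
import numpy as np

# e**1 is simply Euler's number
print(np.exp(1))  # prints 2.718281828459045
```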

The following code tests the tanh() function, exporting its output to a *.csv file:
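The original listing isn't shown, so here's a sketch under the assumption that one tanh(z) value is written per row (so the outputs land in column A when the file is opened in Excel), with tanh() redefined to keep the snippet self-contained:

```python
import numpy as np

def tanh(z):
    return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))

# write one tanh(z) value per row for the integers -9 through 9
with open('tanh.csv', 'w') as f:
    for z in range(-9, 10):
        f.write(f'{tanh(z)}\n')
```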

This code passes the integers -9 through 9 to tanh() and writes the results to the tanh.csv file. The purpose of the tanh.csv file is to visualize the TanH function, shown in the following figure:

Though the image shows the line as nearly horizontal at the ends, -1 and 1 are limits. That is, as z decreases or increases without bound, the line approaches, but never equals, -1 or 1, respectively. You can see this behavior in the numbers in column A and can test it by increasing the range of numbers in the test code. To produce the chart, open tanh.csv in Excel, select the data in column A, and insert a line chart.

Now you should see something similar to the figure above. Additionally, you can double-click the title to change it, move the chart around, and customize several other properties.
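As a quick numeric check of those limits, you can print a few values (reusing the tanh() sketch from earlier, redefined here so the snippet runs on its own):

```python
import numpy as np

def tanh(z):
    return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))

# each value creeps closer to 1 without mathematically reaching it
for z in (3, 6, 9):
    print(z, tanh(z))
```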

Now you know what the TanH function is, you've seen Python code showing how it works, and you've visualized it in an Excel chart.

As an activation function, TanH is often preferable to Sigmoid because its output is centered on zero rather than 0.5. In addition to TanH, a more recent activation function, ReLU, has gained increased popularity.
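For comparison, here are minimal sketches of all three activations, written from their standard definitions (this code is not from the original post):

```python
import numpy as np

def sigmoid(z):
    # squashes z to (0, 1); outputs center on 0.5
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # squashes z to (-1, 1); outputs center on 0, which often helps training
    return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))

def relu(z):
    # passes positive inputs through unchanged, zeros out negatives
    return np.maximum(0, z)
```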


Author, Instructor, & Independent Consultant. Author of C# Cookbook (http://bit.ly/CSharpCookbook) @OReillyMedia #ai #csharp #linq2twitter #twitterapi
