First steps with TensorFlow – Part 2

Part 2 – Logistic Regression

If you have had some exposure to classical statistical modelling and wonder what neural networks are about, then multinomial logistic regression is the perfect starting point: It is a well-known statistical classification method and can, without any modifications, be interpreted as a neural network.

Logistic regression recap

The task of logistic regression is to predict a categorical variable from a set of continuous predictor variables. The idea is to use "something like" linear regression. Logistic regression takes several steps to transform the task into "something like" a regression problem. It turns out that the steps are very similar to what you do when you build a neural network.

So, assume we want to predict one of \(n\) categories from \(m\) continuous input variables.

Step 1: Make the predicted variable continuous

Instead of predicting one of the \(n\) categories we try to predict an \(n\)-dimensional vector \(p = (p_1, …, p_n)^\top\) which contains the probabilities of each of the categories for a given set of inputs. Now, at least technically, we can predict the outcome by a linear model

\(p \stackrel{?}{=} w \cdot x + b\)

where \(x\) is the \(m\)-dimensional vector of input variables, \(w\) is an \(n \times m\) matrix consisting of the row vectors \(w_i\), and \(b\) is a vector of size \(n\). If we were talking about a neural network, \(w\) would be called the weight matrix and the elements of \(b\) would be called the biases of the model.

N.B.: All vectors are assumed to be column vectors.
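
To fix the shapes, here is a tiny NumPy sketch for \(n = 3\) categories and \(m = 4\) input variables (all values are illustrative placeholders, not taken from the text):

import numpy as np

n, m = 3, 4                            # n categories, m input variables
w = np.zeros((n, m))                   # weight matrix: one row vector w_i per category
b = np.zeros(n)                        # one bias per category
x = np.array([5.1, 3.5, 1.4, 0.2])     # a single input vector

scores = w @ x + b                     # one raw score per category, shape (3,)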

Step 2: Make the predicted values positive

In order to make \(w \cdot x + b\) a sensible predictor of \(p\) we first want to make sure that all predicted probabilities are positive. To that end we transform each component of the model output using the exponential function

\(p_i \stackrel{?}{=} e^{w_i \cdot x + b_i}\)

Step 3: Make sure that the predicted values sum up to one

And, of course, we want the probabilities to sum up to one. Therefore we normalize the predictions from the previous, tentative, equation and get

\(p_i = \frac{e^{w_i \cdot x + b_i}}{\sum\limits_{j} e^{w_j \cdot x + b_j}}\)

This is the model for multinomial logistic regression. In the context of neural networks it is common to rewrite this expression in terms of the softmax function, which is defined by

\(softmax(z)_i = \frac{e^{z_i}}{\sum\limits_{j} e^{z_j}}\)

resulting in

\(p = \mathit{softmax}(w x + b)\)
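
The softmax function itself is easy to check with a few lines of NumPy (a minimal, illustrative sketch; shifting the argument by its maximum is a common numerical-stability trick and does not change the result):

import numpy as np

def softmax(z):
    """Softmax of a vector z: exponentiate each component, then normalize."""
    e = np.exp(z - np.max(z))   # subtracting max(z) avoids overflow without changing the result
    return e / np.sum(e)

z = np.array([2.0, 1.0, 0.1])
p = softmax(z)
print(p)          # ≈ [0.659 0.242 0.099]
print(p.sum())    # 1.0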

Step 4: Estimate the parameters

Assume that our training data consist of a set of input vectors \(x^{(k)}\) with associated labels (true categories) \(y^{(k)}\). The labels are assumed to be one-hot encoded, i.e. each \(y^{(k)}\) is a vector of length \(n\) which has the value \(1\) at the position of the true category and the value \(0\) otherwise.
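
For example, with \(n = 3\) categories a label for the second category would be one-hot encoded like this (a minimal NumPy sketch):

import numpy as np

n = 3                          # number of categories
true_category = 1              # zero-based index of the true category (here: the second one)
y = np.eye(n)[true_category]   # array([0., 1., 0.])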

As usual we will determine the model parameters \(w_{i, j}\) and \(b_j\) using maximum likelihood estimation. More specifically, we will try to maximize the (log-)likelihood that the model, specified by the parameters \(w\) and \(b\), assigns to the true categories \(y^{(k)}\) given the values \(x^{(k)}\) of the predictor variables. Assume that the true category for the \(k\)-th sample from the training set is \(i\). Then the log-likelihood of the \(k\)-th sample is

\( \log \mathcal{L}(w, b \mid x^{(k)}, y^{(k)}) = \log p(x^{(k)})_i\)

which can be re-written as

\( \log \mathcal{L}(w, b \mid x^{(k)}, y^{(k)}) = \sum\limits_{j} y^{(k)}_j \cdot \log p(x^{(k)})_j \)

(Recall that \(y^{(k)}\) is one-hot encoded, i.e. that \(y^{(k)}_i=1\) and that \(y^{(k)}_j=0\) for \(j \neq i\).)

In the context of neural networks it is common to rewrite this expression in terms of the categorical cross-entropy function

\( H(y, p) = -\sum\limits_{i} y_i \cdot \log p_i \)

resulting in

\( \log \mathcal{L}(w, b \mid x^{(k)}, y^{(k)}) = -H(y^{(k)}, p(x^{(k)})) =-H(y^{(k)}, \mathit{softmax}(w x^{(k)} + b))\)
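
To make the sign conventions concrete, here is a tiny NumPy check (with an arbitrarily chosen probability vector) that the log-likelihood of one sample is just minus its cross-entropy:

import numpy as np

y = np.array([1.0, 0.0, 0.0])            # one-hot label: the true category is the first one
p = np.array([0.8, 0.15, 0.05])          # model output softmax(w x + b) for this sample (illustrative)

cross_entropy = -np.sum(y * np.log(p))   # H(y, p)
log_likelihood = -cross_entropy          # = log p[0] = log(0.8) ≈ -0.223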

This is the definition of a perfectly normal neural network with an input layer, no hidden layers, an output layer that uses the softmax activation function and a categorical cross-entropy loss function.
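
To make this neural-network reading explicit, the same model could be written in a few lines with the high-level Keras API (a minimal sketch, assuming a TensorFlow version that ships tf.keras; in this post we stay with the low-level API in order to see every step):

import tensorflow as tf

# One dense layer mapping 4 inputs to 3 outputs with a softmax activation,
# trained with the categorical cross-entropy loss: exactly the model derived above.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="softmax", input_shape=(4,))
])
model.compile(optimizer="sgd", loss="categorical_crossentropy")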

In order to obtain our estimates of \(w\) and \(b\) we maximize the log likelihood of all training samples combined, i.e. we determine

\( \underset{w, b}{\arg\max} \; \sum\limits_{k} -H(y^{(k)}, \mathit{softmax}(w x^{(k)} + b))\)

or, equivalently,

\( \underset{w, b}{\arg\min} \; \underset{k}{avg} \, H(y^{(k)}, \mathit{softmax}(w x^{(k)} + b))\)

In neural network terms, estimating \(w\) and \(b\) from all training samples at once corresponds to batch gradient descent (as opposed to stochastic gradient descent and mini-batch gradient descent).
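
TensorFlow will compute the required gradients for us below, but to make "batch gradient descent" concrete, here is an illustrative NumPy sketch of a single update step over the whole training set. It uses the well-known closed form \(p - y\) for the gradient of the softmax cross-entropy with respect to the linear scores; all shapes and values are placeholders:

import numpy as np

def softmax_rows(z):
    """Row-wise softmax for a matrix of scores (one row per sample)."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.RandomState(0)
K, n_in, n_out = 100, 4, 3                      # samples, input variables, categories
X = rng.rand(K, n_in)                           # samples as rows (same convention as below)
Y = np.eye(n_out)[rng.randint(n_out, size=K)]   # random one-hot labels

W = np.zeros((n_in, n_out))
b = np.zeros(n_out)
learning_rate = 0.1

# One batch gradient descent step on the averaged cross-entropy loss.
P = softmax_rows(X @ W + b)
grad_scores = (P - Y) / K                       # gradient of the loss w.r.t. the linear scores
W -= learning_rate * (X.T @ grad_scores)        # chain rule through X W + b
b -= learning_rate * grad_scores.sum(axis=0)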

We will show below how batch gradient descent in TensorFlow can be used to estimate a logistic regression model.

N.B.: Specialized logistic regression packages will use algorithms like L-BFGS rather than gradient descent to estimate the model parameters. We use gradient descent because we are mainly interested in the analogy to neural networks.

N.B.: Due to the fact that the probabilities are constrained to sum up to \(1\), one of the output dimensions is redundant. Therefore most regression packages reduce the number of dimensions of \(w\) and \(b\) by \(1\) by defining one of the output probabilities as a "pivot" relative to which the others are expressed. We skip that step because we are more interested in preserving the symmetry of the formulation and the analogy to neural networks.

Example: Iris data

We implement the steps outlined above using the classical iris data set. The task is to predict one of three iris species from four measurements on the iris flower, i.e. length and width of sepals and petals.

import tensorflow as tf
import numpy as np
import pandas as pd
import sklearn as skl
import sklearn.model_selection   # imported explicitly so that skl.model_selection is available below
#

We first download a csv file with the iris data into a pandas data frame

data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data", sep=",",
                   names=["sepal_length", "sepal_width", "petal_length", "petal_width", "iris_class"])
#

Because the downloaded data are ordered by iris species, we shuffle them

np.random.seed(0)
data = data.sample(frac=1).reset_index(drop=True)
#

The downloaded data set now looks like this

data.head()
#
   sepal_length  sepal_width  petal_length  petal_width       iris_class
0           5.8          2.8           5.1          2.4   Iris-virginica
1           6.0          2.2           4.0          1.0  Iris-versicolor
2           5.5          4.2           1.4          0.2      Iris-setosa
3           7.3          2.9           6.3          1.8   Iris-virginica
4           5.0          3.4           1.5          0.2      Iris-setosa

Then we extract the predictor variables

all_x = data[['sepal_length', 'sepal_width', 'petal_length', 'petal_width']]
#
all_x.head()
#
   sepal_length  sepal_width  petal_length  petal_width
0           5.8          2.8           5.1          2.4
1           6.0          2.2           4.0          1.0
2           5.5          4.2           1.4          0.2
3           7.3          2.9           6.3          1.8
4           5.0          3.4           1.5          0.2

and the target variable, which is one-hot encoded using the pandas function pd.get_dummies()

all_y = pd.get_dummies(data.iris_class)
#
all_y.head()
#
   Iris-setosa  Iris-versicolor  Iris-virginica
0            0                0               1
1            0                1               0
2            1                0               0
3            0                0               1
4            1                0               0

We remember the number of input and output dimensions

n_x = len(all_x.columns)
n_y = len(all_y.columns)
#

and split the data into 2/3 training and 1/3 test using the utility function model_selection.train_test_split() from scikit-learn

train_x, test_x, train_y, test_y = skl.model_selection.train_test_split(all_x, all_y, test_size=1 / 3)
#
print(train_x.shape)
print(train_y.shape)
print(test_x.shape)
print(test_y.shape)
#
(100, 4)
(100, 3)
(50, 4)
(50, 3)
#

We are now ready to build our TensorFlow graph. We work with the default graph and an InteractiveSession, which installs itself as the default session, because that is more convenient in a notebook.

sess = tf.InteractiveSession()
#

We define the input for our predictor variable x as a matrix with n_x = 4 columns, and the input for our one-hot encoded target variable as a matrix with n_y = 3 columns

x = tf.placeholder(tf.float32, shape=[None, n_x])
y = tf.placeholder(tf.float32, shape=[None, n_y])
#

The parameters to be trained are a 4 * 3 weight matrix W (the transpose of the mathematical \(w\); see the note on dimension ordering below) and a 3-dimensional vector b of biases

W = tf.Variable(tf.zeros(shape=[n_x, n_y]))
b = tf.Variable(tf.zeros(shape=[n_y]))   
#

We define the prediction function \(p = \mathit{softmax}(w x + b)\) using the TensorFlow builtins for matrix multiplication and softmax

prediction = tf.nn.softmax(tf.matmul(x, W) + b)
#

The function call tf.matmul(x, W) corresponds to the mathematical expression \(w x\), with a few qualifications:

  • While the variable \(x\) was a single vector, the TensorFlow placeholder x is a matrix containing multiple input vectors.
  • While the variable \(x\) was a column vector, the TensorFlow placeholder x contains input vectors as rows. This order of dimensions follows the machine learning convention.
  • Correspondingly the order of dimensions in the TensorFlow variable W is swapped compared to the variable \(w\),
  • and the order of the parameters in the function call tf.matmul(x, W) is swapped compared to the corresponding expression \(w x\).

Also note that the TensorFlow operations + b and tf.nn.softmax() operate on whole matrices: + b is broadcast to every row, and tf.nn.softmax() is applied to each row of its input matrix separately.
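
To see these conventions at a glance, here is a small NumPy analogue with placeholder values:

import numpy as np

X = np.random.rand(100, 4)   # 100 input vectors as rows, n_x = 4 columns
W = np.zeros((4, 3))         # transposed relative to the mathematical w, which is 3 x 4
b = np.zeros(3)

scores = X @ W + b           # + b is broadcast to each of the 100 rows
print(scores.shape)          # (100, 3)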

We have now defined the prediction as a matrix with 3 columns corresponding to the three different iris species

prediction
#
<tf.Tensor 'Softmax:0' shape=(?, 3) dtype=float32>
#

Defining our cost function using the cross-entropy loss \( \underset{k}{avg} \, H(y^{(k)}, p^{(k)}) = -\underset{k}{avg} \, \sum\limits_{i} y_i^{(k)} \cdot \log p_i^{(k)} \) is slightly more involved than defining the prediction, but it can still be written in one line

cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(prediction), axis=1))
#

We will look at this expression step-by-step.

The function call tf.log(prediction) simply calculates the logarithm for each cell of the prediction matrix.

The matrix of true labels y has the same dimensions as tf.log(prediction) and the multiplication y * tf.log(prediction) is performed element-wise

y * tf.log(prediction)
#
<tf.Tensor 'mul_1:0' shape=(?, 3) dtype=float32>
#

The function call tf.reduce_sum() sums the elements of y * tf.log(prediction) along axis 1 (the axis count starts at 0, so axis 1 runs over the columns), i.e. it calculates the sum over all columns for each row, resulting in a vector with one entry per sample, namely \(\sum_j y_j^{(k)} \cdot \log p_j^{(k)}\), the negative cross-entropy of that sample

tf.reduce_sum(y * tf.log(prediction), axis=1)
#
<tf.Tensor 'Sum_1:0' shape=(?,) dtype=float32>
#

Conceptually this expression just calculates the log of the predicted probability of the true category of each training sample.
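
A small NumPy analogue with two samples (illustrative values) shows this picking behaviour row by row:

import numpy as np

# Two samples: one-hot labels as rows, predicted probabilities as rows.
Y = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])
P = np.array([[0.2, 0.7, 0.1],
              [0.6, 0.3, 0.1]])

print(np.sum(Y * np.log(P), axis=1))   # [log(0.7), log(0.6)] ≈ [-0.357, -0.511]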

The last step in the definition of the cost function is a simple average over the negation of this vector using tf.reduce_mean(...)

cost
#
<tf.Tensor 'Mean:0' shape=() dtype=float32>