TensorFlow - step by step


Intro - Official quickstart for beginners

https://www.tensorflow.org/tutorials/quickstart/beginner

Import TensorFlow library and load official MNIST dataset.

import tensorflow as tf
mnist = tf.keras.datasets.mnist

Split the MNIST dataset into training and test sets, and normalize the pixel values to the range [0, 1].

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

The meaning of these values is described in the quote below.

https://conx.readthedocs.io/en/latest/MNIST.html

The MNIST digits are grayscale images, with each pixel represented as a single intensity value in the range 0 (black) to 1 (white). You can think of the whole image as consisting of 784 numbers arranged in a plane of 28 rows and 28 columns.
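To see this concretely, you can inspect the arrays loaded above. The outputs in the comments are what you would expect after the division by 255.

print(x_train.shape)                  # (60000, 28, 28) - 60,000 training images
print(x_train.min(), x_train.max())   # 0.0 1.0 - intensities after normalization
print(x_train[0].reshape(784).shape)  # (784,) - one image flattened into 28*28 values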

Build a model for MNIST.

In the official tutorial,

  1. The imported data come in the form of 28*28 tensors.
  2. Reshape each image into a 1D array (28*28 = 784 elements).
  3. Add a Dense layer consisting of 128 ReLU nodes.
  4. Apply Dropout regularization to the output.
  5. Add a Dense output layer with 10 nodes, one logit per digit class.

The Dropout layer randomly sets input units to 0 with a frequency of rate at each step during training time, which helps prevent overfitting. Inputs not set to 0 are scaled up by 1/(1 - rate) such that the sum over all inputs is unchanged.
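A minimal sketch of this behavior (the zeroed positions are random, so your output will vary):

drop = tf.keras.layers.Dropout(0.2)
ones = tf.ones((1, 10))
# training=True: roughly 20% of the inputs become 0, and the survivors are
# scaled by 1 / (1 - 0.2) = 1.25 so the expected sum stays unchanged.
print(drop(ones, training=True).numpy())
# training=False (inference): Dropout passes the input through unchanged.
print(drop(ones, training=False).numpy())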

Here is the code.

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10)
])
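To check that this matches the steps above, print a summary. The exact layout depends on your Keras version, but the parameter count should work out to 784*128 + 128 for the first Dense layer plus 128*10 + 10 for the output layer, 101,770 in total.

model.summary()
# Flatten  -> (None, 784)
# Dense    -> (None, 128)   100,480 params
# Dropout  -> (None, 128)
# Dense    -> (None, 10)      1,290 params
# Total params: 101,770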

Test the untrained model (just a quick check).

We feed the output to a softmax layer in order to interpret the result as probabilities. Because the weights are randomly initialized, your numbers will differ from the ones shown below.

predictions = model(x_train[:1]).numpy()
tf.nn.softmax(predictions).numpy()

#array([[0.11723053, 0.11712567, 0.12127279, 0.06410404, 0.0771331 ,
#        0.06719016, 0.08481569, 0.16504557, 0.08406606, 0.10201627]],
#      dtype=float32)

Before training, define a loss function. The tutorial uses cross-entropy (SparseCategoricalCrossentropy from the Keras losses library).

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
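As a quick sanity check, this reuses the predictions computed earlier: the untrained model assigns each class a probability near 1/10, so the initial loss should be close to -ln(1/10), roughly 2.3.

# Expect a value in the neighborhood of 2.3 for an untrained model.
loss_fn(y_train[:1], predictions).numpy()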

Train the model

Finally, train it!

model.compile(optimizer='adam',
              loss=loss_fn,
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)

After training, check the accuracy of the trained model.

model.evaluate(x_test, y_test, verbose=2)
#313/313 - 0s - loss: 0.0709 - accuracy: 0.9782

Combine the model with a softmax layer.

probability_model = tf.keras.Sequential([
  model,
  tf.keras.layers.Softmax()
])

Use the model (inference)

Test it!

probability_model(x_test[:1])
#<tf.Tensor: shape=(1, 10), dtype=float32, numpy=
#array([[1.45909428e-06, 2.97187466e-08, 4.26509105e-05, 2.17592157e-03,
#        1.05295390e-11, 7.68994198e-07, 1.96497094e-12, 9.97763157e-01,
#        2.30050478e-06, 1.36131202e-05]], dtype=float32)>

The 8th element (digit 7) has the highest probability (99.776%).
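To turn the probability vector into a predicted digit, take the argmax and compare it with the ground-truth label (a minimal sketch):

import numpy as np

probs = probability_model(x_test[:1]).numpy()
print(np.argmax(probs, axis=1))  # [7] - predicted digit
print(y_test[0])                 # 7  - true label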