Deep Learning “Hello World”

What are TensorFlow, Keras, MNIST, and the other concepts related to the machine learning “hello world”? In this article we will walk through a beginner’s deep learning / machine learning “hello world” with some guidance along the way.

From simple neural networks to convolutional networks to sequence models, there are a lot of concepts in deep learning, and it is not necessary to know them all. We can use deep learning libraries to save ourselves from implementing the algorithms and mathematics behind concepts like backpropagation. And if you want to build your own neural network from scratch, a library like NumPy is enough for that too. But most of us want to keep life a little simpler by avoiding as much of the mathematics as possible in deep learning. So do I.

The first exercise, and what many call the “Hello World” of deep learning, is solving MNIST. The dataset contains images of handwritten digits from 0 to 9. I will use Keras, a deep learning library, to build the network. You might not follow everything described below, and that’s not a problem. My aim here is to show how, with very little code, we can build a model with very high accuracy.

To start, we first need to import the dataset and load the data.

from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

Here, we get the MNIST dataset from the Keras library and load the training and test data. train_images and train_labels are what the model will learn from, and we will validate the model with test_images and test_labels.

To have a look at the train dataset:

train_images.shape
(60000, 28, 28)

train_images is a NumPy array that stores 28*28 pixel images of handwritten digits. It contains 60000 such images.

len(train_labels)
60000

train_labels are the labels of the corresponding images.

Similarly, for test dataset:

test_images.shape
(10000, 28, 28)

test_images includes 10000 images to validate the model and test its accuracy.
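To get a feel for the data, we can display one of the training digits. This is an optional sketch, assuming matplotlib is installed; it is not required for the rest of the walkthrough.

import matplotlib.pyplot as plt

# Display the first training image together with its label
plt.imshow(train_images[0], cmap='gray')
plt.title('Label: ' + str(train_labels[0]))
plt.show()

Next, we define the network itself: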

from keras import models
from keras import layers
network = models.Sequential()
network.add(layers.Dense(512, activation='relu', input_shape=(28*28,)))
network.add(layers.Dense(10, activation='softmax'))

Layers are the building blocks of neural networks. Here, the network contains two Dense (fully connected) layers. The second layer is a 10-way softmax layer, which returns an array of 10 probability scores, one for each digit.
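If you want to check the architecture at this point (optional, using the network object defined above), Keras can print the layers and their parameter counts:

network.summary()

This reports roughly 407,050 trainable parameters: 784*512 + 512 = 401,920 in the first Dense layer and 512*10 + 10 = 5,130 in the softmax layer.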

With these layers, the model is ready. Now we need to train it. For this we need a loss function, which measures the performance of the model on the training data and guides how the parameters (weights) are adjusted; an optimizer, through which the model is updated based on the loss; and metrics to monitor the model during training and testing.

network.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

Preprocessing is also required before training the model. The data will be reshaped into the form the model expects and scaled to the [0,1] range.

train_images = train_images.reshape((60000, 28*28))
train_images = train_images.astype('float32')/255

test_images = test_images.reshape((10000, 28*28))
test_images = test_images.astype('float32')/255

With this, train_images is reshaped from a (60000, 28, 28) array to a (60000, 28*28) array (and test_images likewise), and the pixel values, originally in the [0,255] range, are scaled to [0,1].
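A quick sanity check (optional, assuming the arrays were just transformed as above):

print(train_images.shape)                      # (60000, 784)
print(train_images.min(), train_images.max())  # 0.0 1.0

The labels also need to be encoded: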

from keras.utils import to_categorical

train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

With this, the labels are one-hot encoded: each digit label becomes a 10-element vector with a 1 in the position of that digit and 0 everywhere else.
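For example, you can inspect one encoded label (an optional check, assuming the conversion above has run):

print(train_labels[0])
# e.g. [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.] if the original digit was 5

Now the model is ready to be trained.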

network.fit(train_images, train_labels, epochs=5, batch_size=128)
Epoch 1/5
469/469 [==============================] - 2s 4ms/step - loss: 0.0290 - accuracy: 0.9916
Epoch 2/5
469/469 [==============================] - 2s 4ms/step - loss: 0.0219 - accuracy: 0.9935
Epoch 3/5
469/469 [==============================] - 2s 5ms/step - loss: 0.0173 - accuracy: 0.9952
Epoch 4/5
469/469 [==============================] - 2s 5ms/step - loss: 0.0130 - accuracy: 0.9964
Epoch 5/5
469/469 [==============================] - 2s 5ms/step - loss: 0.0105 - accuracy: 0.9969
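If you want to keep these per-epoch numbers, note that fit returns a History object. This is an optional variation of the call above, not a separate training step:

history = network.fit(train_images, train_labels, epochs=5, batch_size=128)
print(history.history['loss'])      # loss per epoch
print(history.history['accuracy'])  # accuracy per epoch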

The model has been trained, and all we need now is to check the model accuracy.

test_loss, test_acc = network.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
313/313 [==============================] - 0s 538us/step - loss: 0.0770 - accuracy: 0.9803
Test accuracy: 0.9803000092506409

The test accuracy came out to around 98.03%.
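You can also run the trained network on individual images. This is an optional sketch, assuming the preprocessed test_images from above:

import numpy as np

predictions = network.predict(test_images[:1])
print(predictions[0])             # 10 softmax probabilities, one per digit
print(np.argmax(predictions[0]))  # the predicted digit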

With such a short piece of code, a model was able to identify the digits with a remarkable accuracy of 98.03%. The main motive of this article was to show how incredible deep learning has become. If you did not understand all of the concepts used to build this model, it’s not an issue. This was just an introduction.

There are a lot more things that I will be sharing, and I hope you will stick around for them. Thanks!

Note: The ideas that I am sharing are not mine but come from the book Deep Learning with Python by Francois Chollet, a great book for learning important concepts in deep learning and Keras. My motto is to share whatever I read and to learn by sharing.
