Sunday, 6 March 2016

Your Own Handwriting - The Real Test

We've trained and tested the simple 3-layer neural network on the MNIST training and test data sets. That worked incredibly well, achieving 97.4% accuracy!

The code, in Python notebook form, is on GitHub:
That's all fine, but it would feel much more real if we got the neural network to work on our own handwriting, or on images we created ourselves.

The following shows six sample images I created:


The 4 and the 5 are my own handwriting, written with different "pens". The 2 is a traditional textbook or newspaper two, but blurred. The 3 is my own handwriting, with bits deliberately taken out to create gaps. The first 6 is a blurry and wobbly character, almost like a reflection in water. The last 6 is the same again, but with a layer of random noise added.

We've created deliberately challenging images for our network. Does it work?

The demonstration code, which trains against the MNIST data set but tests against 28x28 PNG versions of these images, is at:
It works! The following shows a correct result for the damaged 3.


In fact the code works for all the test images except the very noisy one. Yippee!
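
If you want to try your own images, the essence of the test is simple: load the PNG as greyscale, invert it so the ink is bright like the MNIST digits, and rescale the pixel values into the range the network expects. Here is a minimal sketch, assuming Pillow and numpy are available, and that n is the trained 3-layer network with its query() function; the filename is just an example.

    import numpy
    from PIL import Image

    # load the image, force greyscale, and make sure it is 28x28
    img = Image.open("my_own_3.png").convert("L").resize((28, 28))
    img_data = numpy.asarray(img, dtype=float)

    # MNIST digits are bright ink on a dark background, so invert the
    # usual dark-on-light scan, then squash 0..255 into 0.01..1.0
    inputs = (255.0 - img_data.reshape(784)) / 255.0 * 0.99 + 0.01

    # ask the trained network what it thinks the image is
    outputs = n.query(inputs)
    print("network says", numpy.argmax(outputs))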



Neural Networks Work Well Despite Damage - Just Like Human Brains
There is a serious point behind that broken 3. It shows that neural networks, like biological brains, can work quite well even with some damage. Here the damage is to the input data rather than to the network itself, but the effect is analogous: biological brains also keep working when some of their neurons are lost. You could do your own experiments to see how well a network performs when randomly chosen trained neurons are removed.
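
The following is a rough sketch of that experiment, not code from the book. It assumes the trained network n stores its weights in matrices n.wih (input to hidden) and n.who (hidden to output), and silences a random fraction of the hidden neurons by zeroing their weights. You would then re-run the usual test-set scoring to see how the accuracy degrades.

    import numpy

    def damage_hidden_neurons(n, fraction=0.1):
        # pick a random subset of hidden neurons to knock out
        hidden_nodes = n.wih.shape[0]
        knocked_out = numpy.random.choice(hidden_nodes,
                                          int(hidden_nodes * fraction),
                                          replace=False)
        # silencing a neuron means zeroing its incoming and outgoing links
        n.wih[knocked_out, :] = 0.0
        n.who[:, knocked_out] = 0.0

    # for example, remove 10% of the hidden neurons, then re-score
    damage_hidden_neurons(n, fraction=0.1)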

5 comments:

  1. Hi,
    I want to detect the region of interest (the rectangle around each digit) using OpenCV and convert it to the image form that can be used as input for prediction. How should I do that?

    E.g.

    import cv2
    import numpy

    # resize the region of interest to the 28x28 input size
    roi = cv2.resize(roi, (28, 28), interpolation=cv2.INTER_AREA)
    # dilate expects a kernel array, not a bare tuple
    roi = cv2.dilate(roi, numpy.ones((3, 3), numpy.uint8))

    After this, how should I format this region so that it looks like the test images the neural network expects, so that I can make a prediction in this fashion:

    http://imgur.com/a/BGTR0

    Replies
    1. hi - the images in the MNIST dataset are already centred, so you shouldn't need to do this.

      if you are using another source of images, where the digits are somewhere in a bigger picture, then you will have to use different ideas, such as the following:

      * convolutional networks (to make detection location independent)
      * moving windows that scan across the larger image to see if anything matches with a high degree of confidence (see the sketch below)
      * other - search Google for "image recognition" and "object detection" .. here is an overview https://en.wikipedia.org/wiki/Outline_of_object_recognition
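
      Here is a very rough sketch of the moving-window idea. It assumes a trained network n with a query() function, and a larger greyscale image held as a numpy array already scaled like the MNIST inputs - both assumptions, not code from the post.

      import numpy

      def scan_for_digits(image, n, step=4, threshold=0.95):
          # slide a 28x28 window across the image and query the network
          hits = []
          rows, cols = image.shape
          for y in range(0, rows - 28 + 1, step):
              for x in range(0, cols - 28 + 1, step):
                  window = image[y:y+28, x:x+28].reshape(784)
                  outputs = n.query(window)
                  # keep only windows the network is very confident about
                  if numpy.max(outputs) > threshold:
                      hits.append((x, y, int(numpy.argmax(outputs))))
          return hits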

  2. Hi, I want to train a dot punctuation mark, which isn't part of the MNIST dataset, because I want to recognise decimal numbers. I need to train on those images, and for that I want to convert them to MNIST format (dealing with re-centering, normalisation etc.). Can you give me a suggestion for how to do this?

    Replies
    1. Hi Partho - great question!

      The hardest part will be extracting the dot from a longer string. If that is already done, then as you say, you need to centre and normalise.

      Centering is important, as is scaling.
      Centering can be done by finding the "centre of mass" along the horizontal and vertical directions. You can find this in any school maths or physics book .. sum(mass*distance)/sum(mass) where mass here is the intensity of the ink marks...

      https://en.m.wikipedia.org/wiki/Center_of_mass
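
      As a concrete illustration, here is a small sketch of that calculation, assuming the digit is held as a 28x28 numpy array of ink intensities (the array name and sizes are just examples):

      import numpy

      def recentre(img):
          # centre of mass = sum(mass * distance) / sum(mass),
          # where the "mass" is the intensity of the ink at each pixel
          total = img.sum()
          ys, xs = numpy.indices(img.shape)
          cy = (img * ys).sum() / total
          cx = (img * xs).sum() / total
          # shift the ink so its centre of mass sits mid-image
          shift_y = int(round(img.shape[0] / 2 - cy))
          shift_x = int(round(img.shape[1] / 2 - cx))
          return numpy.roll(numpy.roll(img, shift_y, axis=0), shift_x, axis=1)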

      Scaling could be done in several ways .. you might try to find a bounding box and rescale that to sit in the middle 75% of the image (a sketch follows below). Or you might want to recognise that a decimal point is smaller than normal digits, and keep that relative scale.
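
      Here is a hedged sketch of that bounding-box idea, again assuming a 28x28 numpy array of ink intensities and using scipy's zoom for the resampling:

      import numpy
      from scipy.ndimage import zoom

      def rescale_to_box(img, fill_fraction=0.75):
          # find the bounding box of the ink
          ys, xs = numpy.nonzero(img > 0)
          box = img[ys.min():ys.max()+1, xs.min():xs.max()+1]
          # rescale the box so its longest side fills 75% of the image
          scale = (img.shape[0] * fill_fraction) / max(box.shape)
          resized = zoom(box.astype(float), scale)
          # paste the resized box into the centre of a blank image
          out = numpy.zeros_like(img, dtype=float)
          y0 = (img.shape[0] - resized.shape[0]) // 2
          x0 = (img.shape[1] - resized.shape[1]) // 2
          out[y0:y0+resized.shape[0], x0:x0+resized.shape[1]] = resized
          return out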

      Does that help?

      If you need libraries to help you, OpenCV does this sort of image analysis.
