The code, in Python notebook form, is at github:
The following shows six sample images I created:
The 4 and 5 are my own handwriting using different "pens". The 2 is a traditional textbook or newspaper two, but blurred. The 3 is my own handwriting with bits deliberately taken out to create gaps. The first 6 is a blurry and wobbly character, almost like a reflection in water. The last 6 is the previous one with a layer of random noise added.
We've created deliberately challenging images for our network. Does it work?
The demonstration code to train against the MNIST data set but test against 28x28 PNG versions of these images is at:
In fact the code works for all the test images except the very noisy one. Yippeee!
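To feed a PNG into a network trained on MNIST, its pixels have to be converted into the same input format the network saw during training. The following is a minimal sketch of that conversion, using a synthetic 28x28 array in place of a real loaded image (in practice you would read the PNG's grayscale pixels with an image library first); the inversion and the 0.01 offset are assumptions matching common MNIST preprocessing:

```python
import numpy as np

# Hypothetical 28x28 grayscale image: white background (255) with a dark
# stroke, standing in for a real PNG loaded from disk.
img = np.full((28, 28), 255, dtype=np.uint8)
img[4:24, 13:15] = 20  # a vertical dark stroke, like a handwritten "1"

# MNIST digits are light-on-dark, so invert the PNG's dark-on-light pixels,
# flatten to a 784-value list, and scale into 0.01..1.0 so no input is zero.
inputs = (255.0 - img.reshape(784)) / 255.0 * 0.99 + 0.01

print(inputs.shape)                                  # (784,)
print(inputs.min() >= 0.01 and inputs.max() <= 1.0)  # True
```

The resulting `inputs` array can then be passed to the trained network's query function in place of an MNIST test record.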
Neural Networks Work Well Despite Damage - Just Like Human Brains
There is a serious point behind that broken 3. It shows that neural networks, like biological brains, can work quite well even with some damage. Biological brains keep working when they themselves are damaged; here the damage is to the input data instead, which is analogous. You could do your own experiments to see how well a trained network performs when randomly chosen neurons are removed.
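One way to run that experiment is to zero out the weight rows belonging to a random fraction of hidden nodes and then re-measure accuracy on the test set. A minimal sketch, using random matrices in place of weights actually trained on MNIST, with the layer sizes and the `damage` helper being illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for trained weights: 784 inputs -> 100 hidden nodes -> 10 outputs.
# In a real experiment these would be the matrices learned during training.
w_input_hidden = rng.normal(0.0, 0.5, (100, 784))

def damage(weights, fraction, rng):
    """Simulate dead neurons by zeroing the weight rows of a random
    fraction of hidden nodes, leaving the rest untouched."""
    damaged = weights.copy()
    kill = rng.choice(weights.shape[0],
                      int(fraction * weights.shape[0]),
                      replace=False)
    damaged[kill, :] = 0.0
    return damaged

# Remove 10% of the hidden nodes; the remaining 90 still carry most of the
# learned representation, so accuracy typically degrades gracefully.
w_damaged = damage(w_input_hidden, 0.10, rng)
dead_rows = int(np.sum(np.all(w_damaged == 0.0, axis=1)))
print(dead_rows)  # 10
```

Repeating the accuracy measurement while sweeping `fraction` from 0 up toward 1 gives a degradation curve, which is the interesting result of the experiment.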