
Model Productionisation - MNIST Handwritten Digits Prediction


Yet another post about MNIST Handwritten Digits Prediction?

Nope. Not this time!!

There are about a hundred tutorials available online on this topic.
Here's a quickie that walks you through the mechanics of the prediction process in TensorFlow for the MNIST dataset, which should get you up and running.

Done with the basics? Head over to

https://github.com/datawrangl3r/mnistProduction

and clone the project.

We are about to deploy an image-prediction RESTful API, powered by the Flask microframework.

The code in the repo is written in Python 2.7; you may also use Python 3, which should be a breeze.


The second step in the repo's instructions powers up the API, serving the endpoint on port 5000.
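For reference, here's a minimal sketch of what such a service might look like, assuming the classic TensorFlow v1 softmax-regression model from the MNIST tutorial, saved as a checkpoint. The checkpoint path and helper names here are illustrative; the repo's actual code may differ.

import numpy as np
import tensorflow as tf
from PIL import Image, ImageOps
from flask import Flask, request, jsonify

app = Flask(__name__)

# Rebuild the softmax-regression graph from the MNIST tutorial:
# y = softmax(xW + b)
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
predict_op = tf.argmax(y, 1)

sess = tf.Session()
# 'model.ckpt' is a hypothetical path; use whatever the training
# script actually saves.
tf.train.Saver().restore(sess, 'model.ckpt')

def prepare_image(image_name):
    # MNIST digits are white-on-black 28x28 images, so a cropped
    # black-on-white digit is grayscaled, resized, inverted, and
    # scaled to [0, 1] before it resembles the training data.
    img = Image.open(image_name).convert('L').resize((28, 28))
    img = ImageOps.invert(img)
    return np.asarray(img, dtype=np.float32).reshape(1, 784) / 255.0

@app.route('/predictint')
def predictint():
    image_name = request.args.get('imageName')
    digit = sess.run(predict_op, feed_dict={x: prepare_image(image_name)})
    return jsonify({'prediction': int(digit[0])})

if __name__ == '__main__':
    app.run(port=5000)

The inversion step matters more than it looks: since MNIST was trained on white digits over a black background, a typical black-on-white crop has to be inverted before the model sees anything familiar.

Time to test-query our API.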

The project directory contains a numerals_sample image, from which one may crop out the required digits. For this demo, we shall look at numba3.jpg, numba5.jpg, numba6.jpg, numba7.jpg, and numba9.jpg, all present in the project directory.

Fire up the browser and hit the following URL to test our model with numba6.jpg:

http://localhost:5000/predictint?imageName=numba6.jpg
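Prefer to script it? Here's a quick test client; a sketch that assumes the requests package is installed and that the endpoint returns JSON shaped like the sketch above:

import requests

# Query the locally running API (assumes it is up on port 5000).
resp = requests.get('http://localhost:5000/predictint',
                    params={'imageName': 'numba6.jpg'})
print(resp.json())  # e.g. {'prediction': 6}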



BAM!!! I got a number 6!!



That was too easy! How about numba7.jpg?



http://localhost:5000/predictint?imageName=numba7.jpg

BooM!!! 7 it is...




How about numba9.jpg?



http://localhost:5000/predictint?imageName=numba9.jpg

I've got a 5??


Well, I hate to admit it: there just can't be a 100% perfect model. Nor is our test dataset perfect.

As a matter of fact, a five does look a little bit like a nine.







This drives us to the fact that the model can be improved by feeding it more and more training data, which substantially increases its accuracy.
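As a rough illustration of that knob, here's the classic TensorFlow v1 softmax-regression training loop from the MNIST tutorial; raising the number of training steps (and feeding in more data) is what pushes the accuracy up. The checkpoint name matches the hypothetical one in the serving sketch above, not necessarily this repo's code.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Download and load the MNIST training data (one-hot labels).
mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)

x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 10])

# Cross-entropy loss and plain gradient descent, as in the tutorial.
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

# More iterations over more data generally mean better accuracy; the
# tutorial's default of 1000 steps is raised to 10000 here.
for _ in range(10000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

tf.train.Saver().save(sess, 'model.ckpt')  # checkpoint consumed by the API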

Key in your comments below if you found this article helpful, or just to give a shout-out!!!
