
Elasticsearch to MongoDB Migration - MongoES

The following are some of the things developers simply love to hate:
  • The one-last-thing syndrome - This reminds me of the following quote:
  The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.
Tom Cargill, Bell Labs, from the book `Programming Pearls`
  • QA declaring certain undocumented features to be bugs - Seriously, this is traumatic for a developer.
  • Interruptions during coding - Here's an idea. Try talking to developers while they code; chances are, you have less than 10% of their attention.
There are some problems we get used to...

But there are others that just make us lose it.


Talking about ES to MongoDB migration - how hard could that be?

Good Side:
  • JSON documents are common to both.
  • Numerous migration tools to choose from.
Bad Side:
  • The migration can be hideous and can eat up a lot of system resources. Be ready for a system freeze if the migration tool uses a queue.
Ugly Side:
  • The migration can never be resumed from the point of failure. If connectivity goes down mid-migration, the partially transferred collection has to be deleted and the transfer restarted from the beginning.

Alright, nothing to feel bad about there.

Enter, MongoES.

MongoES is a pure Python 3 migration tool that moves documents from an Elasticsearch index into MongoDB collections.

It's robust by design: no queues or message brokers are involved, which means no memory spikes or system freezes.

This is achievable because MongoES tags the documents in the source Elasticsearch index prior to the migration; the tag acts as a checkpoint during the transfer.
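As a rough sketch of the idea (the function and field names below are my own guesses, not MongoES's actual internals), the tagging pass can be expressed as a stream of Elasticsearch bulk-update actions that stamp each document with a sequential numeric id:

```python
# Sketch of the tagging idea (illustrative only, not MongoES's real code):
# stamp every document with a sequential numeric id before migrating.
def tagging_actions(hits, index, start=0):
    """Yield Elasticsearch bulk 'update' actions that add a numeric
    mongoes_id to each document; the id doubles as a migration checkpoint."""
    for seq, hit in enumerate(hits, start=start):
        yield {
            "_op_type": "update",
            "_index": index,
            "_id": hit["_id"],
            "doc": {"mongoes_id": seq},
        }

# Against a live cluster, this would be fed to elasticsearch.helpers.bulk:
#   from elasticsearch import Elasticsearch
#   from elasticsearch.helpers import scan, bulk
#   es = Elasticsearch("http://localhost:9200")
#   bulk(es, tagging_actions(scan(es, index="source_index"), "source_index"))
```

Because the tag is monotonically increasing, a failed run can resume from the highest id already copied instead of starting over.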

Why a new custom id tag when there's already an _id?

Unless documents are explicitly tagged, the _id field of an Elasticsearch document is an auto-generated alphanumeric string used to serialize the document. Such _id values are of little use for migration, since range queries and aggregations can't be run against them.
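A numeric tag, by contrast, makes it trivial to address one migration window at a time with a plain range query. A hypothetical helper (the field name is assumed, not mandated by MongoES):

```python
def batch_query(lo, hi, field="mongoes_id"):
    """Build an Elasticsearch range query selecting documents whose
    custom numeric id falls in [lo, hi) - i.e. one migration batch."""
    return {"query": {"range": {field: {"gte": lo, "lt": hi}}}}
```

Selecting the first batch of 1000 documents is then just `batch_query(0, 1000)`; no such window can be expressed over random alphanumeric _id strings.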

MongoES - How to:
  1. Install all the prerequisites.
  2. Clone the repository from
  3. Edit the mongoes.json file according to your requirements.

  4. Make sure that both the Elasticsearch and MongoDB services are up and running, and fire up the migration by keying in:

  5. Sit back and relax, for we've got you covered! The migration defaults to 1000 documents per transfer.
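Under the hood, a checkpointed batch transfer of this kind could look roughly like the sketch below. This is an assumed structure, not MongoES's actual code; `fetch_batch` and `insert_many` stand in for the real Elasticsearch and PyMongo calls:

```python
def migrate(fetch_batch, insert_many, start=0, batch=1000):
    """Copy documents window by window over the custom id.

    fetch_batch(lo, hi) returns documents whose tag falls in [lo, hi);
    insert_many(docs) writes them to MongoDB. Because every window is
    addressed by the tag, a failed run can resume from the last
    checkpoint instead of restarting from scratch.
    Returns the next id to resume from.
    """
    lo = start
    while True:
        docs = fetch_batch(lo, lo + batch)
        if not docs:
            return lo
        insert_many(docs)
        lo += batch

# In a real run, fetch_batch would issue an Elasticsearch range query on
# the custom id, and insert_many would be PyMongo's Collection.insert_many.
```

If the process dies, restarting with `start` set to the last returned checkpoint skips everything already copied, which is exactly the resumability the queue-based tools lack.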
Happy Wrangling!!! :)

