
Postgres to Mongo Migrator - Batteries Included!!!

DATABASE MIGRATION ACROSS PLATFORMS - Got goosebumps yet?

     Well, long story short: cross-platform database migrations mean sleepless nights, distress, and long workdays fueled by coffee. And what good does it do? We end up writing hours and hours of scripts to achieve the end result, yet they are of one-time use only, which leaves you thinking to yourself: "All this horsepower and no room to gallop?"

Postgres to MongoDB:

    Be it a platform change, organizational growth, legacy code, or microservices of your own that dwell on JSON objects; you may have had to switch from a relational to a NoSQL database. Switching can be tedious, I hear you, and here lies the solution to all your worries.

Behold! Enter Pg2Mongo:


 Pg2Mongo is an open-source migration tool, written in Python 3, which gives you exclusive control over your migrations.

First Steps:

The initial step is to make sure you have access to both the PostgreSQL and MongoDB servers. Upon cloning the repository, make sure you install the requirements for pg2mongo to run.
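Assuming the usual Python project layout (the clone URL below is a placeholder, and a requirements.txt at the repository root is an assumption), the setup boils down to:

    git clone <pg2mongo-repo-url>       # replace with the actual repository URL
    cd pg2mongo
    pip3 install -r requirements.txt    # pulls in the Python dependencies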


For demonstration's sake, let's try migrating the sample dataset provided along with pg2mongo for us to play around with.

Configuration setup:

And now, all we have to do is set up the instructions for the migrator to wrangle. The configuration file is at the location 'pg2mongo/pg2mongo.yml', and it goes as follows:
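Here is a minimal outline of its layout, pieced together from the sections described below; refer to the pg2mongo.yml shipped with the repository for the exact keys and sample values:

    EXTRACTION:
      # connection settings for the source PostgreSQL server
      ...

    COMMIT:
      # connection settings for the target MongoDB server
      ...

    MIGRATION:
      INIT_TABLE: ...       # driving table; one migration pass per row
      INIT_KEYS:            # keys carried over from init_table
        - ...
      SKELETON: ...         # empty dict that becomes the Mongo document
      TABLES_ORDER:         # order in which the TABLES entries run
        - ...
      TABLES:               # per-table conditions and skeleton mappings
        ...
      COLLECTIONS:          # target collection(s) for the finished skeleton
        - ...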

The preliminary sections, EXTRACTION and COMMIT, are self-explanatory, stating the connection settings for the extraction (source) and commit (target) databases. The MIGRATION component is where all the magic happens!

The following sections explain what the individual components are all about:

INIT_TABLE:

The initial table from which data needs to be migrated. This could be a prime table, such as a transactions table whose primary key is referenced by foreign-key constraints in other tables of the PostgreSQL database. For each entry in this table, the linking of the other tables happens while defining the TABLES section.

INIT_KEYS:

Keys of the init_table (aliases can be given using 'as').

SKELETON:

The skeleton is an empty, raw Python dictionary assignment, which is transformed into a MongoDB document upon migration.
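For instance, a skeleton for a billing migration might be declared as below; the structure is hypothetical, and the sample config in the repository defines its own:

    SKELETON: "{'bill': {}, 'line_items': []}"

The nested keys start out empty and get filled in by the TABLES mappings, so each init_table entry ends up as one document with a 'bill' sub-document and a 'line_items' array.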

TABLES_ORDER:
 
The order in which the entries of the TABLES section need to be executed for each entry from INIT_TABLE.

TABLES:

The set of PostgreSQL tables, enlisted along with a condition and the corresponding mapping. In the case of lists inside a dictionary, a list can be mentioned. The mapping is where the association between the skeleton and the table keys is defined. The value assignments are Python-compatible; hence they are defined using '%s' placeholders, and other Python-based variable transformation functions can be applied to them, as sketched below.
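A single hypothetical entry could look like the following; the key names (CONDITION, MAPPING) and the exact '%s' binding convention are illustrative, so verify them against the sample config shipped with the repository:

    TABLES:
      line_items:
        CONDITION: bill_id = %s                    # restrict rows to the current init_table entry
        MAPPING:
          skeleton['bill']['amount']: float(%s)    # '%s' is replaced by the column value
          skeleton['line_items']: "%s"             # a list-valued skeleton key collects one value per row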

COLLECTIONS:

This is where the push of the populated skeleton to the corresponding MongoDB collection takes place.

With all the instructions in place, it's time to wrangle. You may invoke the migration by keying in the following command.
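The entry-point script name below is an assumption, not confirmed by this post; check the repository for the actual command.

    python3 pg2mongo.py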
And off she goes!!

