
ES Index - S3 Snapshot & Restoration:

The question is: what brings you here? Fed up with all the searches on how to back up and restore specific indices?

Fear not, for your search quest ends here!

After going through dozens of tiny gists and manual pages, here it is. We've done all the heavy lifting for you.

The following tutorial was tested on Elasticsearch v5.4.0.

And before we proceed, remember:


Make sure that the Elasticsearch version of the backed-up cluster/node is less than or equal to the restoring cluster's version.


Unless it's absolutely necessary, avoid the following:

        curl -XDELETE 'http://localhost:9200/nameOfTheIndex'

              - deletes a specific index

And especially not when you are drunk!:

        curl -XDELETE 'http://localhost:9200/_all'

              - deletes all indices (This is where the drunk part comes in..!!)

Step1: Install S3 plugin Support:

        sudo bin/elasticsearch-plugin install repository-s3
        sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install repository-s3

Use whichever matches where your elasticsearch-plugin executable lives. The plugin enables the Elasticsearch instance to communicate with AWS S3 buckets. Restart the node after installing, so the plugin gets loaded.
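To confirm the plugin actually landed before moving on, you can list the installed plugins. A minimal sketch, assuming the same install prefix as above and a systemd-managed service (adjust both to your setup):

```shell
# Verify the S3 repository plugin shows up in the plugin list
# (path assumed to match the /usr/share/elasticsearch layout above).
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin list | grep repository-s3

# The plugin is only loaded at startup, so restart the node:
sudo systemctl restart elasticsearch
```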

Step2: Input the Snapshot registration settings:


URL: PUT http://localhost:9200/_snapshot/logs_backup?verify=false&pretty

                  "type": "s3",
                  "settings": {
                    "bucket": "WWWWWW",
                    "region": "us-east-1",
                    "access_key": "XXXXXX",
                    "secret_key": "YYYYYY"

In the URL:
       - logs_backup : name of the snapshot repository

In the payload JSON:
        - bucket : "WWWWWW" is where you enter the name of the bucket.
        - access_key & secret_key : the values "XXXXXX" and "YYYYYY" are where you key in the access key and secret key for the bucket, as granted by your IAM policies.
        - region : the AWS region where the bucket is hosted (e.g. us-east-1).

This should give a response of '{"acknowledged": true}'.
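Putting the URL and payload together, the registration call can be issued with curl as below. The bucket name and keys are the same placeholders as in the payload above, and the endpoint assumes a local node on port 9200:

```shell
# Register an S3 snapshot repository named logs_backup.
# WWWWWW / XXXXXX / YYYYYY are placeholders -- substitute your own
# bucket name and IAM credentials before running.
curl -XPUT 'http://localhost:9200/_snapshot/logs_backup?verify=false&pretty' \
     -H 'Content-Type: application/json' \
     -d '{
           "type": "s3",
           "settings": {
             "bucket": "WWWWWW",
             "region": "us-east-1",
             "access_key": "XXXXXX",
             "secret_key": "YYYYYY"
           }
         }'
```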

Step3: Cloud-Sync - list all Snapshots:


URL: GET http://localhost:9200/_cat/snapshots/logs_backup?v

In the URL:
       - logs_backup : name of the snapshot repository
Time to sync up the list of snapshots. If all our settings have been registered just fine, we should end up with a table of snapshots: their ids, status, timing, and the indices each one contains.
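As a one-liner, the listing from Step 3 looks like this (again assuming a local node):

```shell
# List every snapshot registered in the logs_backup repository;
# ?v adds the column headers to the _cat output.
curl -XGET 'http://localhost:9200/_cat/snapshots/logs_backup?v'
```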


Step4: Creating a Snapshot:


URL: PUT http://localhost:9200/_snapshot/logs_backup/type_of_the_backup?wait_for_completion=true

                "indices": "logstash-2017.11.21",
                "include_global_state": false,
                "compress": true,
                "encrypt": true

In the URL:
       - logs_backup : name of the snapshot repository
       - type_of_the_backup : the name of this snapshot; could be any string (a date works well)
In the payload JSON:
        - indices : the index which is to be backed up to the S3 bucket. To back up multiple indices under a single restoration point, the indices can be entered in the form of an array.
        - include_global_state : set to 'false' just to make sure there's cross-version compatibility. WARNING: if set to 'true', the snapshot can be restored only to a cluster of the same ES version.
        - compress : enables compression of the index meta files backed up to S3.
        - encrypt : in case extra encryption of the indices is necessary.

Since wait_for_completion=true, the call blocks and the response describes the finished snapshot (look for "state": "SUCCESS").
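The snapshot call from Step 4 as a full curl command; the snapshot name snapshot_2017.11.21 is an example of my own choosing:

```shell
# Take a snapshot of one index into the logs_backup repository.
# The call blocks until the snapshot finishes (wait_for_completion=true).
curl -XPUT 'http://localhost:9200/_snapshot/logs_backup/snapshot_2017.11.21?wait_for_completion=true' \
     -H 'Content-Type: application/json' \
     -d '{
           "indices": "logstash-2017.11.21",
           "include_global_state": false,
           "compress": true,
           "encrypt": true
         }'
```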

Step5: Restoring a Snapshot:


URL: POST http://localhost:9200/_snapshot/name_of_the_backup/index_to_be_restored/_restore

                "ignore_unavailable": true,
                "include_global_state": false

In the URL:
       - name_of_the_backup : name of the snapshot repository (logs_backup in our example)
       - index_to_be_restored : the id of any snapshot listed in Step 3; restoring it restores the indices it contains

In the payload JSON:
        - ignore_unavailable : it's safe to set this to true; it skips anything missing from the snapshot instead of failing the restore.
        - include_global_state : set to 'false' just to make sure there's cross-version compatibility. WARNING: if set to 'true', the snapshot can be restored only to a cluster of the same ES version.

This should give a response of '{"accepted": true}'.
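The restore call from Step 5 as a full curl command, reusing the example snapshot id from Step 4 (substitute one of the ids listed in Step 3). Note that an open index with the same name must be closed or deleted before it can be restored:

```shell
# Restore a snapshot from the logs_backup repository.
# snapshot_2017.11.21 is an example id -- use one from the Step 3 listing.
curl -XPOST 'http://localhost:9200/_snapshot/logs_backup/snapshot_2017.11.21/_restore' \
     -H 'Content-Type: application/json' \
     -d '{
           "ignore_unavailable": true,
           "include_global_state": false
         }'
```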

Et voila! The restoration is complete.

And don't forget to reclaim the space taken by an index you no longer need by safely deleting it. Reuse, Reduce & Recycle :)

Happy Wrangling!!!

