Free secure backup in MEGA

http://mega.nz/ is a secure cloud storage service that offers up to 50GB of free space. Using this service is recommended because of its very tight security: even the user cannot gain access to the data if they lose the password (and the recovery key).


Mega provides command-line tools for uploading, syncing, and more to their cloud. This is especially useful for cheap, secure backups of your files. All you need to do is create a free account to begin with, and perhaps purchase a premium account later for better service.

First, it is appropriate to set up the proper locale variables:

localedef -i en_US -f UTF-8 en_US.UTF-8
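
You may also want to export the locale in your current shell so the build tools pick it up; a minimal sketch (adjust to your shell profile):

export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8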

Then install the dependencies (on Amazon Linux):

yum groupinstall 'Development Tools'
yum install glib* openssl-devel libcurl-devel libproxy gnutls
rpm -i ftp://rpmfind.net/linux/centos/6.7/os/x86_64/Packages/glib-networking-2.28.6.1-2.2.el6.x86_64.rpm
yum update kernel
yum update
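
To check that glib is visible to the build system before configuring, you can query pkg-config (assuming it was pulled in with the development tools):

pkg-config --modversion glib-2.0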

OK! Now that we have all the required libraries in place, we can proceed to the megatools installation:

wget https://megatools.megous.com//builds/megatools-1.9.95.tar.gz
tar xvf megatools-1.9.95.tar.gz 
cd megatools-1.9.95
./configure
make
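
Optionally, you can install the tools system-wide so they are available without the ./ prefix; the examples below assume you are still in the build directory:

sudo make install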

In case you don’t want to pass the account username and password with each command, you can save them in PLAINTEXT in a file (this is safer in some respects, but carries the risk of unauthorised access to the file):

vi /root/.megarc
[Login]
Username = your-email
Password = your-password
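
Since the file contains a plaintext password, it is a good idea to restrict its permissions to the owner:

chmod 600 /root/.megarc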

Just to test, run the megadf (mega disk free) tool, which gives you some info about the space you have on the cloud:
./megadf

You can find more details on the commands at https://megatools.megous.com/man/megatools.html

And finally, some examples:
./megaput file_to_upload
./megaget /Root/file_to_download
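
For backups, megals and megacopy are also handy: the former lists remote files, the latter mirrors a local directory to the cloud. A sketch, with illustrative paths:

./megals /Root
./megacopy --local /home/user/backup --remote /Root/backup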

Basics of using Elasticsearch

To see the available indices:

curl 'localhost:9200/_cat/indices?v'
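
The same _cat API can also show cluster health and nodes, which is useful for a quick sanity check:

curl 'localhost:9200/_cat/health?v'
curl 'localhost:9200/_cat/nodes?v'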

Some more examples for inspecting Logstash indices:

curl -XGET 'http://localhost:9200/_mapping'
curl -XGET 'http://localhost:9200/logstash-*/log/_mapping'
curl -XGET 'http://localhost:9200/logstash-2015.01.08/_search?q=_type:logs&pretty=1'
curl -XGET 'http://localhost:9200/logstash-2015.01.08/_search?q=type:syslog&pretty=1'
curl -XGET 'http://localhost:9200/logstash-2015.01.08/syslog/_search?q=type:syslog&pretty=1'
curl -XGET 'http://localhost:9200/logstash-2015.01.12/loadbalancer/_search?q=tags:_grokparsefailure&pretty=1'
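
The q= query-string searches above can also be written as JSON bodies, which scales better to complex queries; a sketch against the same index:

curl -XGET 'http://localhost:9200/logstash-2015.01.08/_search?pretty' -d '{
    "query": { "match": { "type": "syslog" } }
}'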

Deleting indices by name or based on query:

curl -XDELETE 'http://localhost:9200/index_name/'
curl -XDELETE 'http://localhost:9200/logstash-*/_query?q=facility_label:user-level&pretty=1'
curl -XDELETE 'http://localhost:9200/logstash-2015.01.12/loadbalancer/_query?q=tags:_grokparsefailure&pretty=1'
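
Note that delete-by-query via the _query endpoint was removed in Elasticsearch 2.0; if you just want to free resources without losing the data, closing an index is a less destructive alternative:

curl -XPOST 'http://localhost:9200/logstash-2015.01.12/_close'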

Backing up ES: We need a directory that elasticsearch has full access to. I'm actually not sure whether all of this is necessary, but it is one of the actions I took to solve a bad problem!

sudo mkdir /usr/share/elasticsearch/backup/
sudo chmod 777 /usr/share/elasticsearch/backup/
sudo chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/backup/
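
Depending on your Elasticsearch version (1.6 and later), you may also need to whitelist the directory in elasticsearch.yml before a filesystem repository can be registered; a config sketch:

path.repo: ["/usr/share/elasticsearch/backup"]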

Next, register a snapshot repository pointing at that directory:

curl -XPUT 'http://localhost:9200/_snapshot/dopey_backup' -d '{
    "type": "fs",
    "settings": {
        "compress" : true,
        "location": "/usr/share/elasticsearch/backup"
    }
}'
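
You can verify that the repository was registered correctly:

curl -XGET 'http://localhost:9200/_snapshot/dopey_backup?pretty'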

Now take the actual snapshot (the first command), and restore it when needed (the second):

curl -XPUT "localhost:9200/_snapshot/dopey_backup/snapshot_1?wait_for_completion=true"
curl -XPOST "localhost:9200/_snapshot/dopey_backup/snapshot_1/_restore"
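
To list what is stored in the repository, or to monitor the progress of a running snapshot, the snapshot API offers status endpoints; a sketch:

curl -XGET 'http://localhost:9200/_snapshot/dopey_backup/_all?pretty'
curl -XGET 'http://localhost:9200/_snapshot/dopey_backup/snapshot_1/_status?pretty'

Note that a restore will fail if an open index with the same name already exists; close or delete it first.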