Docker CE behind Rancher ROCKS!

Straight to the point!

Launch an Ubuntu instance in AWS. Mine was Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-1022-aws x86_64).

sudo apt-get remove docker docker-engine docker.io
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce=17.06.1~ce-0~ubuntu

Easy, right? Note that you can omit the version to get the latest stable release, but that is not recommended for production environments. It is better to specify exactly which version you want to install rather than leaving it to pull the latest.
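
If you are not sure which versions are available to pin, a quick check like this should help (output formatting varies between repos), and you can confirm what actually got installed afterwards:

apt-cache madison docker-ce        # list the docker-ce versions available from the apt repo
sudo docker version                # confirm the installed client/server version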

Moving on to Rancher!

docker run -d -p 8080:8080 rancher/server:stable
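
A quick sanity check that the container came up (assuming the default 8080 mapping above; the UI can take a minute or two to start answering):

sudo docker ps                     # the rancher/server container should show as Up
curl -I http://localhost:8080      # the Rancher UI responds here once started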


Installing and using Solr 6.1 cloud in AWS EC2 (with some note on upgrading)

Like any company, we also have some legacy code. Our code was using Solr 3 and I was going to upgrade it to the latest (6.1). The upgrade itself is not such a big deal: just fire up a new setup and convert the old schema to the new schema format, which only differs in its XML layout. I am not going through that here, as you can easily grab a sample schema from the latest version and compare it to your own. Once done, you can start the new Solr with your old schema and it will start throwing errors!! But with patience and hard work you can resolve them one by one.

Anyway, the upgrade process is not such a big deal, but working with the new Solr is. Especially if you want to use the cloud version, which uses ZooKeeper to manage the configs, shards, replications, leaders and so on. All that might come your way is a deprecated or missing class, which you can download.

In my case I found this page very useful to find the deprecated classes of Solr 3.6.

Before I jump into Solr Cloud 6.1, you may need to know some concepts (a short command sketch tying them together follows the list):

  1. Collection: A single search index.
  2. Shard: A logical section of a single collection (also called a Slice). Sometimes people talk about a "Shard" in a physical sense (a manifestation of a logical shard). A shard is literally a part of your data: if you have 3 shards, then all your data (documents) is distributed across 3 parts. It also means that if one of the shards is missing, you are in trouble!!
  3. Replica: A physical manifestation of a logical Shard, implemented as a single Lucene index on a SolrCore. A replica is a copy of a shard, so if you have a replication factor of 2 then you will have 2 copies of each shard.
  4. Leader: One Replica of every Shard is designated as the Leader to coordinate indexing for that Shard. The leader is the master node of a shard, so if you have two replicas, the master one is the boss!
  5. SolrCore: Encapsulates a single physical index. One or more SolrCores make up a logical shard (or slice), and shards make up a collection.
  6. Node: A single instance of Solr. A single Solr instance can have multiple SolrCores that can be part of any number of collections.
  7. Cluster: All of the nodes you are using to host SolrCores.
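
Just to tie these terms together, here is a hedged sketch (the collection name and counts are made up; it assumes Solr 6.1 is already running in cloud mode with ZooKeeper):

bin/solr create -c mycollection -shards 3 -replicationFactor 2   # 3 shards x 2 replicas = 6 SolrCores spread over the nodes
bin/solr healthcheck -c mycollection                             # shows the replicas of each shard and which one is the leader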

Next, I will go through installing and using this whole setup.

AI, where to begin?

If you have any interest in AI and related topics, you know the amount of information out there is huge; it drove me crazy reading it all with nowhere to begin!! So I made this summary for people who are lost and looking for a simple place to start. I will keep updating this post as I find new info:

Exporting Cassandra 2.2 to Google BigQuery

So we decided to move 5 years of data from Apache Cassandra to Google BigQuery. The problem was not just transferring the data or the export/import itself; the issue was the very old Cassandra!

After extensive research, we planned the migration as: export the data to CSV, upload it to Google Cloud Storage, and import it into BigQuery.

The pain was the way Cassandra 1.1 deals with a large number of records! There is no pagination, so at some point you are going to run out of something! If I am not mistaken, pagination was introduced in version 2.2.

After all my attempts to upgrade to the latest version (3.4) failed, I decided to try other versions, and luckily version 2.2 worked! By working I mean I was able to follow the upgrade steps to the end and the data was accessible.

I could not get any support for a direct upgrade, and my attempts to simply upgrade to 2.2 also failed, so I had no choice but to upgrade to 2.0 first and then upgrade that to 2.2. Because this is an extremely delicate task, I would rather just forward you to the official website and only then give you the summary. Please make sure you check docs.datastax.com and follow their instructions.

To give an overview, you are going to do these steps (a condensed per-node command sketch follows the list):

  1. Make sure all nodes are stable and there are no dead nodes.
  2. Make a backup (your SSTables, configurations, etc.)
  3. It is very important to successfully upgrade your SSTables before proceeding to the next step. Simply use
    nodetool upgradesstables
  4. Drain the node using
    nodetool drain
  5. Then simply stop the node
  6. Install the new version (I will explain a fresh installation later in this document)
  7. Apply the same configuration as your old Cassandra, start it, and run upgradesstables again (as in step 3) on each node.
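
A condensed per-node sketch of the steps above (package and service handling assumed; always follow the official instructions for your exact versions):

nodetool upgradesstables        # step 3: rewrite SSTables on the current version
nodetool drain                  # step 4: flush memtables and stop accepting writes
service cassandra stop          # step 5
# step 6: install the new version (see the installation notes below)
# step 7: carry over your old cassandra.yaml settings, then:
service cassandra start
nodetool upgradesstables        # rewrite SSTables in the new on-disk format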

Installing Cassandra:

  1. Edit /etc/yum.repos.d/datastax.repo and add:
    [datastax]
    name = DataStax Repo for Apache Cassandra
    baseurl = https://rpm.datastax.com/community
    enabled = 1
    gpgcheck = 0
    
  2. Then install and start the service (a quick check follows below):
    yum install dsc20
    service cassandra start
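
Once it starts, a quick check that the node is up and has joined the ring:

nodetool status                 # the node should be listed as UN (Up/Normal)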

Once you have upgraded to Cassandra 2+, you can export the data to CSV without pagination or crashing issues.

Just for the record, a few commands to get the necessary information about the data structures are as follows:

cqlsh -u username -p password
describe tables;
describe table abcd;
describe schema;

And once we know which tables we want to export, we just use them along with their keyspace. First, add all your commands to one file to create a batch.

vi commands.list

For example a sample command to export one table:

COPY keyspace.tablename TO '/backup/export.csv';
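
COPY also accepts options; for example, to include the column names in the first row (handy when defining the BigQuery schema later):

COPY keyspace.tablename TO '/backup/export.csv' WITH HEADER = TRUE;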

And finally run the commands from the file:

cqlsh -u username -p password -f /backup/commands.list

So by now you have exported the tables to CSV file(s). All you need to do now is upload the files to Google Cloud Storage:

gsutil rsync /backup gs://bucket

Later on you can use the Google API to import the CSV files into Google BigQuery. You may check out the Google documentation for this at cloud.google.com.
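
As a rough sketch of that import (the dataset, table and column names here are made up, and the bq command line tool is only one of the options; drop --skip_leading_rows if you exported without a header row):

bq load --source_format=CSV --skip_leading_rows=1 mydataset.mytable gs://bucket/export.csv col1:STRING,col2:INTEGER,col3:TIMESTAMP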

Linux restricted administration

One of the challenges when adding a user in a Linux environment is when you need to precisely define what they can or cannot do. Some may find configuring authorisation in Linux a bit complicated. In my case I needed to add a user with the privilege to execute some commands with sudo, but without full root access.

The first thing is to create the user we want to customise:
adduser jack

and then create a key pair and later pass the private key (id_rsa) to him:

ssh-keygen -t rsa -f ./id_rsa
mkdir /home/jack/.ssh
chmod 700 /home/jack/.ssh/
cp id_rsa.pub /home/jack/.ssh/authorized_keys
chmod 600 /home/jack/.ssh/authorized_keys
chown -R jack:jack /home/jack/.ssh
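
To confirm the key actually works, try logging in with it from wherever you handed the private key over (the host name is just a placeholder):

ssh -i id_rsa jack@your-server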

Then you should edit the sudoers:

visudo -f /etc/sudoers.d/developers-configs

and add the following content:

User_Alias DEVELOPERS = jack,john

Cmnd_Alias SEELOGS = /usr/bin/tail /var/log/nginx/*.error, /usr/bin/tail /var/log/nginx/*.access, /bin/grep * /var/log/nginx/*.error, /bin/grep * /var/log/nginx/*.access

Cmnd_Alias EDITCONFIGS = /bin/vi /etc/nginx/site.d/*.conf, /usr/bin/nano /etc/nginx/site.d/*.conf, /bin/cat /etc/nginx/site.d/*.conf

Cmnd_Alias RESTARTNGINX = /sbin/service nginx status, /sbin/service php-fpm status, /sbin/service nginx restart, /sbin/service nginx configtest, /sbin/service php-fpm restart

DEVELOPERS ALL = NOPASSWD: SEELOGS,EDITCONFIGS,RESTARTNGINX

We just created DEVELOPERS as an alias for the users, and SEELOGS, EDITCONFIGS and RESTARTNGINX as aliases for the commands those users can execute; then we assigned the SEELOGS, EDITCONFIGS and RESTARTNGINX privileges to DEVELOPERS. If you want users to be prompted for a password, you can remove the "NOPASSWD:" part.
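
A quick way to double-check what jack actually ended up with (run as root):

sudo -l -U jack                 # lists the sudo rules that apply to jack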

Please note that, depending on your OS, you may also need to allow the user in "/etc/ssh/sshd_config", for example with "AllowUsers jack".

Free secure backup in MEGA

http://mega.nz/ is a secure cloud storage service that gives away up to 50GB of free space. Using this service is recommended due to its very tight security: even the user will not be able to gain access to the data if they lose the password (and the recovery key).

Mega provides some scripts for uploading, syncing and so on to their cloud. This is especially useful when it comes to cheap, secure backups of your files. All you need to do is create a free account to begin with, and perhaps purchase a premium account for a better service.

First, it is appropriate to set up the proper locale variables:

localedef -i en_US -f UTF-8 en_US.UTF-8

And then setup the dependencies (Amazon Linux):

yum groupinstall 'Development Tools'
yum install glib* openssl-devel libcurl-devel libproxy gnutls
rpm -i ftp://rpmfind.net/linux/centos/6.7/os/x86_64/Packages/glib-networking-2.28.6.1-2.2.el6.x86_64.rpm
yum update kernel
yum update

OK! Now that we have all the required libraries in place, we can proceed with the megatools installation:

wget https://megatools.megous.com//builds/megatools-1.9.95.tar.gz
tar xvf megatools-1.9.95.tar.gz 
cd megatools-1.9.95
./configure
make

In case you don't want to pass the account username and password with each command, you can just save them in PLAINTEXT in a file (it is safer in some respects, but it carries the risk of unauthorised access to this file):

vi /root/.megarc:
[Login]
Username = your-email
Password = your-password

Just to test, run the mega disk-free script, which gives you some info about the space you have on the cloud:
./megadf

You can find more command details at https://megatools.megous.com/man/megatools.html

And at the end some examples:
./megaput file_to_upload
./megaget /Root/file_to_download
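
For a simple recurring backup, megacopy can mirror a local directory to the cloud (the paths here are only an example):

./megacopy --local /backup --remote /Root/backup    # uploads files that are missing on the remote side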

Monitoring SMTP server (using mailgraph, qshape and postqueue)

Monitoring SMTP mail has never been easier! You can check the number of sent, bounced and rejected emails, and so on (you can see a demo at http://www.stat.ee.ethz.ch/mailgraph.cgi).

Let’s get the source:

wget http://mailgraph.schweikert.ch/pub/mailgraph-1.14.tar.gz
tar xf mailgraph-1.14.tar.gz
mv mailgraph-1.14 /var/mailgraph

and install dependencies:

yum install perl-rrdtool
yum install perl-File-Tail

and run mailgraph once to catch up on the existing logs:

./mailgraph.pl -l /var/log/maillog -c -v

then run it as a service:

cp mailgraph.pl /usr/local/bin/
./mailgraph-init start

Now the CGI file needs to be executed by the web server, so once we have installed the web server we will configure the CGI settings. The main changes are adding an AddHandler for CGI, adding ExecCGI to the "/var/mailgraph" directory, and adding mailgraph.cgi as an index file (like index.php).

yum install httpd
cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.original
vi /etc/httpd/conf/httpd.conf (appendix 01)
service httpd start
chkconfig httpd on
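
Roughly, the additions described above look like this (a sketch of the relevant directives only; the Alias is just one way of exposing the directory, and the syntax is Apache 2.2 style, so adjust for 2.4):

Alias /mailgraph "/var/mailgraph"
<Directory "/var/mailgraph">
    Options +ExecCGI
    AddHandler cgi-script .cgi
    DirectoryIndex mailgraph.cgi
</Directory>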

Now you should be able to browse the web server and see the stats. I have to note that the stats are not updated very frequently. I couldn't figure out exactly how often they get updated, but sometimes I had to run the mailgraph script manually to get the latest stats. At this point we are done with mailgraph.

Now let's take a look at qshape and other CLI tools for monitoring SMTP. qshape and postqueue are two useful tools shipped with the postfix additional scripts. We need to install them first:

yum install postfix-perl-scripts

And here are some examples of using the tool to get stats for the different queues:

qshape hold
qshape deferred
qshape active

postqueue is another tool:

postqueue -p
postqueue -p | egrep -c "^[0-9A-F]{10}[*]"
postqueue -p | egrep -c "^[0-9A-F]{10}[^*]"

We can also use the maillog directly with the assistance of grep:

grep -c "postfix/smtp.*status=sent" /var/log/maillog 
grep -c "postfix/smtp.*status=bounced" /var/log/maillog
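
The same pattern works for other delivery statuses, for example deferred mail (assuming the default postfix log format):

grep -c "postfix/smtp.*status=deferred" /var/log/maillog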
