Docker CE behind Rancher ROCKS!

Straight to the point!

Launch an Ubuntu instance in AWS. Mine was Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-1022-aws x86_64).

sudo apt-get remove docker docker-engine docker.io
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce=17.06.1~ce-0~ubuntu

Easy, right? Note that you can drop the version pin to get the latest stable release, but that is not recommended for production environments. You are better off specifying exactly which version you want to install rather than leaving it to install the latest.
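To see which versions the repository actually offers before pinning one, you can list them first:

# list the docker-ce versions available from the configured repositories
apt-cache madison docker-ce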

Moving on to Rancher!

docker run -d -p 8080:8080 rancher/server:stable
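A quick way to confirm the server came up (assuming the default port mapping above; it can take a minute or two to start):

docker ps --filter "ancestor=rancher/server:stable"   # the container should show as Up
curl -I http://localhost:8080                         # the UI should answer over HTTP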



Installing and using Solr 6.1 cloud in AWS EC2 (with some notes on upgrading)

Like any company, we also have some legacy code. Ours was using Solr 3 and I was going to upgrade it to the latest (6.1). The upgrade itself is not such a big deal: just fire up a new setup and convert the old schema to the new schema format, which only differs in its XML layout. I am not going through that here, as you can easily grab a sample schema from the latest version and compare it to yours. Once done, you can start the new Solr with your converted schema and it will start giving errors!! But with patience and hard work you can resolve them one by one.

Anyway, the upgrade process is not such a big deal, but working with the new Solr is. Especially if you want to use the cloud version, which uses ZooKeeper to manage the configs, shards, replication, leaders, and so on. Most of what will come your way is some deprecated or missing class, which you can track down and download.

In my case I found this page very useful for finding the deprecated classes of Solr 3.6.

Before I jump into SolrCloud 6.1, you may need to know some concepts:

  1. Collection: A single search index.
  2. Shard: A logical section of a single collection (also called a Slice). Sometimes people use "shard" in a physical sense (a manifestation of a logical shard). Shards are literally the parts of your data: if you have 3 shards, then all your data (documents) is distributed across 3 parts. It also means that if one of the shards is missing, you are in trouble!!
  3. Replica: A physical manifestation of a logical shard, implemented as a single Lucene index on a SolrCore. A replica is a copy of a shard, so with a replication factor of 2 you will have 2 copies of each shard.
  4. Leader: One replica of every shard is designated as the leader to coordinate indexing for that shard. The leader is the master node of a shard, so if you have two replicas, the leader is the boss!
  5. SolrCore: Encapsulates a single physical index. One or more make up logical shards (or slices) which make up a collection.
  6. Node: A single instance of Solr. A single Solr instance can have multiple SolrCores that can be part of any number of collections.
  7. Cluster: All of the nodes you are using to host SolrCores.

Next, I will go through installing and using this whole setup.
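To make these concepts concrete, here is a minimal sketch (the collection name and ZooKeeper hosts are placeholders, and the shard/replica counts assume your cluster has enough nodes to host them):

# start a Solr 6 node in cloud mode, pointing it at your ZooKeeper ensemble
bin/solr start -cloud -z zk1:2181,zk2:2181,zk3:2181
# create a collection with 3 shards, each replicated twice (6 cores in total)
bin/solr create -c mycollection -shards 3 -replicationFactor 2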


AI, where to begin?

If you have any interest in AI and related topics, you will know the amount of information out there is huge; it drove me crazy reading all the info with nowhere to begin!! So I made this summary for people who are lost and looking for a simple beginning. I will keep updating this post if I find new info:


Exporting Cassandra 2.2 to Google BigQuery

So we decided to move 5 years of data from Apache Cassandra to Google BigQuery. The problem was not just transferring the data or doing an export/import; the issue was the very old Cassandra!

After extensive research, we planned the migration: export the data to CSV, then upload it to Google Cloud Storage for importing into BigQuery.

The pain was the way Cassandra 1.1 deals with large numbers of records! There is no pagination, so at some point you are going to run out of something! If I am not mistaken, pagination was introduced in version 2.2.

After all my attempts to upgrade to the latest version (3.4) failed, I decided to try other versions, and luckily version 2.2 worked! By working I mean I was able to follow the upgrade steps to the end and the data was accessible.

I could not find any support for a direct upgrade, and my attempts to upgrade straight to 2.2 also failed, so I had no choice but to upgrade to 2.0 first and then upgrade that to 2.2. Because this is an extremely delicate task, I would rather just forward you to the official website and only then give you the summary. Please make sure you check docs.datastax.com and follow their instructions.

To give an overview, you are going to go through these steps (a rough per-node sketch follows the list):

  1. Make sure all nodes are stable and there are no dead nodes.
  2. Make backups (your SSTables, configuration files, etc.).
  3. It is very important to successfully upgrade your SSTables before proceeding to the next step. Simply use
    nodetool upgradesstables
  4. Drain the node using
    nodetool drain
  5. Then simply stop the node.
  6. Install the new version (I will explain a fresh installation later in this document).
  7. Apply the same config as your old Cassandra, start it, and run upgradesstables again (as in step 3) on each node.
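Put together, the per-node routine looks roughly like this (a sketch only; package names, service commands, and paths may differ in your environment, so follow docs.datastax.com for your exact versions):

nodetool upgradesstables      # step 3: rewrite SSTables to the current format
nodetool drain                # step 4: flush memtables and stop accepting writes
sudo service cassandra stop   # step 5
# step 6: install the new version (see the installation notes below)
# step 7: carry over your old configuration, then:
sudo service cassandra start
nodetool upgradesstables      # upgrade the SSTables again on the new version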

Installing Cassandra:

  1. Edit /etc/yum.repos.d/datastax.repo:

    [datastax]
    name = DataStax Repo for Apache Cassandra
    baseurl = https://rpm.datastax.com/community
    enabled = 1
    gpgcheck = 0

  2. Then install and start the service:

    yum install dsc20
    service cassandra start

Once you have upgraded to Cassandra 2+, you can export the data to CSV without pagination or crashing issues.

Just for the record, a few commands to get the necessary information about the data structure are as follows:

cqlsh -u username -p password
describe tables;
describe table abcd;
describe schema;

And once we know the tables we want to export, we just reference them along with their keyspace. First, add all your commands to one file to create a batch:

vi commands.list

For example, a sample command to export one table:

COPY keyspace.tablename TO '/backup/export.csv';
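Depending on your cqlsh version, COPY also accepts options; for example, to include a header row in the CSV (check that your version supports this):

COPY keyspace.tablename TO '/backup/export.csv' WITH HEADER = TRUE;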

And finally run the commands from the file:

cqlsh -u username -p password -f /backup/commands.list

So by now, you have exported the tables to CSV file(s). All you need to do now is upload the files to Google Cloud Storage:

gsutil rsync /backup gs://bucket

Later on, you can use the Google API to import the CSV files into Google BigQuery. You can check out the Google documentation for this at cloud.google.com.
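For instance, if you have the Cloud SDK installed, a minimal sketch with the bq command-line tool could look like the following (the dataset, table, and schema here are just placeholders for your own):

# load a CSV from Cloud Storage into a BigQuery table (placeholder names/schema)
bq load --source_format=CSV \
    mydataset.mytable \
    gs://bucket/export.csv \
    id:STRING,created_at:TIMESTAMP,value:FLOAT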

Pastejacking: what if what you paste is not what you copied!

Those little JavaScript snippets on websites that no one ever checks can push notifications and get your geolocation with your permission; they can also store files in your cache, open windows, log keystrokes, follow your mouse movements, and override your clipboard without your permission!

Well, the issue here is that you cannot be sure of what you have in your clipboard! I can think of 2 scenarios where this can be a security issue:

  • When a normal user copies content from a website and pastes it directly into a vulnerable piece of software (Microsoft Word?!), and the copied content simply triggers the vulnerability.
  • When an admin copies and pastes some command directly from a website (tutorials?!) into their terminal! This is the creepy one because, depending on the privileges of the admin (duh!!), it can download and execute scripts; and then, to make things look OK, perform a cleanup and probably do what it was supposed to do!!

If you are thinking of disabling JavaScript, note that CSS (Cascading Style Sheets) can also be used to hide content among what you are copying! Either way, you cannot be sure what you are actually copying!

The solution is not that difficult though: just be aware of what you paste and where you paste it, and think twice before you paste!! Perhaps just paste into a notepad first, or get a clipboard manager!

P.S.: You can find a simple demonstration here:

GitHub – dxa4481/Pastejacking: A demo of overriding what’s in a person’s clipboard

VirusTotal and changes in endpoint security

As you may know, VirusTotal is a Google-owned company with huge resources where you can simply check a file against multiple AntiVirus (AV) engines and get the results.

I recently came across an announcement from VirusTotal (VT) that probably affects many endpoint security companies. The fact is that VT provides a rich API that enables almost anyone to build their own AV. This could easily be misused by startup endpoint security companies that simply did not have a proper engine but, thanks to VT (and the hard work of all the powerful AV engines in VT), could still get a very good detection rate.

During my experience with AV engines, it always bothered me how easy it is to fool everyone without even having a proper engine! Well, with the recent announcement I think this issue is resolved, as it strictly forces AV companies to share their engines with VT if they are going to use the community's results.

Perhaps we can expect some of the wrongly praised companies to go down, since they are no longer able to access VT's results and most probably have no presentable AV engine to share with the community in return!

Do we take security seriously?

No! As a matter of fact, no one does, and that is why even the biggest enterprises get hacked at least once! Security is like a submarine: even the smallest hole can let the water in.

I just went through a very interesting attack. It is interesting not because of any complicated technique but because of the target! This is actually old news, but it is interesting to read the details of how Hacking Team got hacked!

You can find it here: https://pastebin.com/raw/0SNSvyjJ

What did the target do wrong? I would say not taking security seriously. This is a little reminder that security is a culture, and no matter how much we know about it, we may still get hacked (or, in this case, assist the hack) through our rookie mistakes. You can go through the write-up and see that the target's main defence was their network edge; once the hacker got past it, no more serious security existed!