Installing and using Solr 6.1 cloud in AWS EC2 (with some notes on upgrading)

Like any company, we also have some legacy code. Ours was using Solr 3 and I was going to upgrade it to the latest version (6.1). The upgrade itself is not such a big deal: just fire up a new setup and convert the old schema to the new schema format, which differs only in its XML structure. I am not going through that here, as you can easily grab a sample schema from the latest version and compare it with yours. Once done, you can start the new Solr with your old schema and it will start throwing errors!! But with patience and hard work you can resolve them one by one.

Anyway, the upgrade process is not such a big deal, but working with the new Solr is, especially if you want to use the cloud version, which uses ZooKeeper to manage the configs, shards, replication, leaders and so on. All you might encounter along the way are some deprecated or missing classes, which you can download.

In my case I found this page very useful for finding the deprecated classes of Solr 3.6.

Before I jump into SolrCloud 6.1, you may need to know some concepts (a short example follows this list):

  1. Collection: A single search index.
  2. Shard: A logical section of a single collection (also called a Slice). Sometimes people will talk about a “Shard” in a physical sense (a manifestation of a logical shard). A shard is literally a part of your data: if you have 3 shards, then all your data (documents) are distributed across 3 parts. It also means that if one of the shards is missing, you are in trouble!!
  3. Replica: A physical manifestation of a logical Shard, implemented as a single Lucene index on a SolrCore. A replica is a copy of a shard, so if you have a replication factor of 2 then you will have 2 copies of each shard.
  4. Leader: One Replica of every Shard is designated as the Leader to coordinate indexing for that Shard. The leader is the master node of a shard, so if you have two replicas, the leader is the boss!
  5. SolrCore: Encapsulates a single physical index. One or more make up logical shards (or slices) which make up a collection.
  6. Node: A single instance of Solr. A single Solr instance can have multiple SolrCores that can be part of any number of collections.
  7. Cluster: All of the nodes you are using to host SolrCores.
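
For example, creating a collection with 3 shards and a replication factor of 2 is done through the Collections API. A minimal sketch (the collection name and host here are placeholders):

curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=3&replicationFactor=2'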

Next, I will go through installing and using this whole setup.


Exporting Cassandra 2.2 to Google BigQuery

So we decided to move 5 years of data from Apache Cassandra to Google BigQuery. The problem was not just transferring the data or the export/import; the issue was the very old Cassandra!

After extensive research, we planned the migration as follows: export the data to CSV, then upload it to Google Cloud Storage for import into BigQuery.

The pain was the way Cassandra 1.1 deals with a large number of records! There is no pagination, so at some point you are going to run out of something! If I am not mistaken, pagination was introduced in version 2.2.

After all my attempts to upgrade to the latest version (3.4) failed, I decided to try other versions, and luckily version 2.2 worked! By working I mean I was able to follow the upgrade steps to the end and the data was accessible.

I could not get any support for a direct upgrade, and my attempts to simply upgrade straight to 2.2 also failed, so I had no choice but to upgrade to 2.0 first and then upgrade that to 2.2. Because this is an extremely delicate task, I would rather just forward you to the official website and only then give you the summary. Please make sure you check docs.datastax.com and follow their instructions.

To give an overview, you are going to do these steps (a combined sketch follows the list):

  1. Make sure all nodes are stable and there are no dead nodes.
  2. Make a backup (your SSTables, configurations, etc.).
  3. It is very important to successfully upgrade your SSTables before proceeding to the next step. Simply use
    nodetool upgradesstables
  4. Drain the node using
    nodetool drain
  5. Then simply stop the node.
  6. Install the new version (I will explain a fresh installation later in this document).
  7. Apply the same configuration as your old Cassandra, start it, and run upgradesstables again (as in step 3) on each node.
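
Putting steps 3 to 7 together, a rough per-node sketch would look like the following (treat it as an outline to adapt, not a verified script; the package and service names match the install section below):

nodetool upgradesstables   # step 3: upgrade the SSTables first
nodetool drain             # step 4: flush and stop accepting writes
service cassandra stop     # step 5
yum install dsc20          # step 6: install the new version (see below)
# step 7: restore your old configuration, then
service cassandra start
nodetool upgradesstables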

Installing Cassandra:

  1. Edit /etc/yum.repos.d/datastax.repo and add:

    [datastax]
    name = DataStax Repo for Apache Cassandra
    baseurl = https://rpm.datastax.com/community
    enabled = 1
    gpgcheck = 0

  2. Then install and start the service:

    yum install dsc20
    service cassandra start


Once you have upgraded to Cassandra 2+, you can export the data to CSV without the pagination or crashing issues.

Just for the record, a few commands to get the necessary information about the data structure are as follows:

cqlsh -u username -p password
describe tables;
describe table abcd;
describe schema;

And once we know the tables we want to export, we just use them along with their keyspace. First, add all your commands to one file to create a batch.

vi commands.list

For example, a sample command to export one table:

COPY keyspace.tablename TO '/backup/export.csv';
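
Depending on your cqlsh version, COPY also accepts options; for instance, a hedged variant that writes a header row:

COPY keyspace.tablename TO '/backup/export.csv' WITH HEADER = TRUE;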

And finally run the commands from the file:

cqlsh -u username -p password -f /backup/commands.list

So by now you have exported the tables to CSV file(s). All you need to do now is upload the files to Google Cloud Storage:

gsutil rsync /backup gs://bucket

Later on you can use the Google API to import the CSV files into Google BigQuery. You may check out the Google documentation for this at cloud.google.com.
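
As a rough sketch, the bq command line tool can load a CSV straight from Cloud Storage (the dataset, table and schema here are made-up placeholders):

bq load --source_format=CSV mydataset.mytable gs://bucket/export.csv name:STRING,value:INTEGER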

WPA/WPA2 Cracking with GPU in AWS

DISCLAIMER: The information provided on this post is to be used for educational purposes only. The website creator is in no way responsible for any misuse of the information provided. All of the information in this website is meant to help the reader develop a hacker defence attitude in order to prevent the attacks discussed. In no way should you use the information to cause any kind of damage directly or indirectly. You implement the information given at your own risk.

In this post we are going to see how practical it is to perform a brute-force attack on a captured WPA or WPA2 handshake. A couple of years ago WPA/WPA2 was considered secure, but with the current power and cost of cloud computing, anyone with the slightest interest can set up a super-fast server for brute-force attempts at a very cheap price (as low as $0.6 per hour!).

I am going to walk through my experiment and share the details and results with you. There are dozens of tutorials for this out there but this is just my own little experiment.

Brute-forcing a WPA or WPA2 password begins with capturing the 4-way handshake of the target WiFi. I am not going to go into that, as you can find a lot of solutions for it! I can only mention the Kali toolbox, which provides the tools. So we will assume you have the WPA 4-way handshake in a handshake.cap file.
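
As a baseline before throwing GPUs at it, a plain dictionary attack against that capture can be run with aircrack-ng (the wordlist and BSSID below are placeholders):

aircrack-ng -w wordlist.txt -b 00:11:22:33:44:55 handshake.cap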


Deploying SMTP server using Postfix and OpenDKIM

A few days ago we had some issues with Amazon SES and decided to build our own SMTP relay service. I tried the combination of sendmail with dkim-milter, but for some reason I could not make it work, so I started another server from scratch, which did work this time. This post mainly focuses on configuring OpenDKIM, as Postfix is fairly straightforward.

The first thing to do is disable sendmail, which is the default SMTP service on Amazon Linux:

service sendmail stop
chkconfig sendmail off

And then installing Postfix and configuring it:

yum install postfix
cp /etc/postfix/main.cf /etc/postfix/main.cf.original 
vi /etc/postfix/main.cf (appendix 01)

Configuration mainly involves adding the DKIM settings (such as the milter socket info) and modifying the reception restrictions. In the new settings we define a sender_access file containing the list of senders who are allowed to relay through our SMTP service.

cat "DOMAIN.COM OK" >> /etc/postfix/sender_access
postmap /etc/postfix/sender_access

Now we are done with Postfix, so it is better to check by sending a test mail.

service postfix start
chkconfig postfix on
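
A quick way to send that test mail is through the Postfix sendmail binary (the recipient address is a placeholder):

printf 'Subject: Postfix test\n\nHello from the new relay.\n' | /usr/sbin/sendmail.postfix someone@example.com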

The main concern of this post, DKIM, cryptographically validates that the sender really belongs to the domain it claims (i.e. domain.com). The validation process starts with the SMTP server signing the email using its private key; the destination mail server then verifies the signature using the public key obtained from the sender's claimed domain's DNS. Once the signature checks out against the public key, it means the SMTP server belongs to the domain it claims to be from.

To install DKIM we need some development libraries from sendmail and openssl:

yum install sendmail-devel openssl-devel

OpenDKIM is not available in the default repository, so we will add the EPEL repository and then install it:

rpm -Uvh http://mirror.pnl.gov/epel/6/i386/epel-release-6-8.noarch.rpm
yum --disablerepo=* --enablerepo=epel install opendkim

Once the installation is finished, we proceed to the configuration. There are 3 main files to configure: opendkim.conf contains the main configs, such as the locations of the signing table and key table files and the socket address to listen on. The key table file contains the list of keys. The signing table defines which domains should be signed by which key. You will also need to add the trusted IP addresses of senders to /etc/opendkim/TrustedHosts to grant them access to the SMTP service.

cp /etc/opendkim.conf /etc/opendkim.conf.original
vi /etc/opendkim.conf (appendix 02)
vi /etc/opendkim/KeyTable  (appendix 03)
vi /etc/opendkim/SigningTable (appendix 04)

Now we need to generate a key pair (private and public); the public key will be added to the DNS records of the sending domain:

mkdir /etc/opendkim/keys/DOMAIN.COM
opendkim-genkey -D /etc/opendkim/keys/DOMAIN.COM/ -d DOMAIN.COM -s default
mv /etc/opendkim/keys/DOMAIN.COM/default.private /etc/opendkim/keys/DOMAIN.COM/default
chown -R opendkim:opendkim /etc/opendkim/keys/DOMAIN.COM
cat /etc/opendkim/keys/DOMAIN.COM/default.txt

Then start the service:

service opendkim start
chkconfig opendkim on

Finally, you have to add a TXT record in your DNS dashboard. The record name should be default._domainkey.DOMAIN.COM and it should contain something like the following (based on /etc/opendkim/keys/DOMAIN.COM/default.txt):
"v=DKIM1; g=*; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHY7Zl+n3SUldTYRUEU1BErHkKN0Ya52gazp1R7FA7vN5RddPxW/sO9JVRLiWg6iAE4hxBp42YKfxOwEnxPADbBuiELKZ2ddxo2aDFAb9U/lp47k45u5i2T1AlEBeurUbdKh7Nypq4lLMXC2FHhezK33BuYR+3L7jxVj7FATylhwIDAQAB"
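
Once the record has propagated, you can verify it from the command line before sending any signed mail:

dig TXT default._domainkey.DOMAIN.COM +short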

/etc/postfix/main.cf (appendix 01)

smtpd_milters           = inet:127.0.0.1:2525
non_smtpd_milters       = $smtpd_milters
milter_default_action   = accept

smtpd_error_sleep_time = 1s
smtpd_soft_error_limit = 10
smtpd_hard_error_limit = 20

smtpd_recipient_restrictions = 
	 permit_mynetworks
	 check_sender_access hash:/etc/postfix/sender_access
	 reject_unauth_destination

queue_directory = /var/spool/postfix
command_directory = /usr/sbin
daemon_directory = /usr/libexec/postfix
data_directory = /var/lib/postfix
mail_owner = postfix

inet_interfaces = all
inet_protocols = all

mydestination = $myhostname, localhost.$mydomain, localhost
unknown_local_recipient_reject_code = 550

alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases

debug_peer_level = 2
debugger_command =
	 PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin
	 ddd $daemon_directory/$process_name $process_id & sleep 5

sendmail_path = /usr/sbin/sendmail.postfix
newaliases_path = /usr/bin/newaliases.postfix
mailq_path = /usr/bin/mailq.postfix
setgid_group = postdrop
html_directory = no

manpage_directory = /usr/share/man
sample_directory = /usr/share/doc/postfix-2.6.6/samples
readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES

/etc/opendkim.conf (appendix 02)

Canonicalization        relaxed/relaxed
ExternalIgnoreList      refile:/etc/opendkim/TrustedHosts
InternalHosts           refile:/etc/opendkim/TrustedHosts
KeyTable                refile:/etc/opendkim/KeyTable
LogWhy                  Yes
MinimumKeyBits          1024
Mode                    sv
PidFile                 /var/run/opendkim/opendkim.pid
SigningTable            refile:/etc/opendkim/SigningTable
Socket                  inet:2525@127.0.0.1
Syslog                  Yes
SyslogSuccess           Yes
TemporaryDirectory      /var/tmp
UMask                   022
UserID                  opendkim:opendkim

/etc/opendkim/KeyTable (appendix 03)

default._domainkey.DOMAIN.COM:default:/etc/opendkim/keys/DOMAIN.COM/default

/etc/opendkim/SigningTable (appendix 04)

*@DOMAIN.COM default._domainkey.DOMAIN.COM

How I set up WordPress hosting in AWS using nginx+php-fpm

Well, for a long time I have been looking for the perfect WordPress hosting setup. I ended up using the combination of nginx, php-fpm and memcached. The other option is using Apache and PHP; the difference is in the way these two web servers handle requests and use PHP. As far as I understand, nginx deals with PHP in a simpler way through php-fpm, and the way it handles modules and caching boosts performance.

So even though the nginx configuration is like a nightmare, I started giving it a try, and after many challenges I ended up making it work! Obviously, the Apache configuration is much simpler, but if you need to satisfy millions of users then nginx can be the better option.

In my experiment I used an EC2 t2.medium instance (just for 5 hosts) with the Amazon Linux AMI as the OS. Now we are going to install the services and modules before starting the configuration.

First we install the following and create a cache directory for the nginx FastCGI cache:

yum install nginx
yum install php56-fpm
mkdir -p /var/cache/nginx/
yum install php56-mysqlnd #if needed
yum install memcached #if needed

(I assume your MySQL is hosted on another server.)

Then we create the root directory for the websites, with a simple default HTML file. The nginx process is the owner, with appropriate permissions, because no one is supposed to change it except us!

mkdir -p /var/nginx/sites/default
echo 'You should not be here!' > /var/nginx/sites/default/index.html
chown -R nginx:nginx /var/nginx/sites/default
chmod -R 701 /var/nginx/sites/default 

Then we make the website root directory with the right owner and permissions (the abc-ftp group contains all FTP users for that site, and only they are able to access this directory).

mkdir -p /var/nginx/sites/abcd.com
chown -R nginx:abc-ftp /var/nginx/sites/abcd.com
chmod -R 711 /var/nginx/sites/abcd.com 

Appendices 1 and 3 should be placed in the proper directories, and the appendix 2 template should be used for adding additional websites to host.
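
The appendices are not reproduced here, but as a rough sketch of what the per-site template (appendix 2) might look like, the following writes a minimal server block that hands PHP requests to php-fpm (the php-fpm socket address and paths are assumptions; adjust them to your setup):

cat > /etc/nginx/conf.d/abcd.com.conf <<'EOF'
server {
    listen 80;
    server_name abcd.com www.abcd.com;
    root /var/nginx/sites/abcd.com;
    index index.php index.html;

    # WordPress permalinks: fall back to index.php
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # pass PHP scripts to php-fpm
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
EOF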

And finally test the configuration: service nginx configtest

Now it’s time to start the engine! service nginx start; service php-fpm start;

Now copy your WordPress source into the /var/nginx/sites/abcd.com/ folder, and you should be able to browse your website(s); just use the wizard to set up the database connection.

Once you have entered the WordPress admin panel you can install the W3 Total Cache plugin, which can be configured to use memcached for a boost in performance. After all, performance is mostly about caching, so maybe in another post I will explain more about WordPress caching options.

Please remember to change the user and group in the /etc/php-fpm.d/www.conf file:
user = nginx
group = nginx


Adding Network Address Translation (NAT) to Amazon private VPC

Assuming that the VPC is ready and there is one public subnet and one private subnet.

Just add an instance (I used Amazon Linux) in the public subnet and allow all incoming/outgoing traffic. It is important to disable the source/destination check on that instance (right-click on the EC2 instance and you will see the option).

Next you need to SSH into the NAT server and run the following commands:

sudo sysctl -q -w net.ipv4.ip_forward=1 net.ipv4.conf.eth0.send_redirects=0
sudo iptables -t nat -A POSTROUTING -o eth0 -s 10.0.0.0/16 -j MASQUERADE

And then check that they are all set:

sudo iptables -n -t nat -L POSTROUTING
sysctl net.ipv4.ip_forward
sysctl net.ipv4.conf.eth0.send_redirects
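
Note that neither the sysctl values nor the iptables rule survive a reboot on their own. One way to persist them on Amazon Linux (a sketch, adjust to taste):

echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
echo 'net.ipv4.conf.eth0.send_redirects = 0' >> /etc/sysctl.conf
service iptables save   # writes the NAT rule to /etc/sysconfig/iptables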

In the end, go back to the AWS console, go to the VPC service and select the route table that is associated with the private subnet. Then change the default route (0.0.0.0/0) to point to the NAT instance.
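
If you prefer the AWS CLI over the console, the same route change looks roughly like this (the route table and instance IDs are placeholders):

aws ec2 replace-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --instance-id i-xxxxxxxx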

Now you are good to go! All instances in your private subnet have internet access now.

Extending Linux physical partition in AWS EC2

Once I ran into the problem that an old server migrated to AWS had used 100% of its available storage. Lucky me, it was in AWS EC2, so I was able to simply create a snapshot of the EBS volume and then create another volume from the snapshot with twice the size of the old one.

*Note that data loss was intolerable!

Up to here it was easy, and I thought it was going to stay that way using the resize2fs tool. But the tool simply said that the partition was already using the full disk size, while there was unused space there! So after some searching I found the article http://litwol.com/content/fdisk-resizegrow-physical-partition-without-losing-data-linodecom and adapted it to my case.

The difference in my case was that the server partition was a DOS partition and not an ext* partition. This is what I did to fix it:

List the disks and write down the “Start” and “Id” values for /dev/xvda1 (these two pieces of data are extremely important for successfully finishing the task). In my case Start was 63 and Id was 8e.

fdisk -l

And then delete the partition and create a new, larger one with the exact same start point:

fdisk -c=dos -u=sectors /dev/xvdc

(My responses are the values after each prompt.)

(fdisk) Command (m for help): d
(fdisk) Partition number (1-4): 1
(fdisk) Command (m for help): n
(fdisk) Partition type:
p primary
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (63-40558591, default 63): 63
Last sector, +sectors or +size{K,M,G} (63-40558591, default 40558591): 40558591
(fdisk) Command (m for help): t
Partition number (1-4): 1
Hex Code (type L to list codes): 8e
(fdisk) Command (m for help): w

Now we use resize2fs to finalize everything:

resize2fs /dev/xvdc1
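
If resize2fs complains at this point, the kernel may still be holding the old partition table, and resize2fs wants a clean filesystem check before an offline resize. With the volume unmounted, something like this should help (my own addition, not from the original article):

partprobe /dev/xvdc    # ask the kernel to re-read the partition table
e2fsck -f /dev/xvdc1   # force a filesystem check before resizing
resize2fs /dev/xvdc1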

At the end you can check the storage device with “lsblk” or “fdisk -l”.