Installing and using Solr 6.1 cloud in AWS EC2 (with some notes on upgrading)

Like any company, we also have some legacy code. Ours was using Solr 3 and I was going to upgrade it to the latest version (6.1). The upgrade itself is not such a big deal: just fire up a new setup and convert the old schema to the new schema format, which only differs in the XML layout. I am not going through that here, as you can easily get a sample schema from the latest version and compare it to your own schema. Once done, you can start the new Solr with your old schema and it will start giving errors!! But with patience and hard work you can resolve them one by one.

Anyway, the upgrade process is not such a big deal, but working with the new Solr is, especially if you want to use the cloud version, which uses ZooKeeper to manage the configs, shards, replications, leaders and so on. The most you might come across along the way is a deprecated class or a missing class, which you can download.

In my case I found this page very useful for finding the deprecated classes of Solr 3.6.

Before I jump into Solr cloud 6.1, you may need to know some concepts:

  1. Collection: A single search index.
  2. Shard: A logical section of a single collection (also called a Slice). Sometimes people talk about a “Shard” in a physical sense (a manifestation of a logical shard). A shard is literally a part of your data: if you have 3 shards, then all your data (documents) is distributed across those 3 parts. It also means that if one of the shards is missing, you are in trouble!!
  3. Replica: A physical manifestation of a logical Shard, implemented as a single Lucene index on a SolrCore. A replica is a copy of a shard, so if you have a replication factor of 2 then you will have 2 copies of each shard.
  4. Leader: One Replica of every Shard is designated as the Leader to coordinate indexing for that Shard. The leader is the master node of a shard, so if you have two replicas, the leader is the boss!
  5. SolrCore: Encapsulates a single physical index. One or more make up logical shards (or slices), which make up a collection.
  6. Node: A single instance of Solr. A single Solr instance can have multiple SolrCores that can be part of any number of collections.
  7. Cluster: All of the nodes you are using to host SolrCores.
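
To make these concepts concrete, here is roughly how they come together when creating a collection in Solr 6.1 (a sketch; the collection name, config name and host are placeholders):

# create a collection with 3 shards and a replication factor of 2 (i.e. 6 SolrCores spread over the cluster's nodes)
bin/solr create -c mycollection -shards 3 -replicationFactor 2
# or via the Collections API, using a config set already uploaded to ZooKeeper:
curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=3&replicationFactor=2&collection.configName=myconfig'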

Next, I will go through installing and using this whole setup.



WPA/WPA2 Cracking with GPU in AWS

DISCLAIMER: The information provided on this post is to be used for educational purposes only. The website creator is in no way responsible for any misuse of the information provided. All of the information in this website is meant to help the reader develop a hacker defence attitude in order to prevent the attacks discussed. In no way should you use the information to cause any kind of damage directly or indirectly. You implement the information given at your own risk.

In this post we are going to see how practical it is to perform a brute-force attack on a captured WPA or WPA2 handshake. A couple of years ago WPA/WPA2 was considered secure, but with the current power and cost of cloud computing, anyone with the slightest interest can set up a super fast server for brute-force attempts at a very cheap price (as low as $0.6 per hour!).

I am going to walk through my experiment and share the details and results with you. There are dozens of tutorials for this out there but this is just my own little experiment.

Brute forcing a WPA or WPA2 password begins with capturing the 4-way handshake of the target WiFi. I am not going to go into that, as you can find a lot of solutions for it! I can only mention the Kali toolbox, which provides you with the tools. So we will assume you have the WPA 4-way handshake in a handshake.cap file.
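
Once you have the capture, a dictionary attack against it looks roughly like this (a minimal sketch; wordlist.txt and the ESSID are placeholders, handshake.cap is the capture mentioned above):

aircrack-ng -w wordlist.txt -e TargetESSID handshake.cap
# for GPU cracking (the point of the AWS setup), first convert the capture to hashcat's WPA format
# (the conversion tool depends on your hashcat version) and then run mode 2500:
hashcat -m 2500 handshake.hccapx wordlist.txt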


Free secure backup in MEGA

http://mega.nz/ is a secure cloud storage service that gives away up to 50GB of free space. Using this service is recommended due to its very tight security: even the user will not be able to gain access to the data if they lose the password (and lose the recovery key).

 

Mega provides some tools for uploading, syncing and so on to their cloud. This is especially useful when it comes to cheap, secure backup of your files. All you need to do is create a free account to begin with, and perhaps purchase a premium account for a better service.

First, it is appropriate to set up proper locale variables:

localedef -i en_US -f UTF-8 en_US.UTF-8

And then set up the dependencies (Amazon Linux):

yum groupinstall 'Development Tools'
yum install glib* openssl-devel libcurl-devel libproxy gnutls
rpm -i ftp://rpmfind.net/linux/centos/6.7/os/x86_64/Packages/glib-networking-2.28.6.1-2.2.el6.x86_64.rpm
yum update kernel
yum update

OK! Now that we have all the required libraries in place, we can proceed to the megatools installation:

wget https://megatools.megous.com//builds/megatools-1.9.95.tar.gz
tar xvf megatools-1.9.95.tar.gz 
cd megatools-1.9.95
./configure
make

In case you don’t want to pass the account username and password with each command, we can just save them in PLAINTEXT in a file (it is safer in some respects, but it carries the risk of unauthorised access to this file):

vi /root/.megarc:
[Login]
Username = your-email
Password = your-password
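
Since the credentials sit there in plaintext, it is worth at least restricting the file permissions so only root can read it:

chmod 600 /root/.megarc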

Just to test, run the mega disk free tool, which gives you some info about the space you have on the cloud:
./megadf

You can find some more command details at https://megatools.megous.com/man/megatools.html

And at the end, some examples:
./megaput file_to_upload
./megaget /Root/file_to_download
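
If you want to use this for regular backups, megacopy (also part of megatools) can sync a whole directory to the cloud. A cron entry like the following is one option (a sketch; the paths are placeholders and the binary path assumes the build directory from above):

# every night at 02:00, upload anything under /var/backups that is missing on the MEGA side
0 2 * * * /root/megatools-1.9.95/megacopy --local /var/backups --remote /Root/backups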

Monitoring an SMTP server (using mailgraph, qshape and postqueue)

Monitoring SMTP mail has never been easier! You can check the number of sent emails, bounced emails, rejected emails and so on (you can see a demo at http://www.stat.ee.ethz.ch/mailgraph.cgi).

Let’s get the source:

wget http://mailgraph.schweikert.ch/pub/mailgraph-1.14.tar.gz
tar xf mailgraph-1.14.tar.gz
mv mailgraph-1.14 /var/mailgraph

and install dependencies:

yum install perl-rrdtool
yum install perl-File-Tail

and run mailgraph once to fetch the previous logs:

./mailgraph.pl -l /var/log/maillog -c -v

then run it as a service:

cp mailgraph.pl /usr/local/bin/
./mailgraph-init start

Now the CGI file needs to be executed by the web server, so once we have installed the web server we will configure the CGI settings. The main changes are adding an AddHandler directive, adding ExecCGI to the “/var/mailgraph” directory, and adding mailgraph.cgi as an index file (like index.php).

yum install httpd
cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.original
vi /etc/httpd/conf/httpd.conf (appendix 01)
service httpd start
chkconfig httpd on
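
The appendix itself is not shown here, but the changes described above boil down to roughly the following (my own sketch; the syntax targets Apache 2.2, adjust for your version):

DocumentRoot "/var/mailgraph"
<Directory "/var/mailgraph">
    Options +ExecCGI
    AddHandler cgi-script .cgi
    DirectoryIndex mailgraph.cgi
    Order allow,deny
    Allow from all
</Directory>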

Now you should be able to browse to the web server and see the stats. I have to note that the stats do not get updated very frequently. I couldn't figure out exactly how often they are updated, and sometimes I had to run the mailgraph script manually to get the latest stats. At this point we are done with mailgraph.

Now let's take a look at qshape and other CLI tools for monitoring SMTP. qshape and postqueue are two useful tools found in the postfix additional scripts. We need to install them first:

yum install postfix-perl-scripts

And here are some examples of using the tool to get the stats of the different queues:

qshape hold
qshape deferred
qshape active

postqueue is another tool:

postqueue -p
postqueue -p | egrep -c "^[0-9A-F]{10}[*]"     # count messages currently in the active queue (queue IDs flagged with *)
postqueue -p | egrep -c "^[0-9A-F]{10}[^*]"    # count messages not in the active queue (deferred, held, etc.)

We can also use the maillog directly with the assistance of grep:

grep -c "postfix/smtp.*status=sent" /var/log/maillog 
grep -c "postfix/smtp.*status=bounced" /var/log/maillog


The Linux Cheatsheet

FIX CHANGE LOCALE ERROR:
localedef -i en_US -f UTF-8 en_US.UTF-8
vi /etc/profile.d/locale.sh
export LANG=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
export LC_COLLATE=C
export LC_CTYPE=en_US.UTF-8
source /etc/profile.d/locale.sh

MANUAL LOG ROTATION
touch maillog2; chmod `stat -c %a maillog` maillog2; chown `stat -c %u maillog` maillog2;
mv maillog maillog-$(date +%Y%m%d); mv maillog2 maillog; service rsyslog restart;
screen -d -m bzip2 maillog-$(date +%Y%m%d);

CHANGE HOSTNAME (CENTOS7):
hostnamectl set-hostname voyager123

REMOVE YUM REPO:
yum-config-manager --disable repository-id-or-name

USING ANTI VIRUS SOLUTION:
yum --enablerepo=epel install clamav
freshclam
clamscan --infected --remove --recursive /home

TO MOVE THE 10 OLDEST FILES INTO A DIRECTORY:
mv `ls -1tr | head -n 10` archive/

USING CLI and CURL TO GET PUBLIC IP:
curl http://ip4.me 2>/dev/null | sed -e 's#<[^>]*>##g' | grep '^[0-9]'

USING CLI AND CURL TO GET EC2 INFO:
curl http://169.254.169.254/latest/meta-data/

FIND TOP 10 LARGEST DIRECTORIES:
find . -type d -print0 | xargs -0 du | sort -n | tail -10 | cut -f2 | xargs -I{} du -sh {}

MONITORING TOP 5 PROCESS:
ps -eo pid,pcpu,args --no-headers | sort -nrk 2 | head -n 5
ps -Ao pid,pcpu,args --sort=-pcpu | head -n 5
top -b -n 1 | head -n 12 | tail -n 5

MONITOR SPECIFIC PROCESS:
top -b -n 10 | grep something

FIND WHICH PACKAGE PROVIDE A FILE OR FEATURE:
yum provides '*File/Tail.pm'

PREPARE FTP AND USERS:
yum install vsftpd
vi /etc/vsftpd/vsftpd.conf
local_enable=YES
anonymous_enable=NO
service vsftpd restart
groupadd ftpusers
adduser -g ftpusers -d /var/www/html/ws1 -s /sbin/nologin -c commentsgoeshere -p password ftp.user1
chown -R apache:ftp.user1 /var/www/html/ws1
chmod 775 -R /var/www/html/ws1

INSTALL AND ENABLE EPEL REPOSITORY:
rpm -Uvh http://epel.mirror.net.in/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum --disablerepo="*" --enablerepo="epel" list available

INSTALL RPMFORGE REPOSITORY:
rpm -Uvh http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.i686.rpm

INSTALL RAR AND 7ZIP:
yum install rar unrar p7zip -y

CHECK A WEBSITE REDIRECTIONS AND HEADERS ONLY:
curl -L -s -D - somedomain.com -o /dev/null

DOWNLOAD FROM FTP USING CURL:
curl -u 'username':'password' 'ftp://ftp-server/file.zip' -o local_file.zip

FINDS ALL FILES CONTAINING A PATTERN:
grep -rnw 'directory' -e "pattern"

DOWNLOAD ALL CONTENTS OF THE FTP USING WGET
wget -r -l 0 --user="username" --ask-password ftp://domain.com/public_html/

COPY FILE FROM LOCAL TO SERVER PORT 2222:
scp -P 2222 -i /home/user/key.pem /home/user/backup.sql username@server:/opt/mysql/backup.sql

COPY FILE FROM SERVER PORT 2222 TO LOCAL:
scp -P 2222 -i /home/user/key.pem username@server:/opt/mysql/backup.sql /home/user/backup.sql

SYNC FILES FROM LOCAL TO SERVER IN PORT 2222:
rsync -azPvrltgo -e 'ssh -p 2222' /var/mail/ root@server:/var/mail/

SYNC FILES FROM SERVER TO LOCAL IN PORT 2222:
rsync -azPvrltgo -e 'ssh -p 2222' root@server:/var/mail/ /var/mail/

RESET MYSQL ROOT PASSWORD:
sudo service mysqld stop
sudo mysqld_safe --skip-grant-tables &
mysql -u root
use mysql;
update user set password=PASSWORD("newpassword") where User='root';
flush privileges; -- reload the grant tables so the new password takes effect
quit
sudo service mysqld restart

REPLACE STRING IN ALL FILES WITH SPECIFIC PATTERN (IN THE CURRENT DIR):
find . -type f -name "*.*" -exec sed -i 's/old-text/new-text/g' {} +
find . -type f -name "db-*-data-config.xml" -exec sed -i 's/old-text/new-text/g' {} +
find . -type f -name "solrconfig.xml" -exec sed -i 's/old-text/new-text/g' {} +

SHOW CONTENTS OF ALL THE FILES THAT FOUND:
find . -type f -name "db*data-config.xml" -exec cat {} \;

MAKE BACKUP OF SPECIFIC FOLDERS:
find . -type d -name "conf" -exec cp --parents -a {} /root/conf_backup/ \;


Visualising logs in Kibana

In previous posts (here and here) I have discussed installation of ELK (elasticsearch, logstash, kibana).

I myself have been using this system happily, because I did not have to go through all the raw logs on different servers. I could go through the log records, and search and sort different logs from different servers all in one place.

The problem arises when the number of logs hits the millions. In my case I had to deal with more than 30 million log records each week, which starts to become a disaster after a couple of weeks.

The solution is to work out what you need to find from the logs and use Kibana features to do exactly that. In this post I come up with two scenarios: one is to find crawlers, the other is to find daily raw traffic (by the number of requests).

So here we go. First, you need to have the basic Kibana setup in place, meaning at least the Discover tab should be working.

To create the visualisation for daily traffic, go to the Visualise tab, create a visualisation from a new search, and then select the vertical bar chart.

Now you need to add metrics. For the Y axis choose the “Unique Count” aggregation and “client_ip” as the field.
Then add an X axis with the “Date Histogram” aggregation, choose the timestamp as the field and select an interval (e.g. Daily). Click apply and save it if it looks OK.
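
For reference, the same daily unique-visitor count can be fetched straight from Elasticsearch (a sketch; the logstash-* index pattern and the client_ip/@timestamp field names are assumptions based on a typical Logstash setup):

curl -s 'localhost:9200/logstash-*/_search?pretty' -d '{
  "size": 0,
  "aggs": {
    "per_day": {
      "date_histogram": { "field": "@timestamp", "interval": "day" },
      "aggs": { "unique_visitors": { "cardinality": { "field": "client_ip" } } }
    }
  }
}'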

visitor count

To create the visualisation for finding crawlers, go to the Visualise tab, create a visualisation from a new search, and then select the data table.

Select the “Count” aggregation as the Metric. Then add a “Split Rows” aggregation: choose “Terms” as the aggregation, “client_ip” as the field, and finally set the number of results (Size) plus the sorting option (Order).
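
And a rough Elasticsearch equivalent of the crawler table (top client IPs by request count; same assumed field name):

curl -s 'localhost:9200/logstash-*/_search?pretty' -d '{
  "size": 0,
  "aggs": {
    "top_ips": { "terms": { "field": "client_ip", "size": 20 } }
  }
}'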

crawlers

 

 

Adding Network Address Translation (NAT) to Amazon private VPC

Assuming that the VPC is ready and there is one public subnet and one private subnet.

Just add an instance (I used Amazon Linux) in the public subnet and allow all incoming/outgoing traffic on it. It is important to disable the source/destination check on that instance (right-click on the EC2 instance and you will see the option).

Next you need to SSH into the NAT server and run the following commands:

sudo sysctl -q -w net.ipv4.ip_forward=1 net.ipv4.conf.eth0.send_redirects=0   # enable IP forwarding and disable ICMP redirects
sudo iptables -t nat -A POSTROUTING -o eth0 -s 10.0.0.0/16 -j MASQUERADE      # masquerade (NAT) traffic coming from the VPC CIDR; adjust 10.0.0.0/16 to yours

And then check that they are all set:

sudo iptables -n -t nat -L POSTROUTING
sysctl net.ipv4.ip_forward
sysctl net.ipv4.conf.eth0.send_redirects

Finally, go back to the AWS console, go to the VPC service and select the route table that is associated with the private subnet. Then change the default route (0.0.0.0/0) to point to the NAT instance.
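
If you prefer the CLI, the same route change can be made with something like this (a sketch; the route table and instance IDs are placeholders):

aws ec2 replace-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --instance-id i-xxxxxxxx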

Now you are good to go! All instances in your private subnet have internet access now.