Join Amazon Linux to a Domain Controller

I recently had a case where Linux machines were joined to a domain alongside the Windows machines. Domain users and groups were accessible and configurable on the Linux machines using Samba. The nice thing about the domain configuration was that we could simply assign domain accounts to users, and they could use them to log in to both Windows and Linux machines.

There is no need to manage users on each Linux machine, which is a great deal for enterprises. Below I walk you through the Linux setup and configuration using Samba, Kerberos and LDAP.

I strongly suggest backing up your data before doing anything. Better yet, test the new setup on a staging machine instead of running commands straight on your production environment.

The first step is to install Samba; but since Amazon Linux does not include a Samba repository, we add one and then install the package:

wget http://ftp.sernet.de/pub/samba/3.5/centos/5/sernet-samba.repo
mv sernet-samba.repo /etc/yum.repos.d/
yum install samba -y

Then install Kerberos:

yum install krb5-workstation -y

Obviously the important part is always configuration! Just for reference, here is the list of config files you may need to change:

  1. /etc/resolv.conf: contains the name servers; you will want to add the domain's nameserver here too, or else you will not be able to join the domain.
  2. /etc/sysconfig/network: contains the network configuration; you may need to change the localhost name to an FQDN (fully qualified domain name) such as abcd.mydomain.local (see the example after this list).
  3. /etc/krb5.conf: contains the Kerberos network authentication protocol configuration, which you will definitely need to update.
  4. /etc/samba/smb.conf: contains the Samba configuration and is necessary for everything to work.
  5. /etc/openldap/ldap.conf: contains the LDAP (Lightweight Directory Access Protocol) configuration.
  6. /etc/hosts: contains local records of hosts’ IP addresses. You might need to adjust entries here, especially if you changed the hostname.
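
For example, here is roughly what the hostname-related changes look like. The host name abcd.mydomain.local and the private IP 10.1.0.10 are placeholders for illustration only; substitute your own values:

sudo hostname abcd.mydomain.local

# /etc/sysconfig/network
HOSTNAME=abcd.mydomain.local

# /etc/hosts
10.1.0.10   abcd.mydomain.local   abcd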

I have copied my configurations as appendices so you can use them as a working reference.

At the end, there is one last piece of configuration that needs to be done before trying to join the domain. Here we enable the relevant authentication options:

authconfig --enablekrb5 --enablewinbind --enablemkhomedir --update
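
Among other things, this command adds winbind to the name service lookups. If you want to double-check, /etc/nsswitch.conf should end up with lines roughly like the following (the exact layout may differ on your system):

passwd: files winbind
shadow: files winbind
group:  files winbind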

Finally, we can join the domain with the “net ads” command as follows:

net ads join -W MYDOMAIN.LOCAL -U adadminuser -S ad.mydomain.local
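
Once the join succeeds, a few quick checks confirm everything is wired up. These are standard Samba/Kerberos commands; they assume the winbind service has been started, and adadminuser is the same placeholder account used above:

net ads testjoin
kinit adadminuser@MYDOMAIN.LOCAL
wbinfo -u
wbinfo -g
getent passwd adadminuser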

Appendix 1: resolv.conf

search ap-southeast-1.compute.internal mydomain.local
nameserver 10.x.x.x
nameserver 10.1.0.2

Appendix 2: krb5.conf

# make sure the realm/server name is in capital letters.

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = MYDOMAIN.LOCAL
 dns_lookup_realm = true
 dns_lookup_kdc = true
 ticket_lifetime = 24h
 forwardable = yes

[realms]
 MYDOMAIN.LOCAL = {
 }

[domain_realm]
 .mydomain.local = MYDOMAIN.LOCAL
 mydomain.local = MYDOMAIN.LOCAL

[appdefaults]
 pam = {
   debug = false
   ticket_lifetime = 36000
   renew_lifetime = 36000
   forwardable = true
   krb4_convert = false
 }

Appendix 3: smb.conf

[global]
workgroup = MYDOMAIN
realm = MYDOMAIN.LOCAL
security = ads
idmap uid = 16777216-33554431
idmap gid = 16777216-33554431
template homedir = /home/MYDOMAIN/%U
template shell = /bin/bash
winbind use default domain = true
winbind offline logon = false
server string = Samba Server Version %v

log file = /var/log/samba/log.%m
max log size = 50

passdb backend = tdbsam

load printers = yes
cups options = raw

[homes]
comment = Home Directories
browseable = no
writable = yes

[printers]
comment = All Printers
path = /var/spool/samba
browseable = no
guest ok = no
writable = no
printable = yes

Further info:
http://www.server-world.info/en/note?os=CentOS_7&p=samba&f=3
https://wiki.samba.org/index.php/Setup_a_Samba_AD_Member_Server

Installing and Configuring OpenVPN Access Server on an Amazon EC2 instance

Alright, in this post we are going to set up an OpenVPN Access Server.

*Just note that OpenVPN Access Server comes with a free license for only 2 users; if you have more than 2 users connected at the same time you need to buy a license ($99 per year per user, if I am not wrong).

Install the RPM (yum can fetch it directly from the URL):

sudo yum install -y http://swupdate.openvpn.org/as/openvpn-as-2.0.12-CentOS6.x86_64.rpm 

Once it is installed it might launch an auto-configuration script; just cancel it, because in my experience running the default OpenVPN configuration on Amazon EC2 ends up with some errors. To avoid that, we need to change some settings in the auto-config script:

vi /usr/local/openvpn_as/bin/_ovpn-init

And change the two openvpnas_gen_init calls to the following (you need to add --distro redhat on both lines because this script cannot detect the distro on its own):

/usr/local/openvpn_as/scripts/openvpnas_gen_init --distro redhat
/usr/local/openvpn_as/scripts/openvpnas_gen_init --auto --distro redhat

And finally run the ovpn initialiser script:

sudo /usr/local/openvpn_as/bin/ovpn-init --ec2 --verbose

Then just keep following the wizard-like prompts (I know, it's not Windows!). By default this script will add an openvpn user with the password you define in the wizard.
You can later simply log in to the Access Server at https://ovpn.yourdomain.com/admin/ for administration.
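
If for any reason the openvpn account ends up without a working password, you can always set one yourself with the usual Linux tooling (nothing OpenVPN-specific here):

sudo passwd openvpn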

Once you are in the admin panel, go to “Server Network Settings” and “User Permissions” to change the default configuration or to add/edit users.

Basics of using Elasticsearch

To see the available indices:

curl 'localhost:9200/_cat/indices?v'
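
A couple of closely related _cat endpoints are handy at the same time; these are standard Elasticsearch APIs, not specific to this setup:

curl 'localhost:9200/_cat/health?v'
curl 'localhost:9200/_cat/nodes?v'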

Some more examples for looking at Logstash indices:

curl -XGET 'http://localhost:9200/_mapping'
curl -XGET 'http://localhost:9200/logstash-*/log/_mapping'
curl -XGET 'http://localhost:9200/logstash-2015.01.08/_search?q=_type:logs&pretty=1'
curl -XGET 'http://localhost:9200/logstash-2015.01.08/_search?q=type:syslog&pretty=1'
curl -XGET 'http://localhost:9200/logstash-2015.01.08/syslog/_search?q=type:syslog&pretty=1'
curl -XGET 'http://localhost:9200/logstash-2015.01.12/loadbalancer/_search?q=tags:_grokparsefailure&pretty=1'

Deleting indices by name or based on a query:

curl -XDELETE 'http://localhost:9200/index_name/'
curl -XDELETE 'http://localhost:9200/logstash-*/_query?q=facility_label:user-level&pretty=1'
curl -XDELETE 'http://localhost:9200/logstash-2015.01.12/loadbalancer/_query?q=tags:_grokparsefailure&pretty=1'
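
Before deleting by query, it can be worth counting what would match; the _count API accepts the same q= syntax, so a preview of one of the deletes above looks like this:

curl -XGET 'http://localhost:9200/logstash-*/_count?q=facility_label:user-level&pretty'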

Backing up ES: we need a directory that Elasticsearch has full access to. I am actually not sure whether all of the permission changes below are necessary, but this is one of the actions I took to solve a bad problem!

sudo mkdir /usr/share/elasticsearch/backup/
sudo chmod 777 /usr/share/elasticsearch/backup/
sudo chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/backup/

And then register a snapshot repository pointing at that directory:

curl -XPUT 'http://localhost:9200/_snapshot/dopey_backup' -d '{
    "type": "fs",
    "settings": {
        "compress" : true,
        "location": "/usr/share/elasticsearch/backup"
    }
}'

Taking a snapshot, and restoring it later, is also essential:

curl -XPUT "localhost:9200/_snapshot/dopey_backup/snapshot_1?wait_for_completion=true"
curl -XPOST "localhost:9200/_snapshot/dopey_backup/snapshot_1/_restore"
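
To check on a snapshot afterwards, you can simply GET it back from the same repository (same placeholder names as above):

curl -XGET 'http://localhost:9200/_snapshot/dopey_backup/snapshot_1?pretty'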

How I configured ELK (Elasticsearch, Logstash, Kibana) central log management on Amazon Linux (EC2)

Well, my previous attempt at deploying ELK did not work on Amazon Linux for some reason!

Here is what I did to make it work. Note that the exact versions used are very important, as using any other version might cause the components not to work together!

First, download these packages from the official elasticsearch.org site:

wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.1.noarch.rpm
wget https://download.elasticsearch.org/logstash/logstash/packages/centos/logstash-1.4.2-1_2c0f5a1.noarch.rpm
wget https://download.elasticsearch.org/kibana/kibana/kibana-4.0.0-beta3.tar.gz

Then install them and do some cleanup:

sudo yum install elasticsearch-1.4.1.noarch.rpm -y
sudo yum install logstash-1.4.2-1_2c0f5a1.noarch.rpm -y
tar xvf kibana-4.0.0-beta3.tar.gz
sudo mkdir /opt/kibana/
sudo mv kibana-4.0.0-beta3/* /opt/kibana/
rm -rf kibana-4.0.0-beta3

Next, we need to configure each component:

Let's start with Elasticsearch (ES), the engine that stores and retrieves our data. We specify a name for the cluster and the host on which ES will accept traffic. We also define the allowed CORS origin.

sudo vi /etc/elasticsearch/elasticsearch.yml

Assumption: the local IP address is 192.168.1.2

cluster.name: myCompany
network.host: localhost
http.cors.enabled: true
http.cors.allow-origin: http://192.168.1.2

The main configuration work belongs to Logstash, since we need to specify how, what and where the logs are going to be handled and processed. In this case we enable syslog logs to be received on port 5000. Additionally, I need to pull in some logs from S3 (load balancer logs).

Basically, a Logstash configuration has 3 main parts: input, filter and output. In this case the input comes from S3 and rsyslog. The filter section requires some pattern or plugin to analyze the received data (grok is the best fit for our case, but there might be a ready-made plugin by the time you read this post). The last part is the output, which points at ES (the results will be sent to ES to be stored).

sudo vi /etc/logstash/conf.d/logstash.yml
    input {
        s3 {
            type => "loadbalancer"
            bucket => "myS3bucket"
            credentials => ["AKIAJAAAAAARL3SPBLFA", "Qx9gaaaa/59CMmPsCAAAAI7Hs8di7Eaaaar9SZo1"]
            region_endpoint => "ap-southeast-1"
        }
        syslog {
            type => syslog
            port => 5000
        }
    }
    filter {
        if [severity_label] == "Informational" {drop {}}
        if [facility_label] == "user-level" {drop {}}
        if [program] == "anacron" and [severity_label] == "notice" {drop {}}
        if [type] == "loadbalancer" {
            grok {
                match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:loadbalancer} %{IP:client_ip}:%{NUMBER:client_port:int} %{IP:backend_ip}:%{NUMBER:backend_port:int} %{NUMBER:request_processing_time:float} %{NUMBER:backend_processing_time:float} %{NUMBER:response_processing_time:float} %{NUMBER:elb_status_code:int} %{NUMBER:backend_status_code:int} %{NUMBER:received_bytes:int} %{NUMBER:sent_bytes:int} %{QS:request}" ]
            }
            date {
                match => [ "timestamp", "ISO8601" ]
            }
        }
        if [type] == "syslog" {
            grok {
                type => "syslog"
                pattern => [ "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{PROG:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" ]
                add_field => [ "received_at", "%{@timestamp}" ]
                add_field => [ "received_from", "%{@source_host}" ]
            }
            syslog_pri {
                type => "syslog"
            }

            date {
                type => "syslog"
                match => ["syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss"]
            }

            mutate {
                type => "syslog"
                exclude_tags => "_grokparsefailure"
                replace => [ "@source_host", "%{syslog_hostname}" ]
                replace => [ "@message", "%{syslog_message}" ]
            }
            mutate {
                type => "syslog"
                remove => [ "syslog_hostname", "syslog_message", "syslog_timestamp" ]
            }
        }
    }
    output {
        elasticsearch {
            host => "localhost"
            protocol => "http"
        }
    }
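
Before starting the service it is worth syntax-checking this file. Logstash 1.4 ships a --configtest flag; the path below assumes the RPM placed Logstash under /opt/logstash, which was the default for these packages:

sudo /opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/logstash.yml --configtest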

And finally, Kibana. We define the host and port number we want Kibana to listen on:

sudo vi /opt/kibana/config/kibana.yml
port: 8080
host: "192.168.1.2"

Now start them all:

sudo service elasticsearch start
sudo service logstash start
screen -d -m /opt/kibana/bin/kibana
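
A few quick sanity checks, using the same host and port values assumed in the configs above: Elasticsearch should answer on 9200 and Kibana on the host/port set in kibana.yml:

curl 'http://localhost:9200/_cluster/health?pretty'
curl -I 'http://192.168.1.2:8080'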

—————

Please make sure you have configured rsyslog on your other servers to send their logs to Logstash, by simply doing the following:

sudo vi /etc/rsyslog.conf

Add the following:

*.* @@logs.internal.albaloo.com:5000

And restart the syslog service:

sudo service rsyslog restart
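
To confirm the pipeline end to end, you can emit a test message from a client and then search for it in Elasticsearch. Note that the filter above drops informational and user-level messages, so pick a facility and severity that will not be dropped; the token elktest123 is just an arbitrary marker:

logger -p local0.warning "elktest123"
curl 'http://localhost:9200/logstash-*/_search?q=elktest123&pretty'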