Wednesday, August 12, 2015

How to find boto version installed?

To find the version of boto installed, run the following program

#!/usr/bin/python
import boto

print boto.Version

When I ran the above program, it printed the boto version installed on my system: 2.32.1
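
If boto was installed with pip, the version can also be checked without writing a script. Two equivalent one-liners (assuming pip is available and a Python 2 interpreter, to match the script above):

pip show boto

python -c "import boto; print(boto.__version__)"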

Tuesday, August 11, 2015

AWS: SSH ProxyCommand to login directly to private instance

In AWS, to SSH into a private server instance, we first need to SSH into the bastion host; only from the bastion host can we log in to the private server instances. That normally means storing our SSH private key on the bastion host so that we can log in to the private instances from there.

But storing the SSH private key on the bastion host is not a good practice. To avoid that, there are two options:
1) Use ssh-agent forwarding through the bastion host
2) Use the SSH ProxyCommand option

Let me explain the latter option, using SSH ProxyCommand to log in to an AWS private instance by tunneling through the bastion host.

From our localhost (desktop client), we need to:
1) SSH into our bastion host
2) Run a netcat command on the bastion host to open a connection to the remote host (private AWS instance)
3) Connect to the remote host (private AWS instance) through the netcat tunnel from the local desktop, without having to store the private SSH key on the bastion host.

OpenSSH 5.4 and above have netcat mode built in (the -W option). So on our local desktop, we configure the SSH client configuration file ~/.ssh/config as below

Host privateecinstance
     Hostname <aws_private_instance_ip>
     #Username to ssh into the private ec2 instance
     User ec2-user
     ProxyCommand ssh -W %h:%p ec2-user@<bastion-host-ip> 2> /dev/null

Now we can log in to the private ec2 instance from our local desktop as follows

ssh privateecinstance
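
The same thing can be done without touching ~/.ssh/config by passing ProxyCommand directly on the command line; this one-off invocation is equivalent to the configuration above:

ssh -o ProxyCommand="ssh -W %h:%p ec2-user@<bastion-host-ip>" ec2-user@<aws_private_instance_ip>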

Saturday, August 1, 2015

Fluentd, ElasticSearch, Kibana Installation in CentOS 7

To aggregate logs in a single place and get an integrated view of the aggregated logs through a UI, people normally use the ELK stack (Elasticsearch, Logstash, Kibana). Here Fluentd takes the place of Logstash:

  • Fluentd - for aggregating logs on a single server
  • Elasticsearch - for indexing the aggregated logs
  • Kibana - GUI for viewing the logs

I will install all three, Fluentd, Elasticsearch and Kibana, on a single host

Fluentd

Logs are streams - no beginning or end. We need to send logs from all the hosts to the Elasticsearch server for indexing. For streaming logs to a centralized server, there are various tools such as Fluentd, Logstash, Flume and Scribe. Here I am using Fluentd.

Installation

Fluentd is available as the td-agent package or as the plain fluentd gem. Here I am using td-agent.

The difference between fluentd and td-agent is explained at

           http://www.fluentd.org/faqs

curl -L https://td-toolbelt.herokuapp.com/sh/install-redhat-td-agent2.sh | sh
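
Once the install script finishes, it is worth confirming what got installed. A quick check (td-agent is a wrapper around fluentd, so it is expected to accept the same --version flag):

#Confirm the td-agent package is installed and check the bundled fluentd version
rpm -qi td-agent
td-agent --version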

Configuration

Fluentd configuration file is /etc/td-agent/td-agent.conf

Configure Fluentd to aggregate rsyslog messages to elasticsearch as follows

# listen for syslog messages forwarded by rsyslog on UDP port 42185
<source>
  type syslog
  port 42185
  tag syslog
</source>

<match syslog.**>
  type elasticsearch
  logstash_format true         # Kibana understands only the logstash index format
  flush_interval 10s           # short flush interval, for testing
</match>
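
The match block above uses the elasticsearch output plugin; if it is not bundled with your td-agent build, it can be installed with the td-agent-gem command that ships with td-agent 2:

td-agent-gem install fluent-plugin-elasticsearch

The source block only listens; each host's rsyslog must be told to forward its messages to that port. A minimal sketch, assuming rsyslog is running on the same host and using the UDP protocol that the syslog input expects by default:

#Forward all syslog messages to the Fluentd syslog listener on UDP port 42185
echo '*.* @127.0.0.1:42185' >> /etc/rsyslog.conf
systemctl restart rsyslog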

Start fluentd(td-agent)

#Check the status of the td-agent service
/etc/init.d/td-agent status

#Start the td-agent service
/etc/init.d/td-agent start

#On CentOS 7, use systemctl to check status, start and stop the td-agent service
systemctl status td-agent
systemctl start td-agent
systemctl stop td-agent

#Enable td-agent to start on boot automatically
systemctl enable td-agent
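
Once td-agent is running, verify that the syslog listener from the configuration above is actually bound to UDP port 42185 (using ss, since net-tools may not be present on a minimal CentOS 7 install):

#Verify td-agent is listening on the configured syslog port
ss -ulnp | grep 42185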

Next we will install Elasticsearch. Elasticsearch needs Java, so we will start with the Java installation first.

Fluentd (td-agent) log file and PID file paths

Log file : /var/log/td-agent/td-agent.log
PID file : /var/run/td-agent/td-agent.pid

Java8 Installation 

cd /opt

wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u45-b14/jdk-8u45-linux-x64.tar.gz"

tar xzf jdk-8u45-linux-x64.tar.gz 

alternatives --install /usr/bin/java java /opt/jdk1.8.0_45/bin/java 2
alternatives --config java
alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_45/bin/jar 2
alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_45/bin/javac 2
alternatives --set jar /opt/jdk1.8.0_45/bin/jar
alternatives --set javac /opt/jdk1.8.0_45/bin/javac

sh -c "echo export JAVA_HOME=/opt/jdk1.8.0_45 >> /etc/environment"
sh -c "echo export JRE_HOME=/opt/jdk1.8.0_45/jre >> /etc/environment"  
sh -c "echo export PATH=$PATH:/opt/jdk1.8.0_45/bin:/opt/jdk1.8.0_45/jre/bin >> /etc/environment"

cat /etc/environment
java -version

Elasticsearch

yum repo setup

cd /opt/
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
cd /etc/yum.repos.d/

vi elasticsearch.repo
   [elasticsearch-1.6]
   name=Elasticsearch repository for 1.6.x packages
   baseurl=http://packages.elastic.co/elasticsearch/1.6/centos
   gpgcheck=1
   gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
   enabled=1

Installation

yum install elasticsearch

#Check the status of elasticsearch service
systemctl status elasticsearch

#Enable the elasticsearch service to be started on boot
systemctl enable elasticsearch

#Start the elasticsearch service
systemctl start elasticsearch

Status Check

curl http://localhost:9200/

tail -f /var/log/elasticsearch/elasticsearch.log

Elasticsearch system configuration settings (heap size, memory locking, open file limits and so on) are covered at

https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html
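
As a sketch of the kind of change that page describes, the RPM install reads environment settings from /etc/sysconfig/elasticsearch and its main configuration from /etc/elasticsearch/elasticsearch.yml. The values below are examples only; the heap size in particular depends on the RAM of the host:

#Example only: give Elasticsearch a fixed heap and let it lock that memory
echo 'ES_HEAP_SIZE=2g' >> /etc/sysconfig/elasticsearch
echo 'MAX_LOCKED_MEMORY=unlimited' >> /etc/sysconfig/elasticsearch
echo 'bootstrap.mlockall: true' >> /etc/elasticsearch/elasticsearch.yml
systemctl restart elasticsearch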

Elasticsearch Common Query commands

Data in Elasticsearch is organized by index, type and document id, and a single document can be retrieved as

curl 'localhost:9200/<index>/<type>/<id>'

#To get a list of indices
curl 'localhost:9200/_cat/indices?v'

In our case, the indices will be named logstash-<yyyy.mm.dd>, since Fluentd is sending the data in logstash format

#To check for data under an index
curl 'localhost:9200/logstash-<yyyy.mm.dd>/_search?pretty=true'

#To query for data under an index and a type ("fluentd" in our case)
curl 'localhost:9200/logstash-<yyyy.mm.dd>/fluentd/_search?pretty=true'

#To check whether data exists for a given time period in an elasticsearch index
curl 'localhost:9200/logstash-<yyyy.mm.dd>/fluentd/_search?q="00:00"&pretty=true'

#Get cluster health
curl 'localhost:9200/_cluster/health'

#To get the health of a single index, for example the index logstash-2015.07.01
curl -XGET 'http://localhost:9200/_cluster/health/logstash-2015.07.01'
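
Old daily indices can also be deleted once they are no longer needed, which is the usual way to reclaim disk space with date-based indices:

#To delete an old index, for example logstash-2015.07.01
curl -XDELETE 'http://localhost:9200/logstash-2015.07.01'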

Kibana 4

Installation

cd /opt

wget https://download.elastic.co/kibana/kibana/kibana-4.1.0-linux-x64.tar.gz

tar xzvf kibana-4.1.0-linux-x64.tar.gz

mv kibana-4.1.0-linux-x64 /opt/kibana4

#Enable the PID file for Kibana; this is required for the systemd unit file below.
sed -i 's/#pid_file/pid_file/g' /opt/kibana4/config/kibana.yml


Start/Stop

Kibana4 service can be started by running /opt/kibana4/bin/kibana

# kibana4 start/stop systemd unit file

vi /etc/systemd/system/kibana4.service

[Unit]
Description=Kibana 4 Web Interface
After=elasticsearch.service
After=td-agent.service

[Service]
ExecStartPre=/bin/rm -rf /var/run/kibana.pid
ExecStart=/opt/kibana4/bin/kibana
ExecReload=/bin/sh -c 'kill -9 `cat /var/run/kibana.pid` && rm -rf /var/run/kibana.pid && /opt/kibana4/bin/kibana'
ExecStop=/bin/sh -c 'kill -9 `cat /var/run/kibana.pid`'

[Install]
WantedBy=multi-user.target

# Start and enable kibana to start automatically at system startup.
systemctl start kibana4.service
systemctl enable kibana4.service
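
To confirm that Kibana came up, check the service and hit the default Kibana port 5601 locally:

#Check the kibana4 service and the HTTP response on port 5601
systemctl status kibana4.service
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5601/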

Kibana Portal Access

http://<kibana_server-ip-address>:5601/
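
If firewalld is enabled on the Kibana host (the CentOS 7 default), port 5601 also has to be opened before the portal is reachable from another machine:

firewall-cmd --permanent --add-port=5601/tcp
firewall-cmd --reload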