To aggregate logs in a single place and view them through an integrated UI, people commonly use the ELK stack. Here Fluentd takes the place of Logstash:
- Fluentd - for shipping logs from all hosts to a single server
- Elasticsearch - for indexing the aggregated logs
- Kibana - a GUI for viewing the logs
I will install all three (Fluentd, Elasticsearch, and Kibana) on a single host.
Fluentd
Logs are streams - they have no beginning or end. We need to send logs from all the hosts to the Elasticsearch server for indexing. For streaming logs to a centralized server, there are various tools such as Fluentd, Logstash, Flume, and Scribe. Here I am using Fluentd.
Installation
Fluentd is available as the td-agent or fluentd package. Here I am using td-agent.
The difference between fluentd and td-agent is explained at
http://www.fluentd.org/faqs
curl -L https://td-toolbelt.herokuapp.com/sh/install-redhat-td-agent2.sh | sh
Configuration
Fluentd configuration file is /etc/td-agent/td-agent.conf
Configure Fluentd to forward rsyslog messages to Elasticsearch as follows:
# collect syslog messages
<source>
type syslog
port 42185
tag syslog
</source>
<match syslog.**>
type elasticsearch
logstash_format true #Kibana understands only logstash format
flush_interval 10s # for testing
</match>
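The <source> block above only listens; each host still has to forward its syslog stream to port 42185. A minimal sketch, assuming rsyslog running on the same host (for remote hosts, replace 127.0.0.1 with the Fluentd server's address): add one line to /etc/rsyslog.conf, then restart rsyslog.

```
# /etc/rsyslog.conf
# Forward all facilities/priorities to Fluentd's syslog input over UDP
*.* @127.0.0.1:42185
```

A single @ means UDP; @@ would forward over TCP, which requires the Fluentd syslog source to be configured for TCP as well. Restart rsyslog afterwards (systemctl restart rsyslog on CentOS 7).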
Start Fluentd (td-agent)
#Check the status of td-agent service
/etc/init.d/td-agent status
#To enable td-agent to start on boot automatically
chkconfig td-agent on
#To start the td-agent service
/etc/init.d/td-agent start
#On CentOS 7, use the systemctl command to check status, start, and stop the td-agent service
systemctl status td-agent
systemctl start td-agent
systemctl stop td-agent
Next we will install Elasticsearch. But Elasticsearch needs Java, so we will start with the Java installation first.
Fluentd (td-agent) log file and PID path
Log file : /var/log/td-agent/td-agent.log
PID path : /var/run/td-agent/td-agent.pid
Java 8 Installation
cd /opt
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u45-b14/jdk-8u45-linux-x64.tar.gz"
tar xzf jdk-8u45-linux-x64.tar.gz
alternatives --install /usr/bin/java java /opt/jdk1.8.0_45/bin/java 2
alternatives --config java
alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_45/bin/jar 2
alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_45/bin/javac 2
alternatives --set jar /opt/jdk1.8.0_45/bin/jar
alternatives --set javac /opt/jdk1.8.0_45/bin/javac
sh -c "echo export JAVA_HOME=/opt/jdk1.8.0_45 >> /etc/environment"
sh -c "echo export JRE_HOME=/opt/jdk1.8.0_45/jre >> /etc/environment"
sh -c "echo export PATH=$PATH:/opt/jdk1.8.0_45/bin:/opt/jdk1.8.0_45/jre/bin >> /etc/environment"
cat /etc/environment
java -version
Elasticsearch
yum repo setup
cd /opt/
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
cd /etc/yum.repos.d/
vi elasticsearch.repo
[elasticsearch-1.6]
name=Elasticsearch repository for 1.6.x packages
baseurl=http://packages.elastic.co/elasticsearch/1.6/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
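As a non-interactive alternative to editing the file with vi, the same repo definition can be written in one step with a heredoc (contents as above; the [elasticsearch-1.6] section header is required by yum):

```
cat > /etc/yum.repos.d/elasticsearch.repo <<'EOF'
[elasticsearch-1.6]
name=Elasticsearch repository for 1.6.x packages
baseurl=http://packages.elastic.co/elasticsearch/1.6/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF
```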
Installation
yum install elasticsearch
#Check the status of elasticsearch service
systemctl status elasticsearch
#Enable the elasticsearch service to be started on boot
systemctl enable elasticsearch
#Start the elasticsearch service
systemctl start elasticsearch
Status Check
curl http://localhost:9200/
tail -f /var/log/elasticsearch/elasticsearch.log
Elasticsearch system configuration settings are documented at
https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html
Elasticsearch Common Query commands
Data in Elasticsearch is organized by index, type, and document id, so most queries follow the pattern
curl 'localhost:9200/<index>/<type>/<id>/'
#To get a list of indices
curl 'localhost:9200/_cat/indices?v'
In our case, the indices will be named logstash-<yyyy.mm.dd>, as Fluentd is sending data in logstash format
#To check for data under an index
curl 'localhost:9200/logstash-<yyyy.mm.dd>/_search?pretty=true'
#To query for data under an index and a type (it is "fluentd" in our case)
curl 'localhost:9200/logstash-<yyyy.mm.dd>/fluentd/_search?pretty=true'
#To check whether data exists for a given time period in an Elasticsearch index
curl 'localhost:9200/logstash-<yyyy.mm.dd>/fluentd/_search?q="00:00"&pretty=true'
#Get cluster health
curl 'localhost:9200/_cluster/health'
#To get the health of a single index, for example logstash-2015.07.01
curl -XGET 'http://localhost:9200/_cluster/health/logstash-2015.07.01'
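The queries above can be scripted. A small sketch, assuming the daily logstash-YYYY.MM.DD naming that Fluentd's logstash_format produces; the search command is printed rather than executed so the snippet works even when Elasticsearch is not yet up, and the health JSON is an illustrative sample, not real output:

```shell
# Build today's index name in the logstash-YYYY.MM.DD format Fluentd creates
INDEX="logstash-$(date +%Y.%m.%d)"
echo "curl 'localhost:9200/${INDEX}/_search?pretty=true'"

# Extract the status field (green/yellow/red) from a cluster-health response;
# the sample JSON below stands in for: curl -s localhost:9200/_cluster/health
HEALTH='{"cluster_name":"elasticsearch","status":"green","number_of_nodes":1}'
echo "$HEALTH" | grep -o '"status":"[a-z]*"'
```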
Kibana 4
Installation
cd /opt
wget https://download.elastic.co/kibana/kibana/kibana-4.1.0-linux-x64.tar.gz
tar xzvf kibana-4.1.0-linux-x64.tar.gz
mv kibana-4.1.0-linux-x64 /opt/kibana4
#Enable the PID file for Kibana; this is required for the systemd unit file.
sed -i 's/#pid_file/pid_file/g' /opt/kibana4/config/kibana.yml
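What that sed does, illustrated on a sample line (the sample is an assumption; the exact commented line in kibana.yml may differ): it strips the leading # so the pid_file setting takes effect.

```shell
# Demonstrate the substitution on a sample commented line from kibana.yml
echo '#pid_file: /var/run/kibana.pid' | sed 's/#pid_file/pid_file/g'
# prints: pid_file: /var/run/kibana.pid
```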
Start/Stop
The Kibana 4 service can be started by running /opt/kibana4/bin/kibana directly, but a systemd unit makes it manageable:
# kibana4 start/stop systemd script
vi /etc/systemd/system/kibana4.service
[Unit]
Description=Kibana 4 Web Interface
After=elasticsearch.service
After=td-agent.service
[Service]
PIDFile=/var/run/kibana.pid
ExecStartPre=/bin/rm -rf /var/run/kibana.pid
ExecStart=/opt/kibana4/bin/kibana
ExecStop=/bin/kill -9 $MAINPID
[Install]
WantedBy=multi-user.target
# Start Kibana and enable it to start automatically at boot.
systemctl start kibana4.service
systemctl enable kibana4.service
Kibana Portal Access
http://<kibana_server-ip-address>:5601/