Monday, December 28, 2015

AWS CloudFormation : Intrinsic Functions

AWS CloudFormation templates provide intrinsic functions that can be used in resource properties and metadata attributes.



String Handling Functions

Fn::Base64 - Returns the Base64 representation of the input string. This function is typically used to pass encoded data to Amazon EC2 instances by way of the UserData property.

Fn::Join - Appends a set of values into a single value, separated by the specified delimiter. If the delimiter is an empty string, the values are concatenated with no delimiter.

Functions for Managing Data and Variables inside the Template

Fn::FindInMap - Returns the value corresponding to keys in a two-level map that is declared in the Mappings section.

Fn::GetAtt - Returns the value of an attribute from a resource in the template.

Ref - Returns the value of the specified parameter or resource.

Region Selection Function

Fn::GetAZs - Returns an array that lists the Availability Zones for a specified region.
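
Putting several of these together, here is a minimal, illustrative template fragment (the mapping, the AMI ID and the resource names are made-up placeholders):

{
  "Parameters": {
    "InstanceName": { "Type": "String" }
  },
  "Mappings": {
    "RegionAMI": {
      "us-east-1": { "AMI": "ami-xxxxxxxx" }
    }
  },
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": { "Fn::FindInMap": [ "RegionAMI", { "Ref": "AWS::Region" }, "AMI" ] },
        "UserData": { "Fn::Base64": { "Fn::Join": [ "", [
          "#!/bin/bash\n",
          "echo Hello from ", { "Ref": "InstanceName" }, "\n"
        ] ] } }
      }
    }
  },
  "Outputs": {
    "ServerAZ": { "Value": { "Fn::GetAtt": [ "WebServer", "AvailabilityZone" ] } },
    "AllAZs": { "Value": { "Fn::Join": [ ",", { "Fn::GetAZs": { "Ref": "AWS::Region" } } ] } }
  }
}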





Saturday, November 7, 2015

Chef 12 Installation in CentOS

Chef has 3 components:

  • Chef Server (192.168.56.101) - CentOS 6.4
  • Chef Workstation (192.168.56.102) - CentOS 7.0
  • Chef Client(Node) (192.168.56.103) - CentOS 6.4

Configure hostnames

In CentOS 6.4, set the hostname in the /etc/sysconfig/network file as follows


HOSTNAME=chef-server    (in chef server 192.168.56.101)
HOSTNAME=chef-client     (in chef client 192.168.56.103)

In CentOS 7.0, set the hostname

hostnamectl set-hostname chef-workstation (192.168.56.102)


Then, in the /etc/hosts file on each of the above hosts, make entries as follows


192.168.56.101 chef-server chef-server.localdomain
192.168.56.102 chef-workstation chef-workstation.localdomain
192.168.56.103 chef-client chef-client.localdomain

Chef Server Installation


# wget https://web-dl.packagecloud.io/chef/stable/packages/el/6/chef-server-core-12.2.0-1.el6.x86_64.rpm
# rpm -ivh chef-server-core-12.2.0-1.el6.x86_64.rpm
# chef-server-ctl reconfigure
# chef-server-ctl test

# mkdir -p /etc/chef-server/

Create Admin User

Syntax for creating a chef user account
chef-server-ctl user-create user_name first_name last_name email password --filename FILE_NAME

An RSA private key is generated automatically. This is the user’s private key and should be saved to a safe location. The --filename option will save the RSA private key to a specified path. 

I am creating a Chef user "chefadmin" whose key is chefadmin.pem

chef-server-ctl user-create chefadmin ChefUser Admin chefadmin@chef-server.com 1q2w3e4r --filename /etc/chef-server/chefadmin.pem

Create an Org

Syntax for creating a chef org

chef-server-ctl org-create short_name "full_organization_name" --association_user user_name --filename ORGANIZATION-validator.pem

The --association_user option will associate the user_name with the admins security group on the Chef server.

An RSA private key is generated automatically. This is the chef-validator key and should be saved to a safe location. The --filename option will save the RSA private key to the specified path.

I am creating an org "chefserver" whose key is chefserver-validator.pem, using the user chefadmin created earlier

chef-server-ctl org-create chefserver ChefServer --association_user chefadmin --filename /etc/chef-server/chefserver-validator.pem
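
To confirm that both exist, the users and orgs can be listed on the Chef server:

# chef-server-ctl user-list
# chef-server-ctl org-list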


Chef workstation setup


1) As the root user, run the following command

# curl -L https://www.opscode.com/chef/install.sh | bash

2) When the installation is finished enter the chef-client command to verify that the chef-client was installed:
# chef-client -v

3) As a normal user, I will create the ".chef" directory under the user's home directory /home/xyz/.chef, where xyz is the username

The .chef directory is used to store three files:
  • knife.rb
  • ORGANIZATION-validator.pem
  • USER.pem

The *.pem keys are the ones generated in Chef Server for User and Organization.
We need to copy those keys from the Chef server to the Chef workstation

$ cd /home/xyz/

Copying the User key from Chef Server to Chef workstation
$ scp root@chef-server:/etc/chef-server/chefadmin.pem ~/.chef/

Copying the Organization key from Chef Server to Chef workstation
$ scp root@chef-server:/etc/chef-server/chefserver-validator.pem ~/.chef/

Configure the knife configuration file ~/.chef/knife.rb as follows
log_level                :info
log_location             STDOUT
node_name                'chefadmin'
client_key               '/home/xyz/.chef/chefadmin.pem'
validation_client_name   'chefserver-validator'      
validation_key           '/home/xyz/.chef/chefserver-validator.pem'  
chef_server_url          'https://chef-server:443/organizations/chefserver'
syntax_check_cache_path  '/home/xyz/chef-repo/.chef/syntax_check_cache'

Run knife ssl fetch to trust the server’s self-signed cert.
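
$ knife ssl fetch
$ knife ssl check

knife ssl fetch stores the server certificate under ~/.chef/trusted_certs/; knife ssl check then verifies that knife can talk to the server using it.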

knife client list should now show you the name of your validator, which in this case is:
chefserver-validator

knife user list
chefadmin

Bootstrapping Chef Client

We will install the chef-client software on the chef-client machine from the chef-workstation machine

Bootstrapping a node installs the chef-client and validates the node, allowing it to read from the Chef server.

1) From Chef workstation, bootstrap the chef client node by using the chef client node’s root user
   knife bootstrap <Chef Client IP> -x root -P password --node-name <nodename>

   <nodename> is optional. If not specified, the hostname of the Chef client node is used as the node name

    knife bootstrap <Chef Client IP or hostname>
    $ knife bootstrap chef-client (or)  knife bootstrap 192.168.56.103

2) Confirm that the node has been bootstrapped by listing the nodes in Chef Workstation by running the command
 $ knife node list 

Reference

  • https://www.digitalocean.com/community/tutorials/how-to-create-simple-chef-cookbooks-to-manage-infrastructure-on-ubuntu
  • https://www.linode.com/docs/applications/chef/setting-up-chef-ubuntu-14-04


Tuesday, November 3, 2015

Chef 12 Workstation : Response: missing read permission

While setting up the Chef Workstation, after configuring the ~/.chef/knife.rb file, I tried validating the Chef Workstation against the Chef Server by running the command

[chef-workstation .chef]$ knife user list
ERROR: You authenticated successfully to https://chef-server:443 as chefadmin but you are not authorized for this action
Response:  missing read permission

Upon analyzing the cause, it was figured out that in the file ~/.chef/knife.rb on the Chef Workstation, the entry for chef_server_url was wrongly specified as

chef_server_url          'https://chef-server:443/'  - Wrong

From Chef 12, this should be specified as
chef_server_url          'https://chef-server:443/organizations/xxxx' - Correct

where, xxxx - Name of the Organization created in Chef Server

Thursday, October 29, 2015

Ruby Path variable $LOAD_PATH

In Ruby, to find the list of directories searched by the load and require methods, we can use the global variable $LOAD_PATH

#!/usr/bin/env ruby

p $LOAD_PATH
p $:

$: is a short synonym for $LOAD_PATH
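
To make require pick up code from a custom directory, a path can be prepended to $LOAD_PATH. A minimal sketch, assuming a lib directory containing mylib.rb sits next to the script:

#!/usr/bin/env ruby

# Prepend ./lib (relative to this script) to the load path
$LOAD_PATH.unshift(File.expand_path('lib', File.dirname(__FILE__)))

require 'mylib'    # now resolves to ./lib/mylib.rb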

Wednesday, August 12, 2015

How to find boto version installed?

To find the version of boto installed, run the following program

#!/usr/bin/python
import boto

print boto.Version

When I ran the above program, I got the version of boto installed in my system as 2.32.1
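
boto 2 also exposes the version as boto.__version__, so the same check can be done as a one-liner from the shell (and, if pip is installed, pip can report it too):

python -c 'import boto; print boto.__version__'
pip show boto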

Tuesday, August 11, 2015

AWS: SSH ProxyCommand to login directly to private instance

In AWS, to SSH into a private server instance, we first need to SSH into the bastion host. Only from the bastion host can we log in to private server instances. Hence we would need to store our SSH private key on the bastion host to be able to log in to private instances.

But storing the SSH private key on the bastion host is not a good practice. To avoid that, there are two possibilities
1) Use ssh-agent to forward keys through the bastion host (a short sketch follows this list)
2) Use ssh ProxyCommand
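
A minimal sketch of the first option, agent forwarding (the key path and addresses are placeholders):

# On the local desktop: load the private key into the agent
ssh-add ~/.ssh/mykey.pem

# -A forwards the agent connection to the bastion host
ssh -A ec2-user@<bastion-host-ip>

# On the bastion host: the forwarded agent supplies the key, nothing is stored there
ssh ec2-user@<aws_private_instance_ip>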

Let me explain the latter option of using ssh ProxyCommand to log in to an AWS private instance by tunneling through the bastion host.

From our localhost (desktop client), we need to
1) SSH into our bastion host
2) Run netcat command on the bastion host to open a connection to the remote host(private aws instance)
3) Connect to the remote host(private aws instance) through the netcat tunnel from the local desktop without having to store the private ssh key in the bastion host.

OpenSSH 5.4 and above have a netcat mode built in (the -W option). So on our local desktop, we need to configure the ssh client configuration ~/.ssh/config as below

Host privateecinstance
     Hostname <aws_private_instance_ip>
     User ec2-user                                              #Username to ssh into private ec2 instance
     ProxyCommand ssh -W %h:%p ec2-user@<bastion-host-ip>  2> /dev/null

Now we can login to private ec2 instance from our local desktop as follows

ssh privateecinstance
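
For OpenSSH versions older than 5.4, which lack the -W option, the same tunnel can be built with an explicit netcat on the bastion host (assuming nc is installed there):

Host privateecinstance
     Hostname <aws_private_instance_ip>
     User ec2-user
     ProxyCommand ssh ec2-user@<bastion-host-ip> nc %h %p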

Saturday, August 1, 2015

Fluentd, ElasticSearch, Kibana Installation in CentOS 7

To aggregate logs in a single place and get an integrated view of them through a UI, people normally use the ELK stack (here with Fluentd in place of Logstash).

  • Fluentd - For aggregating logs in a single server
  • Elasticsearch - For Indexing the aggregated logs
  • Kibana - GUI for viewing the logs
I will install all three, Fluentd, Elasticsearch and Kibana, on a single host

Fluentd

Logs are streams - no beginning or end. We need to send logs from all the hosts to the Elasticsearch server for indexing. For streaming logs to a centralized server, we have various tools like Fluentd, LogStash, Flume and Scribe. Here I am using Fluentd.

Installation

Fluentd is available as the td-agent or the fluentd package. Here I am using td-agent.

The difference between fluentd and td-agent is listed at

           http://www.fluentd.org/faqs

curl -L https://td-toolbelt.herokuapp.com/sh/install-redhat-td-agent2.sh | sh

Configuration

Fluentd configuration file is /etc/td-agent/td-agent.conf

Configure Fluentd to aggregate rsyslog messages to elasticsearch as follows

# collect syslog messages
<source>
  type syslog
  port 42185
  tag syslog
</source>

<match syslog.**> 
  type elasticsearch 
  logstash_format true         #Kibana understands only logstash format
  flush_interval 10s # for testing 
</match> 
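
The source above only listens on port 42185; the local rsyslog still has to be told to forward messages there. A minimal sketch, assuming a stock rsyslog on the same host - add the following line to /etc/rsyslog.conf (@ means UDP) and restart rsyslog:

*.* @127.0.0.1:42185

systemctl restart rsyslog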

Start fluentd(td-agent)

#Check the status of td-agent service
/etc/init.d/td-agent status

#To enable td-agent to start on boot automatically (SysV style)
chkconfig td-agent on

#To start the td-agent service
/etc/init.d/td-agent start

#On CentOS 7, the systemctl command can be used instead to check status, start, stop and enable the td-agent service
systemctl status td-agent
systemctl start td-agent
systemctl stop td-agent
systemctl enable td-agent

Next we will install Elasticsearch. But Elasticsearch needs Java. So we will start with Java installation first

fluentd(td-agent) log file and pid path

Log file   : /var/log/td-agent/td-agent.log
PID path : /var/run/td-agent/td-agent.pid

Java 8 Installation

cd /opt

wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u45-b14/jdk-8u45-linux-x64.tar.gz"

tar xzf jdk-8u45-linux-x64.tar.gz 

alternatives --install /usr/bin/java java /opt/jdk1.8.0_45/bin/java 2
alternatives --config java
alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_45/bin/jar 2
alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_45/bin/javac 2
alternatives --set jar /opt/jdk1.8.0_45/bin/jar
alternatives --set javac /opt/jdk1.8.0_45/bin/javac

sh -c "echo export JAVA_HOME=/opt/jdk1.8.0_45 >> /etc/environment"
sh -c "echo export JRE_HOME=/opt/jdk1.8.0_45/jre >> /etc/environment"  
sh -c "echo export PATH=$PATH:/opt/jdk1.8.0_45/bin:/opt/jdk1.8.0_45/jre/bin >> /etc/environment"

cat /etc/environment
java -version

Elasticsearch

yum repo setup

cd /opt/
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
cd /etc/yum.repos.d/

vi elasticsearch.repo
   [elasticsearch-1.6]
   name=Elasticsearch repository for 1.6.x packages
   baseurl=http://packages.elastic.co/elasticsearch/1.6/centos
   gpgcheck=1
   gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
   enabled=1

Installation

yum install elasticsearch

#Check the status of elasticsearch service
systemctl status elasticsearch

#Enable the elasticsearch service to be started on boot
systemctl enable elasticsearch

#Start the elasticsearch service
systemctl start elasticsearch

Status Check

curl http://localhost:9200/

tail -f /var/log/elasticsearch/elasticsearch.log

Elasticsearch system configuration setting changes

https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html

Elasticsearch Common Query commands

Elasticsearch data is organized and addressed by index, type and document id as

curl 'localhost:9200/<index>/<type>/<id>/'

#To get a list of indices
curl 'localhost:9200/_cat/indices?v'

In our case, the indices will be of the form logstash-<yyyy.mm.dd>, as Fluentd is sending data in logstash format

#To check for data under an index
curl 'localhost:9200/logstash-<yyyy.mm.dd>/_search?pretty=true'

#To query for data under an index and a type (it is "fluentd" in our case)
curl 'localhost:9200/logstash-<yyyy.mm.dd>/fluentd/_search?pretty=true'

#To check if there is data for a given time period in an elasticsearch index
curl 'localhost:9200/logstash-<yyyy.mm.dd>/fluentd/_search?q="00:00"&pretty=true'

#Get cluster health
curl 'localhost:9200/_cluster/health'

#To get the health of an index, for example say for the index logstash-2015.07.01
curl -XGET 'http://localhost:9200/_cluster/health/logstash-2015.07.01'
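
To verify indexing end to end, a test document can be indexed and fetched by hand (the index and type names here are arbitrary):

#Index a test document with id 1
curl -XPUT 'localhost:9200/test-index/testdoc/1' -d '{"message": "hello elasticsearch"}'

#Fetch it back
curl 'localhost:9200/test-index/testdoc/1?pretty=true'

#Remove the test index
curl -XDELETE 'localhost:9200/test-index'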

Kibana 4

Installation

cd /opt

wget https://download.elastic.co/kibana/kibana/kibana-4.1.0-linux-x64.tar.gz

tar xzvf kibana-4.1.0-linux-x64.tar.gz

mv kibana-4.1.0-linux-x64 /opt/kibana4

#Enable the PID file for Kibana; this is required by the systemd unit file below.
sed -i 's/#pid_file/pid_file/g' /opt/kibana4/config/kibana.yml


Start/Stop

The Kibana 4 service can be started by running /opt/kibana4/bin/kibana

# kibana4 start/stop systemd script

 vi /etc/systemd/system/kibana4.service

[Unit]
Description=Kibana 4 Web Interface
After=elasticsearch.service
After=td-agent.service

[Service]
ExecStartPre=/bin/rm -rf /var/run/kibana.pid
ExecStart=/opt/kibana4/bin/kibana
ExecStop=/bin/kill $MAINPID

[Install]
WantedBy=multi-user.target

# Start and enable kibana to start automatically at system startup.
systemctl start kibana4.service
systemctl enable kibana4.service

Kibana Portal Access

http://<kibana_server-ip-address>:5601/

Wednesday, May 27, 2015

Private and Public IP Addresses

IP addresses can be classified into Private and Public IP addresses.

To conserve the IP address space, private IP addresses were introduced. Private IP addresses are used on internal networks and are never advertised to the public network.

Private IP address ranges:

10.0.0.0 - 10.255.255.255
Total host addresses : 16,777,216

172.16.0.0 - 172.31.255.255
Total host addresses : 1,048,576

192.168.0.0 - 192.168.255.255
Total host addresses : 65,536

Private IP addresses go through NAT (Network Address Translation) to communicate with the public Internet.
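
On a Linux gateway, that NAT step is typically a masquerade rule. A minimal sketch, assuming eth0 is the public-facing interface:

#Rewrite the source address of outgoing packets to the gateway's public address
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE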

Public IP addresses are those addresses that are advertised on the public network (the Internet).

Tuesday, May 26, 2015

What would happen if we type "mv *" in the command line?

What will happen if we type the command mv * ?

To check it, I went to a folder containing the following entries

$ ls -l
total 24
-rw-rw-r-- 1 xxxx yyyy  97 Mar  2 18:51 elb
-rw-rw-r-- 1 xxxx yyyy  96 Mar  6 12:09 elb2
-rw-rw-r-- 1 xxxx yyyy 108 Mar  6 11:49 elb3
-rw-rw-r-- 1 xxxx yyyy 132 Mar  6 11:53 elb4
-rw-rw-r-- 1 xxxx yyyy 128 Mar  6 11:56 elb5
-rw-rw-r-- 1 xxxx yyyy  10 May 24 16:20 host.txt

$ mv *
mv: target `host.txt' is not a directory

To see what the above command mv * really did, I ran the following command

$ echo mv *
mv elb elb2 elb3 elb4 elb5 host.txt

So the shell basically expands * to all the file names in the directory before calling the mv command.

So mv sees the last argument as the destination; in this case it is host.txt, which is a file and not a directory. The reason mv treats the last argument as the destination directory (when there are multiple sources) is spelled out in the man page for mv

DESCRIPTION
       Rename SOURCE to DEST, or move SOURCE(s) to DIRECTORY.

More info about mv command can be found from

$ info coreutils 'mv invocation'

Monday, May 25, 2015

-bash: man: command not found

When I wanted to find the man page for a command, I was presented with the message

-bash: man: command not found

The reason is that the CentOS minimal install does not include the man pages.

So install the man pages using

yum -y install man

How to create a file named "-f" and delete the same?

Say, I want to create a file named -f

It can be done using the option --

vi -- -f

Similarly, the file -f can be deleted using the -- option with the rm command

rm -- -f
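
Another way, which avoids the -- option, is to refer to the file with an explicit path so it cannot be parsed as an option:

rm ./-f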

Wednesday, May 20, 2015

Difference between TRUNCATE, DELETE and DROP commands in database

The DELETE command is used to remove rows from a table. A WHERE clause can be used to remove only some rows. If no WHERE condition is specified, all rows will be removed.

TRUNCATE removes all rows from a table. The operation cannot be rolled back and no triggers will be fired. As such, TRUNCATE is faster and doesn't use as much undo space as a DELETE.

The DROP command removes a table from the database. All the table's rows, indexes and privileges will also be removed. No DML triggers will be fired. The operation cannot be rolled back.


DROP and TRUNCATE are DDL commands, whereas DELETE is a DML command. Therefore DELETE operations can be rolled back (undone), while DROP and TRUNCATE operations cannot be rolled back.


TRUNCATE is much faster than DELETE.

Reason: When you issue a DELETE, the affected rows are first copied into the rollback (undo) tablespace, and only then is the delete performed. That is why, if you ROLLBACK after deleting from a table, you get the data back (the system restores it from the rollback tablespace), and it is also why the whole process takes time.
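
A quick illustration of the three commands (the table and column names are made up):

-- Remove only the matching rows; DML, can be rolled back
DELETE FROM employees WHERE dept = 'HR';
ROLLBACK;

-- Remove all rows but keep the table; DDL, cannot be rolled back
TRUNCATE TABLE employees;

-- Remove the table itself along with its rows, indexes and privileges
DROP TABLE employees;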


Tuesday, April 28, 2015

Perl : Using awk type field matching in Perl

Suppose in the Apache log file we have an entry like the following

192.168.1.212 - - [24/Apr/2015:14:28:46 +0900] "POST / HTTP/1.0" 404 974 "-" "-" - 444

With the -a flag, perl splits each input line on whitespace into the @F array, much like awk.
404 is the HTTP status code. Since @F is zero-indexed, it is $F[8] (the ninth whitespace-separated field, $9 in awk).

From the log file, to print only the lines with HTTP status 404

zcat /usr/local/apache/logs/access_log.20150424.gz | perl -lane 'print if $F[8] =~ /404/'

Few flags that make Perl more awk like, with field separators

-l makes each print statement output a record separator that is the same as input record separator (newline by default).
-Fpattern is used to specify input field separator, much like awk's -F option.

-a turns on the autosplit mode, so input fields are placed into @F array

A very common example is viewing fields in /etc/passwd

awk -F':' '{ print $1 }' /etc/passwd

perl -F':' -lane 'print $F[0]' /etc/passwd

Perl fields are $F[0], $F[1], ...
awk fields are $1, $2, ...

To match the whole input line
awk - $0
Perl - $_

For more information refer
http://lifecs.likai.org/2008/10/using-perl-like-awk-and-sed.html




SSH : Pass Perl one-liner as argument to ssh

To pass a perl one-liner as an argument to the ssh command, a quoted here-document can be used (quoting the delimiter stops the local shell from expanding $_)

ssh server_ip_address << 'HERE'
zcat /usr/local/apache/logs/access_log.20150424.gz | perl -ne 'print if $_ =~ /" 404/'
HERE
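
The same one-liner can also be passed as an ordinary quoted argument; the backslashes (\$_ and \") stop the local shell from expanding those characters before they reach the remote shell:

ssh server_ip_address "zcat /usr/local/apache/logs/access_log.20150424.gz | perl -ne 'print if \$_ =~ /\" 404/'"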


Monday, April 27, 2015

Git : How to keep local repository in sync with remote repository?

To keep our local repository in sync with the remote repository by fetching the updates from it, run the following on our local repo machine

git checkout master
git fetch origin
git merge origin/master
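
The fetch-then-merge pair above is exactly what git pull does in a single step:

git pull origin master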

In Detail
=========
$ git branch -r                                                                                                            
  origin/HEAD -> origin/master
  origin/blue-feature
  origin/master

Here, "origin" refers to the remote central repo

$ git fetch origin
testuser@localhost's password:
remote: Counting objects: 5, done.                                                                                                                                      
remote: Compressing objects: 100% (3/3), done.                                                                                                                          
remote: Total 3 (delta 0), reused 0 (delta 0)                                                                                                                           
Unpacking objects: 100% (3/3), done.
From ssh://localhost/home/testuser/testgit/remote-repo
   854dc70..1d80eb3  master     -> origin/master

git fetch doesn't touch your local working tree at all, so it gives you a little breathing space to decide what you want to do next. To actually bring the changes from the remote branch into your working tree, you have to do a git merge.

Before merging the remote branch into our local repository, we can see the differences between the local repo and the remote repo

$ git diff master origin/master                                                                                            
diff --git a/third.txt b/third.txt
index 6cfb7aa..6b22503 100644
--- a/third.txt
+++ b/third.txt
@@ -1,3 +1,5 @@
-thirs file
+this file
 saturday started work
 sunday let us see
+again sunday
+one more line

Now merge the remote repo "origin/master" into our local repo

$ git merge origin/master                                                                                                  
Updating 0b5c193..1d80eb3
Fast-forward
 third.txt |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

Check whether the update from the remote repo is reflected in the local repo by checking a file that has the changes. In my case, it was a file named "third.txt"

$ cat third.txt 
this file
saturday started work
sunday let us see
again sunday
one more line

Saturday, March 14, 2015

Git: Create Local and Remote repo in same host - Linux

As part of learning Git, I wanted to set up a local and a remote repo on the same machine. Since I am a newbie, there may be mistakes, or better approaches to do the same things.

First I started with creating a local repo

Local Repo

$ mkdir my-first-piece-of-software
$ cd my-first-piece-of-software/
$ git init


Create a file called today.txt

$ git status

$ git add today.txt

$ git status

$ git commit -m "My first git push"

Let us create a new branch called blue-feature. A branch is nothing but a context (it is not a separate directory or anything like that).

$ git checkout -b blue-feature

Create a file called third.txt

$ git branch

Switch to master branch
$ git checkout master

$ git branch

When you run the "ls" command, the file third.txt will not be seen.

Merge the branch blue-feature with the master branch

$ git branch

Now we are in the master branch
$ git merge blue-feature

Now the file third.txt will be shown under the context of the master branch

Create a Remote Repo(Central Repo) in the same machine


$ cd ~

Don't forget to add the .git extension

$ mkdir remote-repo.git

$ cd remote-repo.git

$ git init --bare

Come back to Local Repo in the same machine


$ cd /home/testuser/testgit/my-first-piece-of-software
$ git checkout master

$ git branch -r

$ git remote add  origin ssh://testuser@localhost/home/testuser/testgit/remote-repo.git

'git remote add' means to add a reference to the remote repository.
'origin' is a name to refer to the remote repository

$ git branch -r

$ git commit
# On branch master
nothing to commit (working directory clean)

$ git remote show origin
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is a1:2b:b6:31:4f:a9:e5:3e:b0:70:70:f7:89:30:23:b6.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
testuser@localhost's password:
* remote origin
  Fetch URL: ssh://testuser@localhost/home/testuser/testgit/remote-repo.git
  Push  URL: ssh://testuser@localhost/home/testuser/testgit/remote-repo.git
  HEAD branch: (unknown)

Now push to publish the "commits" to the remote repository, for master branch
$ git push origin master
testuser@localhost's password:
Counting objects: 17, done.
Compressing objects: 100% (11/11), done.
Writing objects: 100% (17/17), 1.63 KiB, done.
Total 17 (delta 1), reused 0 (delta 0)
To ssh://testuser@localhost/home/testuser/testgit/remote-repo.git
 * [new branch]      master -> master

Show the remote repository branches
$ git branch -r
  origin/master

Show the local repository branches 
$ git branch
  blue-feature
  green-feature
* master
  red-feature

Switch to local repository branch "blue-feature"
$ git checkout blue-feature
Switched to branch 'blue-feature'

$ git branch
* blue-feature
  green-feature
  master
  red-feature

$ git remote show
origin

$ git push origin blue-feature
testuser@localhost's password:
Total 0 (delta 0), reused 0 (delta 0)
To ssh://testuser@localhost/home/testuser/testgit/remote-repo.git
 * [new branch]      blue-feature -> blue-feature

Show remote repository branches
$ git branch -r
  origin/blue-feature
  origin/master


$ git ls-remote --heads origin
testuser@localhost's password:
0b5c193fbca660b28f92d94b8597689ae4ce1d9d        refs/heads/blue-feature
0b5c193fbca660b28f92d94b8597689ae4ce1d9d        refs/heads/master

$ git remote show origin
testuser@localhost's password:
* remote origin
  Fetch URL: ssh://testuser@localhost/home/testuser/testgit/remote-repo.git
  Push  URL: ssh://testuser@localhost/home/testuser/testgit/remote-repo.git
  HEAD branch (remote HEAD is ambiguous, may be one of the following):
    blue-feature
    master
  Remote branches:
    blue-feature tracked
    master       tracked
  Local refs configured for 'git push':
    blue-feature pushes to blue-feature (up to date)
    master       pushes to master       (up to date)

$ git branch -r
  origin/blue-feature
  origin/master

Now Clone the remote repo


$ git clone ssh://testuser@localhost/home/testuser/testgit/remote-repo.git