Saturday, July 21, 2012

SNMP


SNMP (Simple Network Management Protocol) is an application-layer protocol in the TCP/IP suite that facilitates the exchange of management information between network devices. SNMP is the industry-standard way to gather information from hosts across a network.

SNMP works on a standard client/server model. The SNMP client (or agent) sends a request across the network to a host running an SNMP server. The SNMP server, snmpd, gathers the requested information from the local system and returns it to the client/agent. Each SNMP server has a list of pieces of information (objects) it can extract from the local computer. This list is arranged in a hierarchical structure called the Management Information Base (MIB) tree. So, basically, the SNMP agent/client accesses the MIB on the server to fetch an object. An object corresponds to a particular piece of information about the device (server), for example swap usage or uptime. An SNMP agent can also send a request to make changes on the SNMP server.

The objects on a server are fetched and managed by the client/agent using the following four fundamental operations: Get, GetNext, Set, and Trap.
  1. The client or agent sends a Get command to fetch an object from the host running the snmpd server. This object must be a leaf node in the MIB.
  2. The client or agent can recursively retrieve objects from the host running the snmpd server using the GetNext command. GetNext operates on a branch of the MIB.
  3. The client or agent can control the server by using the Set command to change the value of an object.
  4. If the server needs to notify the client/agent of some event, the server can issue a Trap command to pass the message to the client/agent.
A system can function as a client/agent, a server, or both.
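These operations map onto the net-snmp command-line tools. A minimal sketch with a placeholder host and community string (the commands are echoed as a dry run; drop the echo to actually issue the requests):

```shell
host=192.168.2.101   # placeholder managed host
community=public     # placeholder community string

echo snmpget     -v1 -c "$community" "$host" sysUpTime.0            # Get: fetch one leaf object
echo snmpgetnext -v1 -c "$community" "$host" system                 # GetNext: step through a branch
echo snmpset     -v1 -c "$community" "$host" sysContact.0 s admin   # Set: change an object's value
# Traps go the other way: the server emits them and snmptrapd receives them
```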

SNMP comes in three versions: v1, v2, and v3.

SNMP v1 relies on a simple string, called the community name, to provide security. Two community names are used: public and private. In a default configuration, the public community provides read access to a managed device while the private community allows read-write access. All information exchanged between the client/agent and the server is sent in clear text. If you must use SNMP v1, a simple security measure is to use alternative names in place of public and private.

SNMP v2 suffered from inconsistent implementations, so it is skipped here.

SNMP v3 provides three very important security features:
  1. Usernames, to audit SNMP connections made to the server
  2. Passwords, to allow authenticated access to the server
  3. Encryption, allowing data to be exchanged between the server and the client/agent securely

Management Information Base(MIB)

Each object in a MIB tree is identified by a name or a number. Since the MIB is a tree structure (like the Unix filesystem), each object can be reached by following a path that is unique to that object (just like a Unix path).


Consider the following example of a MIB object. A MIB object's name is its path in the MIB tree, as follows:

  • interfaces.ifTable.ifEntry.ifOutErrors.1
  • interfaces, means that we are looking at the network interfaces on the system(network cards, parallel ports, and so on).
  • ifTable, is the interface table or the list of interfaces on the system
  • ifEntry, shows one particular interface
  • ifOutErrors, means that we are looking at the outbound errors on this particular interface
  • 1, means that we are interested in interface number 1.
MIB objects can also be expressed as numbers, and the preceding example translates to numbers as follows:
  •  .1.3.6.1.2.1.2.2.1.20.1
Each number along the path is also called an arc.

Expressed as words, the MIB object is specified as five terms separated by periods. Expressed as numbers, it has eleven. Why the difference?
The numerical form is longer because it includes the default prefix .1.3.6.1.2.1 (.iso.org.dod.internet.mgmt.mib-2). Almost all MIB objects will have this leading string.
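The prefix can be verified by pasting the arcs together; a small sketch using the subtree numbers from the example above:

```shell
# implied prefix: .iso.org.dod.internet.mgmt.mib-2
prefix=".1.3.6.1.2.1"

# interfaces(2) . ifTable(2) . ifEntry(1) . ifOutErrors(20) . instance 1
oid="$prefix.2.2.1.20.1"
echo "$oid"   # .1.3.6.1.2.1.2.2.1.20.1
```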

Setting up snmp client

Check if net-snmp-utils package is installed

# yum list all | grep snmp

If not installed, then install
# yum search snmp
# yum install net-snmp-utils

Setting up snmp server

# yum install net-snmp
# yum install net-snmp-utils

Finding MIB object

  • Tree browse
    snmptranslate -TB '.*memory.*'
  • Output numerical
    snmptranslate -On HOST-RESOURCES-MIB::hrMemorySize
  • Tree print with Output Full
    snmptranslate -Tp -Of .1.3.6.1.2.1.25

Using SNMP v1 for queries

From my host 192.168.1.3, suppose I want to query the objects on 192.168.2.101 using the "public" community string of SNMP v1.

Using the snmpwalk command with the "public" community string, I can get a list of the objects I can access on 192.168.2.101:

# snmpwalk -v1 -c public 192.168.2.101

Instead of the community name "public", you can use your own community string, say "myu". To set your own community string on 192.168.2.101, do the following:
  1. Open /etc/snmp/snmpd.conf in 192.168.2.101
  2. Add the following line at end of the file -  rocommunity myu
  3. service snmpd reload
Now obtain the contents of the MIB tree on 192.168.2.101 using the community string "myu" by running the following command from any host (here, 192.168.1.3):
 # snmpwalk -v1 -c myu 192.168.2.101

To get uptime of the host, 192.168.2.101, run the following
# snmpget -v1 -c myu 192.168.2.101  HOST-RESOURCES-MIB::hrSystemUptime.0
HOST-RESOURCES-MIB::hrSystemUptime.0 = Timeticks: (272328) 0:45:23.28

To get the numerical format of HOST-RESOURCES-MIB::hrSystemUptime.0
# snmptranslate -On HOST-RESOURCES-MIB::hrSystemUptime.0
.1.3.6.1.2.1.25.1.1.0

# snmpget -v1 -c myu 192.168.2.101 .1.3.6.1.2.1.25.1.1.0
HOST-RESOURCES-MIB::hrSystemUptime.0 = Timeticks: (59650) 0:09:56.50

To avoid typing "-v1 -c myu" with each command, do the following on the client/agent node (here, 192.168.1.3):
 Create a ~/.snmp/snmp.conf file with the following lines
   
 defVersion    1
 defCommunity  myu

Now run the snmpwalk or snmpget command without "-v1 -c myu" from the client node (192.168.1.3), as follows:

$ snmpwalk 192.168.2.101
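The same defaults file can be created non-interactively with a here-document (community "myu" as above):

```shell
# write per-user snmp defaults so -v1 -c myu can be omitted
mkdir -p "$HOME/.snmp"
cat > "$HOME/.snmp/snmp.conf" <<'EOF'
defVersion    1
defCommunity  myu
EOF
```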

Using snmp v3 for queries

SNMP v3 uses the User-based Security Model (USM) to control access to the managed device's MIB. A username (securityName) must be provided as part of the SNMP request, along with a password associated with that username. On CentOS, the username and password for an SNMP v3 user are kept in the file /var/lib/net-snmp/snmpd.conf. The information stored there is encrypted so that it is readable only by the snmpd daemon. Passwords can be stored as either SHA or MD5 hashes and must be at least 8 characters long.

SNMP v3 lets you choose a security level combining hashed authentication (auth) and encrypted data privacy (priv):
  1. If authentication alone is needed, choose authNoPriv.
  2. If neither is needed, choose noAuthNoPriv (privacy always requires authentication, so there is no priv-only level).
  3. If both authentication and privacy are needed, choose authPriv.
On the machine where the snmpd server is running (say, 192.168.2.101), create an SNMP user named "snmpusr" with "snmppasswd" as both the password and the shared secret, using the net-snmp-create-v3-user command.
We use an MD5 hash for the password and DES for data encryption.
The snmpd service must be stopped before creating the user.

  1. service snmpd stop
  2. net-snmp-create-v3-user -A snmppasswd -X snmppasswd -a MD5 -x DES snmpusr
     The command reports that it is adding the following line to /var/lib/net-snmp/snmpd.conf:
       createUser snmpusr MD5 "snmppasswd" DES snmppasswd
     and the following line to /etc/snmp/snmpd.conf:
       rwuser snmpusr
  3. service snmpd start

Now let us try an snmpwalk as the SNMP v3 user from the client machine (the client can run on the same machine as the server, too):

# snmpwalk -v 3 -u snmpusr -l authpriv -a MD5 -x DES -A snmppasswd -X snmppasswd 192.168.2.101

To simplify the above command, update ~/.snmp/snmp.conf on the client as follows:

defVersion         3
defSecurityName    snmpusr
defSecurityLevel   authPriv
defAuthType        MD5
defPrivType        DES
defAuthPassphrase  snmppasswd
defPrivPassphrase  snmppasswd

MIB paths of interest for query

CPU  Statistics

        Load
               1 minute Load: .1.3.6.1.4.1.2021.10.1.3.1
               5 minute Load: .1.3.6.1.4.1.2021.10.1.3.2
               15 minute Load: .1.3.6.1.4.1.2021.10.1.3.3

       CPU
               percentage of user CPU time:    .1.3.6.1.4.1.2021.11.9.0
               raw user CPU time:              .1.3.6.1.4.1.2021.11.50.0
               percentage of system CPU time:  .1.3.6.1.4.1.2021.11.10.0
               raw system CPU time:            .1.3.6.1.4.1.2021.11.52.0
               percentage of idle CPU time:    .1.3.6.1.4.1.2021.11.11.0
               raw idle CPU time:              .1.3.6.1.4.1.2021.11.53.0
               raw nice CPU time:              .1.3.6.1.4.1.2021.11.51.0

Memory Statistics

               Total Swap Size:                .1.3.6.1.4.1.2021.4.3.0
               Available Swap Space:         .1.3.6.1.4.1.2021.4.4.0
               Total RAM in machine:          .1.3.6.1.4.1.2021.4.5.0
               Total RAM used:                  .1.3.6.1.4.1.2021.4.6.0
               Total RAM Free:                   .1.3.6.1.4.1.2021.4.11.0
               Total RAM Shared:                .1.3.6.1.4.1.2021.4.13.0
               Total RAM Buffered:              .1.3.6.1.4.1.2021.4.14.0
               Total Cached Memory:           .1.3.6.1.4.1.2021.4.15.0

Disk Statistics

       Disk monitoring must first be enabled in snmpd.conf. Add either of the following (the first assumes a machine with a single '/' partition):

                                disk    /       100000

       or, to cover all partitions and disks:

                                includeAllDisks 10%

       The OIDs are as follows

               Path where the disk is mounted:                 .1.3.6.1.4.1.2021.9.1.2.1
               Path of the device for the partition:            .1.3.6.1.4.1.2021.9.1.3.1
               Total size of the disk/partition (kBytes):      .1.3.6.1.4.1.2021.9.1.6.1
               Available space on the disk:                      .1.3.6.1.4.1.2021.9.1.7.1
               Used space on the disk:                           .1.3.6.1.4.1.2021.9.1.8.1
               Percentage of space used on disk:             .1.3.6.1.4.1.2021.9.1.9.1
               Percentage of inodes used on disk:            .1.3.6.1.4.1.2021.9.1.10.1
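As a sketch, the load-average OIDs above can be polled in a loop (host and community are placeholders; the snmpget commands are echoed as a dry run, so drop the echo to run them for real):

```shell
host=192.168.2.101
community=myu

# 1-, 5- and 15-minute load averages from the UCD-SNMP tree
for oid in .1.3.6.1.4.1.2021.10.1.3.1 \
           .1.3.6.1.4.1.2021.10.1.3.2 \
           .1.3.6.1.4.1.2021.10.1.3.3
do
    echo snmpget -v1 -c "$community" "$host" "$oid"
done
```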

OpenVZ HowTo

To install OpenVZ, an OS must be installed first. So CentOS is installed on the bare machine, followed by the OpenVZ installation.

OpenVZ setup involves two basic steps:
  1. Installing the OpenVZ kernel and booting into it
  2. Downloading a precreated OS template of the distribution of your choice (CentOS, Debian, Ubuntu, etc.) and installing it

OpenVZ kernel installation

Get the OpenVZ repository. The repository has the openvz kernels
# cd /etc/yum.repos.d/
# wget http://download.openvz.org/openvz.repo
# rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ
# yum update

# yum search vzkernel
======N/S Matched: vzkernel =============
vzkernel.i686 : The Linux kernel
vzkernel.x86_64 : The Linux kernel
vzkernel-devel.i686 : Development package for building kernel modules to match
                    : the kernel
vzkernel-devel.x86_64 : Development package for building kernel modules to match
                      : the kernel
vzkernel-firmware.noarch : Firmware files used by the Linux kernel
vzkernel-headers.i686 : Header files for the Linux kernel for use by glibc
vzkernel-headers.x86_64 : Header files for the Linux kernel for use by glibc

If the machine is of x86_64 architecture and the installed base OS (CentOS) is 64-bit, install vzkernel.x86_64:

# yum install vzkernel.x86_64 vzkernel-devel.x86_64 vzkernel-headers.x86_64

Check /boot/grub/menu.lst. It must have an entry for the new kernel:

title OpenVZ (2.6.32-042stab057.1)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-042stab057.1 ro root=UUID=fffff7aa-57b8-40aa-baa4-588c4eff7651 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD quiet SYSFONT=latarcyrheb-sun16 rhgb crashkernel=auto  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM
        initrd /initramfs-2.6.32-042stab057.1.img

Install the OpenVZ user-space tools:
# yum install vzctl vzquota

Ensure /etc/sysctl.conf has the following entries

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=1
net.ipv4.icmp_echo_ignore_broadcasts=1 

net.ipv4.conf.default.forwarding=1
net.ipv4.conf.default.proxy_arp = 0
kernel.sysrq = 1
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.eth0.proxy_arp=1

Apply the new kernel settings:
# sysctl -p

Disable SELinux. In the file /etc/selinux/config, set
SELINUX=disabled

Reboot the machine
# shutdown -r now

In the GRUB menu, a new kernel will appear (2.6.32-042stab057.1, in my case). Boot this kernel.

Once booted, check that a new network interface, venet0, exists:

# ifconfig venet0
venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet6 addr: fe80::1/128 Scope:Link
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:374 errors:0 dropped:0 overruns:0 frame:0
          TX packets:454 errors:0 dropped:6 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:37637 (36.7 KiB)  TX bytes:39793 (38.8 KiB)

Also, check that the vz service is running:
# /etc/init.d/vz status
OpenVZ is running...

Installation of VPS with OS of our choice

For creating virtual machines (VPSs) with OpenVZ, we need templates for the distributions (CentOS, Debian, Ubuntu) that we want to install.

The precreated distribution templates can be downloaded from http://wiki.openvz.org/Download/template/precreated

Download the precreated templates of your choice and store them under /vz/template/cache:

# wget http://download.openvz.org/template/precreated/debian-6.0-x86_64.tar.gz -P /vz/template/cache
# wget http://download.openvz.org/template/precreated/centos-6-x86_64.tar.gz -P /vz/template/cache

I have downloaded the precreated templates of 64-bit CentOS and 64-bit Debian.


To set up a VPS from the CentOS template downloaded above, run the following command:

# vzctl create 101 --ostemplate centos-6-x86_64
Creating container private area (centos-6-x86_64)
Performing postcreate actions
CT configuration saved to /etc/vz/conf/101.conf
Container private area was created

Here, 101 is the container ID (CTID) of the newly created VPS.


To set a hostname and IP address for the vm, run:
# vzctl set 101 --hostname centos32 --save
# vzctl set 101 --ipadd 192.168.2.101 --save
The ipaddress 192.168.2.101 is assigned to venet0:0

The above steps can be combined into a single command:
# vzctl create 101 --ostemplate  centos-6-x86 --ipadd 192.168.2.101 --hostname centos32 

To get a list of all vms and their statuses, run
# vzlist -a

After the creation, initialize the created VPS via:
# vzctl start 101
Starting container ...
Container is mounted
Adding IP address(es): 192.168.2.101
Setting CPU units: 1000
Container start in progress...

If you want to have the vm started at boot, run
# vzctl set 101 --onboot yes --save

You can now enter into the VPS by simply SSH'ing into it or via the following command:
#vzctl enter 101

To set a root password for the vm, execute
#vzctl exec 101 passwd

Suppose we want to add an additional IP address:
# vzctl set 101 --save --ipadd 192.168.2.102
The IP address 192.168.2.102 is assigned to venet0:1

Suppose we want to remove an IP address from the VPS:
# vzctl set 101 --save --ipdel 192.168.2.102

To leave the vm's console, type
# exit

To stop a vm, run
# vzctl stop 101

To restart a vm, run
# vzctl restart 101

To delete a vm from the hard drive (it must be stopped before you can do this), run
# vzctl destroy 101

To find out about the resources allocated to a vm, run
# vzctl exec 101 cat /proc/user_beancounters
Version: 2.5
       uid  resource                     held              maxheld              barrier                limit              failcnt
      101:  kmemsize                  3409952              4366336  9223372036854775807  9223372036854775807                    0
            lockedpages                     0                    0  9223372036854775807  9223372036854775807                    0
            privvmpages                 10732                11784  9223372036854775807  9223372036854775807                    0
            shmpages                      129                  129  9223372036854775807  9223372036854775807                    0
            dummy                           0                    0                    0                    0                    0
            numproc                        17                   36  9223372036854775807  9223372036854775807                    0
            physpages                    5378                 6261                    0                65536                    0
            vmguarpages                     0                    0  9223372036854775807  9223372036854775807                    0
            oomguarpages                 1566                 1566  9223372036854775807  9223372036854775807                    0
            numtcpsock                      4                    4  9223372036854775807  9223372036854775807                    0
            numflock                        4                    5  9223372036854775807  9223372036854775807                    0
            numpty                          0                    1  9223372036854775807  9223372036854775807                    0
            numsiginfo                      0                   27  9223372036854775807  9223372036854775807                    0
            tcpsndbuf                   69760                69760  9223372036854775807  9223372036854775807                    0
            tcprcvbuf                   65536                65536  9223372036854775807  9223372036854775807                    0
            othersockbuf                 4624                59320  9223372036854775807  9223372036854775807                    0
            dgramrcvbuf                     0                 4360  9223372036854775807  9223372036854775807                    0
            numothersock                   30                   48  9223372036854775807  9223372036854775807                    0
            dcachesize                1341986              1363836  9223372036854775807  9223372036854775807                    0
            numfile                       319                  390  9223372036854775807  9223372036854775807                    0
            dummy                           0                    0                    0                    0                    0
            dummy                           0                    0                    0                    0                    0
            dummy                           0                    0                    0                    0                    0
            numiptent                      20                   20  9223372036854775807  9223372036854775807                    0

The failcnt column is very important; it should contain only zeros. If it doesn't, the VM needs more resources than are currently allocated to it. Open the VM's configuration file in /etc/vz/conf, raise the appropriate resource, and then restart the VM.
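A small sketch of scanning that output for nonzero failcnt values; the sample lines below are made up for illustration, and on a live node you would pipe in `vzctl exec 101 cat /proc/user_beancounters` instead:

```shell
# Print every counter line whose last column (failcnt) is nonzero,
# skipping the "Version" line and the column header.
check_failcnt() {
    awk 'NR > 2 && $NF + 0 > 0 { print "failcnt nonzero:", $0 }'
}

# Illustrative sample input (not real beancounter values)
printf '%s\n' \
  'Version: 2.5' \
  '   uid  resource   held  maxheld  barrier  limit  failcnt' \
  '  101:  kmemsize   3409952  4366336  9999  9999  0' \
  '        numtcpsock 4  4  9999  9999  3' \
  | check_failcnt
```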


VPS disk space

By default, each VPS created is allocated 2GB disk space and 200000 inodes
# vzctl exec 101 df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/simfs            2.0G  570M  1.5G  28% /
none                  128M  4.0K  128M   1% /dev
none                  128M     0  128M   0% /dev/shm

# vzctl exec 101 df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/simfs            200000   21507  178493   11% /
none                   32768     151   32617    1% /dev
none                   32768       1   32767    1% /dev/shm

To increase the available disk space from the default 2GB to something more useful, like 10GB:

This does not immediately consume 10GB of space for the container, but allocates a maximum of 10GB of hard drive space to it.

# vzctl set 101 --diskspace 10G:11G --save
CT configuration saved to /etc/vz/conf/101.conf

The above command increases the default 2GB of drive space to a barrier of 10GB and a maximum limit of 11GB. The upper limit allows for some grace: the permitted disk space is 10GB, but if the container exceeds it, it is not cut off from the resource until the 11GB limit is hit. This gives the container a 1GB "buffer."

# vzctl exec 101 df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/simfs             10G  570M  9.5G   6% /
none                  128M  4.0K  128M   1% /dev
none                  128M     0  128M   0% /dev/shm

There are two ways to change settings for containers.
  • The first is using vzctl as above (remember to use the --save option to make the changes persistent).
  • The second is to edit the configuration file for the container. For a container with a CTID of 101, the file is /etc/sysconfig/vz-scripts/101.conf. This file can be used to change the container's options and also to see what the existing configuration settings are.

Connecting VPS to Internet

On the main node (not the VPS) on which the VPS is running, run the following command:
# iptables -L

If an entry like the following is observed:
REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited

then add these two rules

iptables -A FORWARD -s xxx.xxx.xxx.xxx/xx -j ACCEPT
iptables -A FORWARD -d xxx.xxx.xxx.xxx/xx -j ACCEPT

NOTE: For example, if the VPS IP address range is 192.168.2.1 - 192.168.2.254, xxx.xxx.xxx.xxx/xx will be 192.168.2.0/24. So we add:

# iptables -A FORWARD -s 192.168.2.0/24 -j ACCEPT
# iptables -A FORWARD -d 192.168.2.0/24 -j ACCEPT

Make sure that these two rules are placed above the rule

"REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited"

(on a running chain, use "iptables -I FORWARD 1 ..." instead of -A to insert at the top). To make the rules persistent, edit /etc/sysconfig/iptables and restart the iptables service (service iptables restart). Now the rules will be listed as follows:

#iptables -L
...
ACCEPT     all  --  192.168.2.0/24       anywhere
ACCEPT     all  --  anywhere             192.168.2.0/24
REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited

Add a MASQUERADE rule to the POSTROUTING chain of the nat table (eth1 is the host's outbound interface here):
# iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

Now set the nameserver of main node as the nameserver for the VPS too
# vzctl set 101 --nameserver 192.168.1.1 --save
CT configuration saved to /etc/vz/conf/101.conf

Friday, July 20, 2012

Enable ssh server in MacBook

To log in to your MacBook through SSH, the SSH server needs to be enabled on the MacBook.
It can be enabled as follows:

1) Go to System Preferences ----->  Internet & Wireless -----> Sharing
2) Check the Remote Login box
3) The SSH server is now enabled on your MacBook. You can even set access restrictions based on users.
4) Click the lock at the bottom to prevent further changes

Monday, July 9, 2012

Creating LVM in single disk in CentOS


Logical volume management is a widely used technique for deploying logical rather than physical storage. With LVM, "logical" partitions can span multiple physical hard drives and can be resized (unlike traditional ext3/ext4 "raw" partitions).

Say I have a 160GB hard disk and want to create LVM on this disk. Here is the way to go.

LVM (Logical Volume Management) makes use of the device-mapper feature of the Linux kernel to provide a system of partitions that is independent of the underlying disk's layout.

So first load the device mapper module.

# modprobe dm-mod
# lsmod | grep dm_mod


Creating logical volume partitions involves three steps:
  1. Creating Physical Volumes
  2. Forming Volume Group using the Physical Volumes
  3. Creating Logical Volume partitions on the Volume Group
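In command form, the three steps can be sketched as follows (device and volume-group names are the ones used below; the commands are echoed as a dry run, since running them needs root and real devices):

```shell
pv1=/dev/sda5
pv2=/dev/sda6

echo pvcreate "$pv1" "$pv2"                           # 1. create physical volumes
echo vgcreate VolGroup "$pv1" "$pv2"                  # 2. form the volume group
echo lvcreate -L 20G -n logical_volume_home VolGroup  # 3. carve out a logical volume
```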

Step 1 : Create a physical volume


First, we need to create disk partitions of type LVM. I want to create three such partitions and use them like separate disks for the logical volumes. Since 125GB of my 160GB disk is free (not allocated to any earlier partition and with no filesystem written on it), I use that space to create three partitions of roughly 41GB each. fdisk is the tool for this; it creates the raw partitions.

[root@dhcppc1 ~]# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').


Command (m for help): p

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e6ea7


   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          39      307200   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              39        2589    20480000   83  Linux
/dev/sda3            2589        3226     5120000   82  Linux swap / Solaris
/dev/sda4            3226       19457   130380128+   5  Extended


Now create the new partitions:

Command (m for help): n
First cylinder (3226-19457, default 3226):
Using default value 3226
Last cylinder, +cylinders or +size{K,M,G} (3226-19457, default 19457): +41G


Command (m for help): n
First cylinder (8579-19457, default 8579):
Using default value 8579
Last cylinder, +cylinders or +size{K,M,G} (8579-19457, default 19457): +41G


Command (m for help): n
First cylinder (13932-19457, default 13932):
Using default value 13932
Last cylinder, +cylinders or +size{K,M,G} (13932-19457, default 19457):
Using default value 19457


Command (m for help): p


Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e6ea7


   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          39      307200   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              39        2589    20480000   83  Linux
/dev/sda3            2589        3226     5120000   82  Linux swap / Solaris
/dev/sda4            3226       19457   130380128+   5  Extended
/dev/sda5            3226        8578    42994529+  83  Linux
/dev/sda6            8579       13931    42997941   83  Linux
/dev/sda7           13932       19457    44387563+  83  Linux
Partitions 5 (/dev/sda5), 6 (/dev/sda6), and 7 (/dev/sda7) are the newly created raw partitions.
We now need to convert them to the LVM type:

Command (m for help): t
Partition number (1-7): 5
Hex code (type L to list codes): 8e
Changed system type of partition 5 to 8e (Linux LVM)


Command (m for help): t
Partition number (1-7): 6
Hex code (type L to list codes): 8e
Changed system type of partition 6 to 8e (Linux LVM)


Command (m for help): t
Partition number (1-7): 7
Hex code (type L to list codes): 8e
Changed system type of partition 7 to 8e (Linux LVM)


Command (m for help): p


Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e6ea7


   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          39      307200   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              39        2589    20480000   83  Linux
/dev/sda3            2589        3226     5120000   82  Linux swap / Solaris
/dev/sda4            3226       19457   130380128+   5  Extended
/dev/sda5            3226        8578    42994529+  8e  Linux LVM
/dev/sda6            8579       13931    42997941   8e  Linux LVM
/dev/sda7           13932       19457    44387563+  8e  Linux LVM


Finally, press w to write the table to disk and exit:
Command (m for help): w
The partition table has been altered!


Calling ioctl() to re-read partition table.


WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

Now, reboot the system.

Create physical volumes on the newly created raw partitions:

# pvcreate /dev/sda5
# pvcreate /dev/sda6

The pvdisplay command displays all physical volumes on your system.

[root@dhcppc1 ~]# pvdisplay
  "/dev/sda5" is a new physical volume of "41.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sda5
  VG Name
  PV Size               41.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               V0ObJ7-y6E3-kJrv-Zee2-SSOe-46lB-303xuf

  "/dev/sda6" is a new physical volume of "41.01 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sda6
  VG Name
  PV Size               41.01 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               5cDz8n-C43V-juxj-0DTD-2yMD-u5R0-fI4hVB

Step 2 : Create a Volume Group - VolGroup

Using the two physical volumes created in the previous step with the pvcreate command, a volume group can be formed. Let us create a volume group named VolGroup (choose any name you like), for example.

[root@dhcppc1 ~]# vgcreate  VolGroup /dev/sda5 /dev/sda6
  Volume group "VolGroup" successfully created

Use the vgdisplay command to display details about the volume group just created:

[root@dhcppc1 ~]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               82.00 GiB
  PE Size               4.00 MiB
  Total PE              20993
  Alloc PE / Size       0 / 0
  Free  PE / Size       20993 / 82.00 GiB
  VG UUID               ZcdhvM-5Q5O-dffd-CA7e-JuQC-mIqM-SffOpx

Additional PVs can be added to this volume group using the vgextend command

# pvcreate /dev/sda7
  Writing physical volume data to disk "/dev/sda7"
  Physical volume "/dev/sda7" successfully created

# vgextend VolGroup /dev/sda7
  Volume group "VolGroup" successfully extended

# vgdisplay
  --- Volume group ---
  VG Name               VolGroup
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               124.33 GiB
  PE Size               4.00 MiB
  Total PE              31829
  Alloc PE / Size       0 / 0
  Free  PE / Size       31829 / 124.33 GiB
  VG UUID               ZcdhvM-5Q5O-dffd-CA7e-JuQC-mIqM-SffOpx

The newly added PV (/dev/sda7) can be removed from VolGroup with the vgreduce command:

# vgreduce VolGroup /dev/sda7
  Removed "/dev/sda7" from volume group "VolGroup"

# vgdisplay VolGroup
  --- Volume group ---
  VG Name               VolGroup
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               82.00 GiB
  PE Size               4.00 MiB
  Total PE              20993
  Alloc PE / Size       0 / 0
  Free  PE / Size       20993 / 82.00 GiB
  VG UUID               ZcdhvM-5Q5O-dffd-CA7e-JuQC-mIqM-SffOpx
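As a cross-check, the vgdisplay figures are self-consistent: the VG size is just the extent count multiplied by the extent size. Using the numbers reported above:

```shell
# VG Size = Total PE x PE Size: 20993 extents of 4 MiB each
total_mib=$((20993 * 4))
echo "${total_mib} MiB"            # 83972 MiB
# 83972 / 1024 is 82 GiB (and change), which vgdisplay rounds to 82.00 GiB
echo "$((total_mib / 1024)) GiB"   # 82 GiB
```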

Adding the Physical Volume(/dev/sda7) Back

[root@dhcppc1 ~]# vgextend VolGroup /dev/sda7
  Volume group "VolGroup" successfully extended

[root@dhcppc1 ~]# vgdisplay VolGroup
  --- Volume group ---
  VG Name               VolGroup
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               124.33 GiB
  PE Size               4.00 MiB
  Total PE              31829
  Alloc PE / Size       0 / 0
  Free  PE / Size       31829 / 124.33 GiB
  VG UUID               ZcdhvM-5Q5O-dffd-CA7e-JuQC-mIqM-SffOpx

Step 3 : Creating Logical Volume(LV) partitions


On the volume group named VolGroup, I now want to create three logical volumes for vz, music and opt. I create a
  • 20GB linear LV named logical_volume_vz
  • 20GB linear LV named logical_volume_music and
  • 5GB linear LV named logical_volume_opt
from volume group VolGroup using the lvcreate command as follows:

[root@dhcppc1 ~]# lvcreate -L20GB -n logical_volume_vz VolGroup
  Logical volume "logical_volume_vz" created
[root@dhcppc1 ~]# lvcreate -L5GB -n logical_volume_opt VolGroup
  Logical volume "logical_volume_opt" created
[root@dhcppc1 ~]# lvcreate -L20GB -n logical_volume_music VolGroup
  Logical volume "logical_volume_music" created

Display status of Logical Volumes

[root@dhcppc1 ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/VolGroup/logical_volume_vz
  VG Name                VolGroup
  LV UUID                mK0iA2-bYwU-iRFX-bFAN-Ghcg-Q0DO-f1HGVe
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0


  --- Logical volume ---
  LV Name                /dev/VolGroup/logical_volume_opt
  VG Name                VolGroup
  LV UUID                x9aPWe-yGEk-WybC-hJAE-ikKI-ftuc-e3pQRJ
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1


  --- Logical volume ---
  LV Name                /dev/VolGroup/logical_volume_music
  VG Name                VolGroup
  LV UUID                1meVhn-aa8m-RIBH-pWWk-QlXn-Thgl-atXFkN
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

Output a report of Logical Volumes

[root@dhcppc1 ~]# lvs
  LV                   VG       Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  logical_volume_music VolGroup -wi-a- 20.00g
  logical_volume_opt   VolGroup -wi-a-  5.00g
  logical_volume_vz    VolGroup -wi-a- 20.00g
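The three LVs together use 45 GiB of the 124.33 GiB volume group. In extent terms (each PE being 4 MiB, per the vgdisplay output above):

```shell
# Extents consumed by the 20 + 20 + 5 GiB logical volumes
alloc_pe=$(( (20 + 20 + 5) * 1024 / 4 ))
echo "${alloc_pe} PE allocated"        # 11520 PE allocated
# Extents still free out of the VG's 31829 total (about 79.33 GiB)
echo "$((31829 - alloc_pe)) PE free"   # 20309 PE free
```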

Create file systems on the logical volumes

mkfs.ext4 -m 0 /dev/VolGroup/logical_volume_vz
mkfs.ext4 -m 0 /dev/VolGroup/logical_volume_opt
mkfs.ext4 -m 0 /dev/VolGroup/logical_volume_music

The -m option specifies the percentage of blocks reserved for the super-user; set this to 0 if you do not want to reserve any space (the default is 5%).
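To put that 5% in perspective, here is what the default reservation would cost on one of the 20 GiB volumes (plain shell arithmetic, no LVM commands involved):

```shell
# Default -m 5 reservation on a 20 GiB file system
lv_mib=$((20 * 1024))                # 20 GiB expressed in MiB
reserved_mib=$((lv_mib * 5 / 100))   # 5% kept back for the super-user
echo "${reserved_mib} MiB reserved"  # 1024 MiB reserved, i.e. a full 1 GiB
```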

Edit /etc/fstab for mounting

Add an entry for your newly created logical volume into /etc/fstab

/dev/VolGroup/logical_volume_vz      /vz       ext4     defaults      0 2
/dev/VolGroup/logical_volume_music   /music    ext4     defaults      0 2
/dev/VolGroup/logical_volume_opt     /opt      ext4     defaults      0 2
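Before relying on these entries at boot, they can be sanity-checked with a fake mount: util-linux mount's -f flag goes through the motions of mounting everything in /etc/fstab without performing the actual mounts.

```shell
# Dry-run every fstab entry: -f fake, -a all, -v verbose.
# A typo in a device path or mount point surfaces here rather than at boot.
mount -fav
```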

Mount the partitions without rebooting


mkdir -p /vz /music /opt    (Creating mount points; -p skips any that already exist, such as /opt)
mount -a


Displaying the newly mounted partitions

mount
df -h

Extending a logical volume partition

Say now we want to extend the /vz (/dev/VolGroup/logical_volume_vz) partition by 5GB.
An LV can be extended using the lvextend command. You can specify either an absolute size for the extended LV or how much additional storage you want to add to it. For example:

# lvextend -L25G /dev/VolGroup/logical_volume_vz

will extend LV /dev/VolGroup/logical_volume_vz to an absolute size of 25 GB, while

# lvextend -L+5G /dev/VolGroup/logical_volume_vz (this is what I am going to do now)
 Extending logical volume logical_volume_vz to 25.00 GiB
  Logical volume logical_volume_vz successfully resized

will extend LV /dev/VolGroup/logical_volume_vz by an additional 5 GB. Once a logical volume has been extended, the underlying file system has to be expanded to use the additional storage now available on the LV. To resize an ext4 file system offline, the following commands need to be run

umount /vz
e2fsck -f /dev/VolGroup/logical_volume_vz
resize2fs /dev/VolGroup/logical_volume_vz

This will extend the ext4 file system to completely fill the LV, /dev/VolGroup/logical_volume_vz, on which it resides.
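On recent lvm2 releases the extend-and-resize steps can be collapsed into one command: lvextend's -r (--resizefs) option invokes fsadm to grow the file system right after the LV is extended. A sketch of the equivalent one-step resize (check that your lvm2 version supports -r before relying on it):

```shell
# Extend the LV by 5 GiB and grow the ext4 file system in the same step;
# -r / --resizefs hands the file-system resize off to fsadm, replacing
# the separate e2fsck/resize2fs invocations shown above.
lvextend -r -L+5G /dev/VolGroup/logical_volume_vz
```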


Remounting the resized partition
# mount -a

Removing a logical volume from Volume Group

lvdisplay
umount /dev/VolGroup/logical_volume_opt
lvremove /dev/VolGroup/logical_volume_opt