sudoedit.com!

Working with logical volumes (part 2)

In this post I want to cover one of the most commonly used features of lvm: extending a logical volume. If you were following along with the last post, "Working with logical volumes part 1", then you should already have a volume group with a couple of logical volumes attached.

With lvm you can quickly and easily extend a Linux file system on the fly without interrupting any services.

Becoming familiar with our lvm environment

The first thing we should do is become familiar with our current lvm environment. Open your terminal and type sudo pvscan.

The pvscan command will display some basic information about your physical volumes (if you have any). After running pvscan you should see the following output (assuming that you are following along from the previous post).

sudo pvscan    
  PV /dev/sdb    VG vgtest          lvm2 [10.00 GiB / 6.00 GiB free]

From the above we can see a few things. First is the location of our physical volume, /dev/sdb. Next is the volume group it is attached to, vgtest. Last you can see the size: 10 GiB total, with 6 GiB free.

You can get more detailed output with pvdisplay.

sudo pvdisplay
     --- Physical volume ---
    PV Name               /dev/sdb
    VG Name               vgtest
    PV Size               10.00 GiB / not usable 4.00 MiB
    Allocatable           yes
    PE Size               4.00 MiB
    Total PE              2559
    Free PE               1535
    Allocated PE          1024
    PV UUID               tVrpRc-U1yi-IMdP-ONog-jDFh-pjej-VfX9ct
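As a sanity check, the extent counts above line up with the reported sizes: each physical extent (PE) is 4 MiB, so 2559 total extents and 1535 free extents work out to just under 10 GiB and 6 GiB (the 4 MiB gap is the "not usable" space pvdisplay reports). A quick arithmetic sketch, using only the numbers from the output above:

```shell
# PE size is 4 MiB (from the pvdisplay output above)
pe_size_mib=4

# Total PE and Free PE, also from pvdisplay
total_pe=2559
free_pe=1535

# Multiply extents by extent size to get MiB
echo "Total: $((total_pe * pe_size_mib)) MiB"   # 10236 MiB, 4 MiB shy of 10 GiB (10240 MiB)
echo "Free:  $((free_pe * pe_size_mib)) MiB"    # 6140 MiB, just under 6 GiB
```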

Check the state of our current logical volumes by running lvscan.

sudo lvscan

    ACTIVE     '/dev/vgtest/testlv' [2.00 GiB] inherit
    ACTIVE     '/dev/vgtest/omg_testlv' [2.00 GiB] inherit

In this case I have two logical volumes, testlv and omg_testlv, each of which is 2 GiB in size. Once again we can see more information about these volumes by running lvdisplay.

sudo lvdisplay

     --- Logical volume ---
    LV Path                /dev/vgtest/testlv
    LV Name                testlv
    VG Name                vgtest
    LV UUID                zw2JB7-gmmZ-4llC-FLf1-jJhE-B82R-Xz4Cxc
    LV Write Access        read/write
    LV Creation host, time fedora01, 2017-11-01 21:24:33 -0400
    LV Status              available
    # open                 1
    LV Size                2.00 GiB
    Current LE             512
    Segments               1
    Allocation             inherit
    Read ahead sectors     auto
    - currently set to     256
    Block device           253:2
    --- Logical volume ---
    LV Path                /dev/vgtest/omg_testlv
    LV Name                omg_testlv
    VG Name                vgtest
    LV UUID                h6eLKT-Aq5u-UXem-dgtA-OIXs-z901-lNRhNp
    LV Write Access        read/write
    LV Creation host, time fedora01, 2017-11-01 21:25:28 -0400
    LV Status              available
    # open                 1
    LV Size                2.00 GiB
    Current LE             512
    Segments               1
    Allocation             inherit
    Read ahead sectors     auto
    - currently set to     256
    Block device           253:3

Running vgdisplay, much like pvdisplay and lvdisplay, will output detailed information about our volume group.

sudo vgdisplay
    --- Volume group ---
    VG Name               vgtest
    System ID
    Format                lvm2
    Metadata Areas        1
    Metadata Sequence No  3
    VG Access             read/write
    VG Status             resizable
    MAX LV                0
    Cur LV                2
    Open LV               2
    Max PV                0
    Cur PV                1
    Act PV                1
    VG Size               10.00 GiB
    PE Size               4.00 MiB
    Total PE              2559
    Alloc PE / Size       1024 / 4.00 GiB
    Free  PE / Size       1535 / 6.00 GiB
    VG UUID               HkxLMU-ncGp-pKIc-l35e-Fbx0-EU80-Iublex

So, now that we have familiarized ourselves with the lvm environment that we previously set up, we know a few things.

  1. We have one physical volume, /dev/sdb, which is 10 GiB in size.
  2. That physical volume contains the volume group vgtest.
  3. The volume group vgtest contains two logical volumes, testlv and omg_testlv, each of which is 2 GiB in size.
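These numbers are self-consistent, too. Each logical volume is 512 extents ("Current LE" in the lvdisplay output), and vgdisplay's "Alloc PE / Size" of 1024 / 4.00 GiB is exactly the two volumes combined. A quick check with the values from the output above:

```shell
# Current LE per logical volume, from lvdisplay
testlv_le=512
omg_testlv_le=512

# Together they should match vgdisplay's "Alloc PE / Size" of 1024 / 4.00 GiB
alloc_pe=$((testlv_le + omg_testlv_le))
echo "Allocated extents: $alloc_pe"          # 1024
echo "Allocated size: $((alloc_pe * 4)) MiB" # 4096 MiB = 4 GiB
```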

Resize a logical volume

Let's resize one of our logical volumes. Both of our volumes are 2 GiB, but let's make testlv 4 GiB using the lvextend command.

sudo lvextend /dev/vgtest/testlv -L 4G
  Size of logical volume vgtest/testlv changed from 2.00 GiB (512 extents) to 4.00 GiB (1024 extents).
  Logical volume vgtest/testlv successfully resized.

In this case I used the -L option to tell lvm that I wanted the final size of the volume to be 4G. Alternatively, you could use -L +2G to the same effect, or you can use the lowercase -l option to specify extents directly. I suggest you read the man page for lvextend if you plan to use it on a regular basis.
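If you do want to work in raw extents with -l, the conversion is simple: with a 4 MiB PE size, a 4 GiB volume is 1024 extents, which matches the "1024 extents" in the lvextend output above. A sketch of the arithmetic, plus the equivalent invocations (you would run only one of the three):

```shell
# With a 4 MiB PE size, a 4 GiB target works out to 1024 extents:
target_gib=4
pe_size_mib=4
echo $(( target_gib * 1024 / pe_size_mib ))   # 1024 extents

# Equivalent ways to grow a 2 GiB testlv to 4 GiB:
#   sudo lvextend -L 4G /dev/vgtest/testlv     # absolute size
#   sudo lvextend -L +2G /dev/vgtest/testlv    # relative size
#   sudo lvextend -l 1024 /dev/vgtest/testlv   # absolute extent count
```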

Once again run lvscan to verify that your volume has indeed expanded to 4 GiB.

sudo lvscan
ACTIVE            '/dev/vgtest/testlv' [4.00 GiB] inherit
ACTIVE            '/dev/vgtest/omg_testlv' [2.00 GiB] inherit

Also notice that the file system mounted at /testlv has not changed. It is still 2 GB as far as the file system is concerned.

df -h
    Filesystem                     Size  Used Avail Use% Mounted on
    devtmpfs                       937M     0  937M   0% /dev
    tmpfs                          949M     0  949M   0% /dev/shm
    tmpfs                          949M  2.0M  947M   1% /run
    tmpfs                          949M     0  949M   0% /sys/fs/cgroup
    /dev/mapper/fedora-root         26G  6.2G   18G  26% /
    tmpfs                          949M   16K  949M   1% /tmp
    /dev/sda2                      976M  140M  770M  16% /boot
    /dev/sda1                      200M  9.5M  191M   5% /boot/efi
    /dev/mapper/vgtest-testlv      2.0G   35M  2.0G   2% /testlv
    /dev/mapper/vgtest-omg_testlv  2.0G   35M  2.0G   2% /omgtestlv
    tmpfs                          190M   16K  190M   1% /run/user/42
    tmpfs                          190M     0  190M   0% /run/user/1000

Notice the /dev/mapper/vgtest-testlv line. Why didn't the file system size increase?

When you do an extend operation, keep in mind that you must also grow the underlying file system in order to make the additional space usable.

We formatted our file system as xfs in the last tutorial, so we will use xfs_growfs to allow the file system to use the additional storage space. (If you had formatted with ext4, you would use resize2fs instead.)

sudo xfs_growfs /testlv
    meta-data=/dev/mapper/vgtest-testlv isize=512    agcount=4, agsize=131072 blks
    =                       sectsz=4096  attr=2, projid32bit=1
    =                       crc=1        finobt=1 spinodes=0 rmapbt=0
    =                       reflink=0
    data     =                       bsize=4096   blocks=524288, imaxpct=25
    =                       sunit=0      swidth=0 blks
    naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
    log      =internal               bsize=4096   blocks=2560, version=2
    =                       sectsz=4096  sunit=1 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0
    data blocks changed from 524288 to 1048576
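The block counts in the xfs_growfs output confirm the doubling: with the 4096-byte block size (bsize) shown above, 524288 blocks is exactly 2 GiB and 1048576 blocks is exactly 4 GiB.

```shell
block_size=4096   # bsize from the xfs_growfs output above
echo "$(( 524288  * block_size / 1024 / 1024 / 1024 )) GiB"   # 2 GiB (before)
echo "$(( 1048576 * block_size / 1024 / 1024 / 1024 )) GiB"   # 4 GiB (after)
```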

Check the size again with df -h. This time you should notice that /testlv is now 4 GB.

df -h
    Filesystem                     Size  Used Avail Use% Mounted on
    devtmpfs                       937M     0  937M   0% /dev
    tmpfs                          949M     0  949M   0% /dev/shm
    tmpfs                          949M  2.0M  947M   1% /run
    tmpfs                          949M     0  949M   0% /sys/fs/cgroup
    /dev/mapper/fedora-root         26G  6.2G   18G  26% /
    tmpfs                          949M   16K  949M   1% /tmp
    /dev/sda2                      976M  140M  770M  16% /boot
    /dev/sda1                      200M  9.5M  191M   5% /boot/efi
    /dev/mapper/vgtest-testlv      4.0G   37M  4.0G   1% /testlv
    /dev/mapper/vgtest-omg_testlv  2.0G   35M  2.0G   2% /omgtestlv
    tmpfs                          190M   16K  190M   1% /run/user/42

The lvextend command has another useful option: -r will automatically grow the file system as well, without the need to run a separate command afterward. Try it on the omg_testlv volume.

sudo lvextend /dev/vgtest/omg_testlv -L 4G -r
    Size of logical volume vgtest/omg_testlv changed from 2.00 GiB (512 extents) to 4.00 GiB (1024 extents).
    Logical volume vgtest/omg_testlv successfully resized.
    meta-data=/dev/mapper/vgtest-omg_testlv isize=512    agcount=4, agsize=131072 blks
    =                       sectsz=4096  attr=2, projid32bit=1
    =                       crc=1        finobt=1 spinodes=0 rmapbt=0
    =                       reflink=0
    data     =                       bsize=4096   blocks=524288, imaxpct=25
    =                       sunit=0      swidth=0 blks
    naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
    log      =internal               bsize=4096   blocks=2560, version=2
    =                       sectsz=4096  sunit=1 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0
    data blocks changed from 524288 to 1048576

Notice that this time the output of the lvextend command includes the xfs_growfs output. That is because lvm is smart enough to know how to resize common file systems for you, so you only need to run one command to get the desired result.

Once you are done I suggest that you run through the initial familiarization steps again so that you can see the changes that we made in this tutorial. Compare the new output with the old.

In the next post we will add a new disk to the volume group and learn how to move all of our data off of one disk and onto another.

#Linux #lvm