Ubuntu: Messed up partition table



Question:

I have an Ubuntu 14.04 server VM on a Xen server. I have 40 GB of disk space allocated to it, and my partition table looks like this:

df -h
Filesystem                               Size  Used Avail Use% Mounted on
udev                                      16G  4.0K   16G   1% /dev
tmpfs                                    3.1G  712K  3.1G   1% /run
/dev/mapper/QAAutomationServer--vg-root  8.3G  7.1G  797M  91% /
none                                     4.0K     0  4.0K   0% /sys/fs/cgroup
none                                     5.0M     0  5.0M   0% /run/lock
none                                      16G     0   16G   0% /run/shm
none                                     100M     0  100M   0% /run/user
/dev/xvda1                               236M   68M  156M  31% /boot

If you take a closer look, you can see that my root partition has about 8 GB allocated to it and is almost full. /run/shm has 16 GB and /dev has another 16 GB, and both are barely used.

I tried searching Google to find out how to fix this, but every solution I found suggests booting from a live CD and using GParted to manage the partitions. Being a VM on Xen, I can't do that.

Can anyone please help me fix this issue?

Thanks, Kiran

Edit

Output of sudo parted -l:

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/QAAutomationServer--vg-root: 9135MB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  9135MB  9135MB  ext4

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/QAAutomationServer--vg-swap_1: 33.6GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system     Flags
 1      0.00B  33.6GB  33.6GB  linux-swap(v1)

Model: Xen Virtual Block Device (xvd)
Disk /dev/xvda: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
 1      1049kB  256MB   255MB   primary   ext2
 2      257MB   42.9GB  42.7GB  extended
 5      257MB   42.9GB  42.7GB  logical                lvm

Edit2

I did some research on LVM and learned that I can resize the logical volumes using lvextend and lvreduce.

Here is the output of lvdisplay:

lvdisplay

  --- Logical volume ---
  LV Path                /dev/QAAutomationServer-vg/root
  LV Name                root
  VG Name                QAAutomationServer-vg
  LV UUID                ZRnyaa-fDlK-ulAH-2rcv-Haga-lxuU-TB0kqb
  LV Write Access        read/write
  LV Creation host, time QAAutomationServer, 2015-09-04 11:57:16 -0700
  LV Status              available
  # open                 1
  LV Size                8.51 GiB
  Current LE             2178
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

  --- Logical volume ---
  LV Path                /dev/QAAutomationServer-vg/swap_1
  LV Name                swap_1
  VG Name                QAAutomationServer-vg
  LV UUID                QTPf2n-y8CA-FZDL-3xLH-33BX-mZIv-Zx1Jyu
  LV Write Access        read/write
  LV Creation host, time QAAutomationServer, 2015-09-04 11:57:17 -0700
  LV Status              available
  # open                 0
  LV Size                31.25 GiB
  Current LE             8000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

So I am trying lvreduce -L -5g /xyz on the swap volume and then lvextend -L +5g /abc on the root volume.

Once that is done, I am running sudo resize2fs /def on the root volume.

I am not sure this guarantees no loss of data, but I can see that I have additional space on the root partition now.
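Filling in the actual device paths from the lvdisplay output above, the full sequence would presumably look something like this (a sketch only; the swap volume has to be taken offline before shrinking and re-initialised afterwards, and resize2fs only grows the ext4 filesystem once the LV has been extended):

sudo swapoff /dev/QAAutomationServer-vg/swap_1           # stop using swap before shrinking it
sudo lvreduce -L -5g /dev/QAAutomationServer-vg/swap_1   # shrink the swap LV by 5 GB
sudo lvextend -L +5g /dev/QAAutomationServer-vg/root     # grow the root LV by 5 GB
sudo resize2fs /dev/QAAutomationServer-vg/root           # grow the ext4 filesystem to fill the LV
sudo mkswap /dev/QAAutomationServer-vg/swap_1            # re-create the swap signature on the smaller LV
sudo swapon /dev/QAAutomationServer-vg/swap_1            # enable swap again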

df -h
Filesystem                               Size  Used Avail Use% Mounted on
udev                                      16G  4.0K   16G   1% /dev
tmpfs                                    3.1G  712K  3.1G   1% /run
/dev/mapper/QAAutomationServer--vg-root   14G  7.1G  5.6G  57% /
none                                     4.0K     0  4.0K   0% /sys/fs/cgroup
none                                     5.0M     0  5.0M   0% /run/lock
none                                      16G     0   16G   0% /run/shm
none                                     100M     0  100M   0% /run/user
/dev/xvda1                               236M   68M  156M  31% /boot


Solution:1

What you've shown at the top of your question is not your partition table. It's a list of your mounted filesystems.
/run/shm and /dev are not 'real' filesystems. They are virtual filesystems that live in your machine's RAM. They do not use any of your disk space.
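One quick way to confirm this is to list only the RAM-backed mounts (assuming findmnt from util-linux is available, as it is on a stock 14.04 install):

findmnt -t tmpfs,devtmpfs   # shows /dev, /run, /run/shm etc. backed by RAM, not by the disk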
What is really interesting is:

Model: Xen Virtual Block Device (xvd)
Disk /dev/xvda: 42.9GB
[...]
Number  Start   End     Size    Type      File system  Flags
 1      1049kB  256MB   255MB   primary   ext2
 2      257MB   42.9GB  42.7GB  extended
 5      257MB   42.9GB  42.7GB  logical                lvm

Your virtual disk device (~42 GB in size) is divided into two partitions: a primary boot partition (number 1, mounted at /boot) and an extended partition (number 2). Inside the extended partition there is one logical partition (number 5) used for LVM. LVM stands for Logical Volume Manager. It creates virtual block devices that you can use like regular block devices, but can extend or shrink 'on the fly' and build from several physical volumes, which is very handy in some situations.
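If you want to see each LVM layer for yourself, the standard reporting commands give a compact overview (a quick sanity check, not required for the fix; on this machine the physical volume should be /dev/xvda5, the logical partition flagged 'lvm' above):

sudo pvs   # physical volumes backing LVM (here: /dev/xvda5)
sudo vgs   # volume groups built from them (here: QAAutomationServer-vg)
sudo lvs   # logical volumes carved out of the VG (here: root and swap_1)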

Within that LVM setup you have two logical volumes:
First:

/dev/mapper/QAAutomationServer--vg-root: 9135MB  

which is mounted as your root filesystem, and the second:

/dev/mapper/QAAutomationServer--vg-swap_1: 33.6GB  

which is your swap.

As you can see, swap consumes over three quarters of your whole drive.

So my advice is:
1. Turn off swap
2. Delete/shrink the swap volume
3. Expand the root volume
4. Resize the root filesystem (this can probably be done on the 'live', mounted filesystem)
5. Create/keep a swap volume in the remaining space
6. Turn swap back on
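Translated into concrete commands, that would look roughly like this (a sketch only, assuming the volume group and LV names from the question; pick the swap size you actually want):

sudo swapoff -a                                             # 1. turn off swap
sudo lvreduce -L 10g /dev/QAAutomationServer-vg/swap_1      # 2. shrink the swap LV to e.g. 10 GB
sudo lvextend -l +100%FREE /dev/QAAutomationServer-vg/root  # 3. give the freed space to root
sudo resize2fs /dev/QAAutomationServer-vg/root              # 4. grow ext4 on the mounted root filesystem
sudo mkswap /dev/QAAutomationServer-vg/swap_1               # 5. re-create the swap area on the smaller LV
sudo swapon /dev/QAAutomationServer-vg/swap_1               # 6. turn swap back on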

Here you'll find how to deal with LVM volumes: http://www.tecmint.com/extend-and-reduce-lvms-in-linux/


Solution:2

Since I did my initial setup using LVM, I was able to use LVM to resize my logical volumes and then grow the root filesystem.

https://wiki.ubuntu.com/Lvm

lvdisplay - lists all available LVM logical volumes

lvreduce -L -5g /lvmpartition - reduces the selected logical volume by 5 GB

lvextend -L +5g /lvmpartition - extends the selected logical volume by 5 GB
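Applied to the volumes in this question, that presumably translates to something like the following (a sketch; the answer doesn't show the exact commands, the swap volume needs to be disabled and re-initialised around the resize, and resize2fs is run afterwards as described below):

sudo swapoff /dev/QAAutomationServer-vg/swap_1          # stop using swap
sudo lvreduce -L 10g /dev/QAAutomationServer-vg/swap_1  # shrink swap to ~10 GB
sudo lvextend -L +20g /dev/QAAutomationServer-vg/root   # grow root by the freed ~20 GB
sudo mkswap /dev/QAAutomationServer-vg/swap_1           # rebuild the swap signature
sudo swapon /dev/QAAutomationServer-vg/swap_1           # enable swap again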

Since I had about 30 GB allocated to swap, I was able to shrink it to around 10 GB and extend the root logical volume with the resulting ~20 GB of free space.

Here is the output of lvdisplay after the modifications

  --- Logical volume ---
  LV Path                /dev/QAAutomationServer-vg/root
  LV Name                root
  VG Name                QAAutomationServer-vg
  LV UUID                ZRnyaa-fDlK-ulAH-2rcv-Haga-lxuU-TB0kqb
  LV Write Access        read/write
  LV Creation host, time QAAutomationServer, 2015-09-04 11:57:16 -0700
  LV Status              available
  # open                 1
  LV Size                30.51 GiB
  Current LE             7810
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

  --- Logical volume ---
  LV Path                /dev/QAAutomationServer-vg/swap_1
  LV Name                swap_1
  VG Name                QAAutomationServer-vg
  LV UUID                QTPf2n-y8CA-FZDL-3xLH-33BX-mZIv-Zx1Jyu
  LV Write Access        read/write
  LV Creation host, time QAAutomationServer, 2015-09-04 11:57:17 -0700
  LV Status              available
  # open                 0
  LV Size                9.25 GiB
  Current LE             2368
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

Now I ran sudo resize2fs on the root volume, and here is the output of df -h:

df -h
Filesystem                               Size  Used Avail Use% Mounted on
udev                                      16G  4.0K   16G   1% /dev
tmpfs                                    3.1G  712K  3.1G   1% /run
/dev/mapper/QAAutomationServer--vg-root   30G  7.1G   22G  25% /
none                                     4.0K     0  4.0K   0% /sys/fs/cgroup
none                                     5.0M     0  5.0M   0% /run/lock
none                                      16G  4.0K   16G   1% /run/shm
none                                     100M     0  100M   0% /run/user
/dev/xvda1                               236M   68M  156M  31% /boot

This boosted the available space on my root partition. I hope this answer helps someone with a similar problem.

