Ubuntu: Why do I get disk I/O errors booting the 3.2 kernel on a Xen VPS server?


I have a Xen VPS, which I just upgraded to the new LTS release, Ubuntu 12.04 Precise Pangolin.

However, I see this error on booting:

[   12.848076] end_request: I/O error, dev xvda, sector 12841
[   12.848093] end_request: I/O error, dev xvda, sector 12841
[   12.848103] Buffer I/O error on device xvda1, logical block 1605
[   12.848110] lost page write due to I/O error on xvda1
[   12.848129] Aborting journal on device xvda1.

This results in / being mounted read-only. After a reboot:

[    3.087257] EXT3-fs (xvda1): warning: ext3_clear_journal_err: Marking fs in need of filesystem check.
[    3.087677] EXT3-fs (xvda1): recovery complete
[    3.088514] EXT3-fs (xvda1): mounted filesystem with ordered data mode
Begin: Running /scripts/local-bottom ... done.
done.
Begin: Running /scripts/init-bottom ... done.
fsck from util-linux 2.20.1
PRGMRDISK1 contains a file system with errors, check forced.
Checking disk drives for errors. This may take several minutes.
Press C to cancel all checks in progress
PRGMRDISK1: ***** REBOOT LINUX *****
PRGMRDISK1: 371152/6001184 files (2.8% non-contiguous), 4727949/12000000 blocks
mountall: fsck / [308] terminated with status 3
mountall: System must be rebooted: /
[  151.566949] Restarting system.
Name                                        ID   Mem VCPUs      State   Time(s)
shadowmint                                 236  2048     1     --p---      0.0

Rebooting again takes me straight back to step 1.

This is definitely an issue with the 3.2 kernel, because booting the 3.0.0 or 2.6.38 kernel series makes this issue magically disappear.
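Until the 3.2 issue is fixed, one workaround is to pin GRUB to the known-good older kernel so it is chosen on every boot. This is a sketch, not from the original post; the entry title below is an example, and the real titles must be read from your own grub.cfg (for instance with grep menuentry /boot/grub/grub.cfg):

```
# /etc/default/grub (sketch; the "submenu>entry" syntax is GRUB 2's
# way of addressing an entry inside a submenu)
GRUB_DEFAULT="Previous Linux versions>Ubuntu, with Linux 3.0.0-17-generic"
```

After editing, run sudo update-grub and reboot to confirm the older kernel comes up by default.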

I'm certain this is some kind of weird Xen thing, but I have no idea what.


Anyhow, until this is resolved, I strongly recommend against upgrading if you're running a Xen server.


I noticed that there is a special kernel for virtual machines:

linux-image-3.2.0-23-virtual - Linux kernel image for version 3.2.0 on 64 bit x86 Virtual Guests
linux-image-extra-3.2.0-23-virtual - Linux kernel image for version 3.2.0 on 64 bit x86 Virtual Guests

Maybe that will solve it?
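The -virtual flavour is built for guests, so it is worth a try. A command sketch (the linux-image-virtual meta-package is Ubuntu's pointer to the current -virtual kernel; run these only after checking with apt-cache show linux-image-virtual that it matches the packages listed above):

```
sudo apt-get update
sudo apt-get install linux-image-virtual   # pulls in linux-image-3.2.0-23-virtual
sudo update-grub                           # make sure the new kernel is in the boot menu
```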


I had a similar problem, but similar problems do not necessarily have similar causes :-D. I solved mine by adding the nobarrier mount option to the root partition line in /etc/fstab:

UUID=7960e41c-6ad3-458e-ba0b-289c43a7508f / ext4 nobarrier 0 1  
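A mistake in /etc/fstab can leave the system unbootable, so it is worth rehearsing the edit on a scratch copy first. A minimal sketch (the sample line, path, and sed expression below are hypothetical, not from the original post):

```shell
# Hypothetical demonstration: append nobarrier to the mount options of
# an ext4 root entry, working on a scratch file rather than /etc/fstab.
printf 'UUID=7960e41c-6ad3-458e-ba0b-289c43a7508f / ext4 errors=remount-ro 0 1\n' > /tmp/fstab.test

# On the line whose mount point is " / " with type ext4, append
# ",nobarrier" to the existing option list.
sed -i '/ \/ ext4 /s/ext4 \([^ ]*\)/ext4 \1,nobarrier/' /tmp/fstab.test

cat /tmp/fstab.test
```

Once the scratch copy looks right, apply the same change to the real /etc/fstab as root. The option can also be applied to a running system without a reboot via sudo mount -o remount,nobarrier /.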

After the first successful reboot I executed

dmesg | grep barrier  

and this is what I got:

[    0.690596] blkfront: xvda: barrier: enabled
[   12.914802] blkfront: xvda: empty barrier op failed
[   12.914807] blkfront: xvda: barrier or flush: disabled
[   14.806961] EXT4-fs (xvda1): re-mounted. Opts: errors=continue,nobarrier
