An I/O Performance Comparison Between Loopback-backed and blktap-backed Xen File-backed VBDs

I have done some I/O performance benchmark tests on Xen DomU. For easier management, some of our DomU VMs use file-backed VBDs. Previously, our VMs used loopback-mounted file-backed VBDs, but blktap-based support is now recommended by the Xen community. Before switching from loopback-based VBDs to blktap-based VBDs, I did this performance benchmark comparison.
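
For reference, the switch between the two backends is normally just a change to the disk line in the DomU configuration file. The snippet below is a minimal sketch; the image path and device name are placeholders and should be replaced with your own values.

# Loopback file-backed VBD (the old setup):
disk = [ 'file:/path/to/vm101.img,xvda,w' ]

# blktap file-backed VBD (the new setup):
disk = [ 'tap:aio:/path/to/vm101.img,xvda,w' ]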

Note: if your VM is I/O intensive, you may consider setting up [[setting-up-lvm-backed-xen-domu|LVM backed DomU]]. Check the [[xen-domus-io-performance-of-lvm-and-loopback-backed-vbds|performance comparison]].

The hardware platform:

DomU:

CPU: 2 x Intel(R) Xeon(R) CPU E5520 @ 2.27GHz

Memory: 1G

HD:

Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
              ext4     16G  2.0G   13G  14% /
/dev/xvda1    ext3    194M   23M  162M  13% /boot
tmpfs        tmpfs    517M     0  517M   0% /dev/shm

Dom0:

The raw image file is stored on an ext4 partition.

Test method

Bonnie++ 1.03c

Using default parameters.
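
With default parameters, Bonnie++ chooses a test file size of twice the available RAM, which matches the 2064M size shown in the results below for this 1G DomU. A minimal invocation looks roughly like the following; the test directory and user name are placeholders, and the -u option is only needed when running as root.

bonnie++ -d /mnt/test -u someuser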

Results

Loopback driver backed:

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
vm101         2064M 25511  35 18075   3 199488  47 71094  98 937880  86 +++++ +++
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
vm101,2064M,25511,35,18075,3,199488,47,71094,98,937880,86,+++++,+++,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

blktap driver backed:

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
vm101         2064M 69438  96 93549  20 38118  10 54955  76 131645   8 249.1   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 29488  79 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
vm101,2064M,69438,96,93549,20,38118,10,54955,76,131645,8,249.1,0,16,29488,79,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

From the results we can see that the loopback-backed VBD has better read performance, at the cost of high CPU usage, while its write performance is worse. The blktap-backed VBD delivers more balanced performance: its write speed is much higher than that of the loopback-backed one, and in exchange for slightly worse read performance we get much better overall performance. So, purely from a performance point of view, the blktap driver is a better choice than the loopback driver for Xen DomU VBDs.

There are other benefits to using the blktap driver. Loopback file-backed VBDs may not be appropriate for backing I/O-intensive domains, because this approach is known to suffer substantial slowdowns under heavy I/O workloads, due to the way the loopback block device in Dom0 handles the I/O for file-backed VBDs [1]. Another reason is that blktap provides better scalability than the loopback driver: by default, Linux supports at most eight loopback devices across all domains. To get more than eight, the max_loop=n option must be passed either on the kernel command line or as a module parameter, depending on whether CONFIG_BLK_DEV_LOOP is built into the Dom0 kernel or compiled as a module; the method is described in [[add-more-loop-device-on-linux]]. blktap also brings other advantages, such as easy support for metadata disk formats (copy-on-write, encrypted disks, sparse formats and other compression features) and avoiding the dirty-page flushing problems present in the Linux loopback driver, among others [2].
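
As a rough sketch of the max_loop change (the limit of 64 and the file name below are just examples): if CONFIG_BLK_DEV_LOOP is built into the Dom0 kernel, append max_loop=64 to the kernel command line in the bootloader configuration; if it is compiled as a module, set the module parameter instead:

# when the loop driver is a module, e.g. in /etc/modprobe.d/loop.conf:
options loop max_loop=64

# or pass the parameter when loading the module by hand:
modprobe loop max_loop=64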

References

[1] http://www.cl.cam.ac.uk/research/srg/netos/xen/readmes/user/
[2] http://wiki.xensource.com/xenwiki/blktap
