
SSD Enabled For DreamHost Shared Hosting: Simple Performance Measurement

SSDs are common in VPS and PaaS virtual machines for their higher I/O performance. Now they are coming to shared hosting too.

DreamHost states that “Now with solid state drives (SSDs), our standard web hosting loads pages 200% faster”.

We are happy to see this performance improvement with the price kept the same. Good work, DreamHost.

Simple I/O performance measurement on DreamHost shared hosting account

To see how the new SSD-based storage performs, we ran simple tests in one account on one of DreamHost’s shared hosting servers. The test is simple: moving files between the home directory (a mounted XFS filesystem that stores the site content) and /dev/shm/, an in-memory filesystem, or using dd to generate files. The results reflect how reads and writes perform.

Note that we do not have root access to flush buffers and invalidate caches, and there are many concurrent users. Hence, the server should be considered relatively busy, and the results here should only be viewed as a general “feeling” of the performance.
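For anyone wanting to repeat the measurement, the whole procedure can be scripted as a minimal sketch. The file name and size below are illustrative, not the exact ones from our runs, and `oflag=direct` may not be supported on every filesystem.

```shell
#!/bin/sh
# Minimal sketch of the test procedure (no root access required).
# Generates a file with dd, then times moves between the home
# directory (on-disk filesystem) and /dev/shm (in-memory tmpfs).

SIZE_MB=64        # illustrative; the post uses 49 MB and 512 MB files
FILE=testfile

# Write test: oflag=direct asks dd to bypass the page cache.
dd if=/dev/zero of=./$FILE bs=1M count=$SIZE_MB oflag=direct

# Read test: moving to tmpfs reads from disk and writes to memory.
time mv ./$FILE /dev/shm/

# Write test: moving back reads from memory and writes to disk.
time mv /dev/shm/$FILE ./

rm -f ./$FILE
```

The `real` time reported by `time`, divided into the file size, gives a rough throughput figure.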

Moving small files

The commands and results are shown as follows.

test@seginus:~$ ls -lha 49MBfile.pdf
-rwxr-xr-x 1 test pg000000 49M Mar  5 23:23 49MBfile.pdf

test@seginus:~$ time mv ./49MBfile.pdf /dev/shm/
real    0m0.602s
user    0m0.012s
sys 0m0.176s

test@seginus:~$ time mv /dev/shm/49MBfile.pdf ./
real    0m0.307s
user    0m0.008s
sys 0m0.216s

test@seginus:~$ time mv ./49MBfile.pdf /dev/shm/
real    0m0.291s
user    0m0.016s
sys 0m0.172s

test@seginus:~$ time mv /dev/shm/49MBfile.pdf ./
real    0m0.225s
user    0m0.000s
sys 0m0.208s

test@seginus:~$ time mv ./49MBfile.pdf /dev/shm/
real    0m0.331s
user    0m0.000s
sys 0m0.188s

test@seginus:~$ time mv /dev/shm/49MBfile.pdf ./
real    0m0.262s
user    0m0.000s
sys 0m0.216s

If we take the largest time, 0.602s, as the worst case for accessing the 49MB file, the throughput is still around 81.4 MB/s.
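The arithmetic behind that figure is simply the file size divided by the elapsed time, which a one-liner can reproduce:

```shell
# Throughput = file size / elapsed (real) time
# 49 MB / 0.602 s ≈ 81.4 MB/s
awk 'BEGIN { printf "%.1f MB/s\n", 49 / 0.602 }'
```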

dd and moving large files

The commands and results are as follows.

test@seginus:~$ dd if=/dev/zero of=./testfile bs=512M count=1 oflag=direct
1+0 records in
1+0 records out
536870912 bytes (537 MB) copied, 9.26255 s, 58.0 MB/s
test@seginus:~$ time mv testfile /dev/shm/

real    0m4.802s
user    0m0.052s
sys 0m2.148s

The write and read throughput are 58.0 MB/s and 106.6 MB/s, respectively (the read figure comes from moving the 512MB file to tmpfs in 4.802s).
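If you prefer measuring reads with dd instead of mv, dd can read the generated file back while discarding the data. Note `iflag=direct` is the read-side counterpart of the `oflag=direct` used above; like it, it may not be supported on every filesystem, and `testfile` here refers to the file generated earlier.

```shell
# Read the test file back, discarding the data; dd reports throughput.
# iflag=direct bypasses the page cache so the read hits the disk.
dd if=./testfile of=/dev/null bs=1M iflag=direct
```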

These numbers may not represent the exact I/O throughput your site will see, as the server may be busier or quieter depending on the load. Still, they suggest the performance is good for the low price of the shared hosting plan.


One Comment

  1. I got access to another DreamHost shared hosting server and ran the `dd` test again. The results show higher write and read throughput on this server, as follows.

    ericzma@situla:~$ dd if=/dev/zero of=./testfile bs=512M count=1 oflag=direct
    1+0 records in
    1+0 records out
    536870912 bytes (537 MB) copied, 6.95622 s, 77.2 MB/s
    ericzma@situla:~$ time mv testfile /dev/shm/
    
    real	0m4.190s
    user	0m0.016s
    sys	0m2.452s
    

    My guess is that this server is not as busy as the previous server I tested in the post.
