Terremark cost comparison with Amazon EC2

(Earlier posts in this series are: EC2 cost break down, GoGrid & EC2 cost comparison, Rackspace & EC2 cost comparison)

In this post, let us compare the VM cost between Terremark vCloud Express and Amazon EC2. Terremark is one of the first cloud providers based on VMware technology. Unlike EC2, Rackspace, and GoGrid, which use Xen as the hypervisor, Terremark uses VMware's ESX hypervisor, which is arguably richer in functionality.

Following the methodology we have used so far, we need to first understand Terremark's hardware infrastructure and its resource allocation policy. Using the same technique we used for EC2's hardware analysis, we determine that Terremark runs on a platform with two sockets of Quad-core AMD Opteron 8389 processors. PassMark does not have a benchmark result for this processor, so we had to run the benchmark ourselves. We used the 16GB+8VPU configuration (its largest) to minimize interference from other VMs, and we ran it multiple times late at night to ensure that we were indeed measuring the underlying hardware's capacity. On average, the PassMark CPU Mark result is 7100, which is roughly 18 ECU.

Terremark uses the ESX hypervisor's default policy for scheduling CPU, i.e., a virtual core shares the CPU equally with any other virtual core, regardless of how much memory the VM has. This is different from GoGrid and Rackspace, where the CPU is shared in proportion to the amount of RAM a VM has. The scheduling policy can be verified by reading the GuestSDK API exposed by VMware Tools. By reading the API, we know that a VM not only has no minimum guaranteed CPU, but also has no maximum burst limit. Each virtual core of a VM is assigned a CPU share of 1000, regardless of the memory it is allocated. Thus, the more cores a VM has, the more shares of the CPU it gets (e.g., 1 VPU has 1000 shares, and 8 VPU has 8000 shares).

It is difficult to determine how many VMs could be on a physical host, which in turn determines the minimum guaranteed CPU. We are told in their forum that each physical host has 128GB of memory, which can accommodate at least 8 of its largest VMs (8 VPU + 16GB RAM each). The VMware ESX hypervisor allows over-committing memory, so in theory there could be many more VMs on a host. When we launched a vanilla 512MB VM, we learned from the Guest API that our VM only occupied 148MB of RAM. Clearly, there is lots of room to over-commit, even though we see no evidence that they are doing so. Assuming there is no over-commitment, there could still be a lot of VMs competing for the CPU. In the worst case, all VMs on the host have 512MB RAM and 8 VPU, which consumes the least memory but gains the maximum CPU weight. A physical host can hold 256 such VMs, leaving a negligible CPU share for each VM. A VM with only one core then owns only a 1/(8*256) share of the CPU, and an 8 VPU (8 virtual cores) VM owns only a 1/256 share of the CPU.
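
To make the share arithmetic concrete, here is a minimal sketch (in Python, with illustrative numbers only) of how ESX-style proportional shares translate into a CPU fraction for each VM:

```python
# Sketch: ESX-style proportional-share CPU allocation (illustrative only).
# Each VPU carries 1000 shares; a VM's CPU fraction is its shares divided
# by the total shares of all VMs competing on the same host.

def cpu_fraction(my_vpus, other_vms_vpus, shares_per_vpu=1000):
    my_shares = my_vpus * shares_per_vpu
    total_shares = my_shares + sum(v * shares_per_vpu for v in other_vms_vpus)
    return my_shares / total_shares

# Worst case from the text: a 128GB host packed with 256 VMs of 8 VPU + 512MB each.
worst_case_host = [8] * 256
print(cpu_fraction(1, worst_case_host))   # roughly 1/(8*256)
print(cpu_fraction(8, worst_case_host))   # roughly 1/256
```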

Following what we did to get EC2's unit cost, we can run a regression analysis to estimate Terremark's unit cost. We assume Cost = c * CPU + m * RAM (Terremark charges storage separately from the VM cost, at $0.25/GB/month). The regression determines the unit costs to be

c = 2.06 cents/VPU/hour
m = 6.46 cents/GB/hour

The regression result does not fit the real cost very well. The following table shows both the actual price and the price predicted by the estimated parameters (shown as actual / fitted) for the various VM configurations.

memory (GB)\CPU 1 VPU 2 VPU 4 VPU 8 VPU
0.5 3.5 / 5.29 4 / 7.36 4.5 / 11.48 5 / 19.72
1 6 / 8.53 7 / 10.6 8 / 14.7 10 / 23
1.5 9 / 11.8 10.5 / 13.8 12 / 17.9 13.6 / 26.2
2 12 / 15 14.1 / 17 16.1 / 21.2 20.1 / 29.4
4 24.1 / 27.9 28.1 / 30 30.1 / 34.1 40.2 / 42.4
8 40.2 / 53.8 48.2 / 55.8 60.2 / 60 80.3 / 68.2
12 60.2 / 79.6 72.3 / 81.7 90.3 / 85.8 120.5 / 94.1
16 80.3 / 105.5 96.4 / 107.5 120.5 / 111.7 160.6 / 112

The reason the regression analysis does not work well here is that Terremark heavily discounts both CPU and RAM as you move up in configuration, and our linear model does not capture this economy of scale very well. However, we can treat the linear regression as a trend line, and the trend line indicates that Terremark is likely more expensive than EC2. For example, its RAM comes out at 6.46 cents/GB/hour, which is much higher than the 2.01 cents/GB/hour Amazon values its RAM at.

Another way to compare cost is to use EC2's unit cost to figure out what an equivalent configuration would cost in EC2. The following table shows the cost comparison where we assume you only get the minimum CPU, i.e., the worst case where all other VMs are busy and the physical host is fully loaded with 8VPU+0.5GB VMs (without over-commitment). Each row shows the RAM and CPU configuration, Terremark's price, what it would cost in EC2, and the ratio between the Terremark and EC2 cost.

memory (GB) VPU Terremark price (cents/hour) Equivalent EC2 cost (cents/hour) Terremark cost/EC2 cost
0.5 1 3.5 1.02 3.44
0.5 2 4 1.03 3.89
0.5 4 4.5 1.05 4.27
0.5 8 5 1.10 4.54
1 1 6 2.02 2.97
1 2 7 2.03 3.44
1 4 8 2.06 3.89
1 8 10 2.11 4.75
1.5 1 9 3.03 2.97
1.5 2 10.5 3.04 3.45
1.5 4 12 3.06 3.92
1.5 8 13.6 3.11 4.37
2 1 12 4.03 2.98
2 2 14.1 4.05 3.49
2 4 16.1 4.07 3.96
2 8 20.1 4.12 4.88
4 1 24.1 8.06 2.99
4 2 28.1 8.07 3.48
4 4 30.1 8.09 3.72
4 8 40.2 8.14 4.94
8 1 40.2 16.1 2.5
8 2 48.2 16.11 2.99
8 4 60.2 16.13 3.73
8 8 80.3 16.18 4.96
12 1 60.2 24.14 2.49
12 2 72.3 24.15 2.99
12 4 90.3 24.18 3.73
12 8 120.5 24.23 4.97
16 1 80.3 32.18 2.49
16 2 96.4 32.2 2.99
16 4 120.5 32.22 3.74
16 8 160.6 32.27 4.98

The table shows that Terremark is 2.49 to 4.98 times more expensive than an equivalent in EC2. This is mainly due to the way Terremark shares CPUs. A 0.5GB VM in Terremark shares the CPU equally with a 16GB VM; thus, in the worst case, a VM may get very little CPU. Since Terremark does not set a minimum guarantee on the CPU share in the hypervisor, we have to assume the worst case.
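
The EC2-equivalent figures in the table above can be reproduced from the unit costs derived in the EC2 cost break down post. A rough sketch (in Python; the 18 ECU host capacity and the 1/(8*256) worst-case share come from the discussion above, and storage is left out because Terremark bills it separately):

```python
# Sketch: project a Terremark VM onto EC2 unit costs (worst-case CPU share).
EC2_CPU = 1.369   # cents per ECU-hour (from the EC2 cost break down)
EC2_RAM = 2.01    # cents per GB-hour
HOST_ECU = 18     # measured host capacity in ECU

def equivalent_ec2_cost(vpus, ram_gb):
    # Worst case: 256 VMs of 8 VPU each on the host, so each VPU gets ~1/(8*256) of the CPU.
    min_ecu = HOST_ECU * vpus / (8 * 256)
    return EC2_CPU * min_ecu + EC2_RAM * ram_gb

print(round(equivalent_ec2_cost(1, 0.5), 2))   # ~1.02 cents/hour
print(round(equivalent_ec2_cost(8, 16), 2))    # ~32.3 cents/hour
```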

In reality, you are unlikely to encounter the worst case, and you are very likely to get the full attention of a physical core. The reason is not only that most VMs have more than 0.5GB of RAM (so fewer of them can be packed on a host), but also that Terremark uses VMware's DRS (Distributed Resource Scheduler). We have noticed that, when we drive up the load on our VMs, they are often moved (through VMotion) to a different host, presumably to avoid contention. Thus, unless the whole cluster gets really busy, it is unlikely that your VM would have a lot of other busy VMs to contend with on the same host. The following table shows the EC2-equivalent cost assuming a virtual core can get the full power of a physical core.

memory (GB) VPU Terremark price (cents/hour) Equivalent EC2 cost (cents/hour) Terremark cost/EC2 cost
0.5 1 3.5 4.09 0.86
0.5 2 4 7.17 0.56
0.5 4 4.5 13.33 0.34
0.5 8 5 25.65 0.19
1 1 6 5.09 1.18
1 2 7 8.17 0.86
1 4 8 14.33 0.56
1 8 10 26.66 0.38
1.5 1 9 6.1 1.48
1.5 2 10.5 9.18 1.14
1.5 4 12 15.34 0.78
1.5 8 13.6 27.66 0.49
2 1 12 7.1 1.69
2 2 14.1 10.18 1.38
2 4 16.1 16.35 0.98
2 8 20.1 28.67 0.7
4 1 24.1 11.12 2.17
4 2 28.1 14.21 1.98
4 4 30.1 20.37 1.48
4 8 40.2 32.69 1.23
8 1 40.2 19.17 2.1
8 2 48.2 22.25 2.17
8 4 60.2 28.41 2.12
8 8 80.3 40.73 1.97
12 1 60.2 27.21 2.21
12 2 72.3 30.29 2.39
12 4 90.3 36.45 2.48
12 8 120.5 48.78 2.47
16 1 80.3 35.25 2.28
16 2 96.4 38.33 2.51
16 4 120.5 44.5 2.71
16 8 160.6 56.82 2.83

There are several configurations where Terremark is much cheaper than EC2. The 8VPU+0.5GB configuration is the cheapest, at 19% of the equivalent EC2 cost. This is due to two reasons. First, 8 VPUs carry more scheduling weight, so the VM can compete for the full power of the physical host. Second, the RAM is the smallest. As we have seen, Terremark values RAM more than EC2 does (m = 6.46 cents/GB/hour vs. EC2's m = 2.01 cents/GB/hour), so the less RAM a configuration has, the lower the cost. The cost savings go away as you add more RAM and more CPU to the configuration.

Rackspace cost comparison with Amazon EC2

(Earlier posts in this series are: EC2 cost break down, GoGrid & EC2 cost comparison)

We looked at Amazon EC2 and GoGrid costs earlier. Let us examine another IaaS provider: the Rackspace cloud. The first step, again, is to settle on a common unit of measurement for CPU power. Using the same methodology as in EC2's hardware analysis, we determine that Rackspace runs on a platform with two sockets of Quad-Core AMD Opteron 2374 HE processors. According to PassMark-CPU Mark results, this platform has a CPU Mark score of 4642, which is roughly 12 ECU. Rackspace cloud's FAQ states that "For Linux distributions, each Cloud Server is assigned four virtual cores and the amount of CPU cycles allocated to these cores is weighted based on the size of the Cloud Server." From talking to Rackspace support, we know that each physical host has 32GB of RAM, so it can host at most two 16GB (15.5GB to be precise) VMs. Therefore, a 16GB VM would own the complete 4 cores it is allocated, i.e., it has a guaranteed capacity of half the platform, which is 6 ECU. Since Rackspace states that the CPU is shared proportionally based on RAM, we can derive the minimum guaranteed CPU from how many other VMs could fit on the same physical host. The following table lists the minimum CPU and the maximum CPU (assuming full bursting when all other VMs are idle). Again, we only consider Linux VMs, as they do not include license costs and therefore more accurately represent the true hardware cost.

RAM (GB) Storage (GB) Min CPU (ECU) Max CPU (ECU) Cost (cents/hour)
0.256 10 0.09375 6 1.5
0.512 20 0.1875 6 3
1 40 0.375 6 6
2 80 0.75 6 12
4 160 1.5 6 24
8 320 3 6 48
16 620 6 6 96
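
The Min CPU column follows directly from the proportional-sharing rule. A small sketch (Python; the 12 ECU platform capacity, 32GB of host RAM, and the 4-core cap are the figures discussed above, and nominal RAM sizes are used instead of the 0.256/0.512 marketing numbers):

```python
# Sketch: derive Rackspace's min/max guaranteed CPU (in ECU) from the sharing rule.
PLATFORM_ECU = 12            # dual-socket Opteron 2374 HE, ~4642 CPU Mark
HOST_RAM_GB = 32             # per the support information above
MAX_ECU = PLATFORM_ECU / 2   # 4 of the 8 cores, i.e. half the platform

def min_ecu(ram_gb):
    # CPU is shared in proportion to RAM across the VMs on a host.
    return PLATFORM_ECU * ram_gb / HOST_RAM_GB

for ram in [0.25, 0.5, 1, 2, 4, 8, 16]:
    print(ram, min_ecu(ram), MAX_ECU)   # e.g. 0.25GB -> 0.09375 ECU min, 6 ECU max
```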

Similar to GoGrid, Rackspace only charges based on the RAM, so it is not possible to determine how it values each component (i.e., CPU, RAM and storage) separately, as we have done for EC2. However, it is possible to project what a similar configuration would cost in EC2 using the unit cost we have derived from the EC2 cost breakdown. The results are shown in the following table where we assume a VM only gets its minimum guaranteed CPU. Each row corresponds to one VM configuration, which is denoted by its RAM size in the first column. We also show the ratio between the Rackspace cost and the projected equivalent EC2 cost.

RAM (GB) Rackspace cost (cents/hour) Equivalent EC2 cost (cents/hour) Rackspace cost/EC2 cost
0.256 1.5 0.8 1.87
0.512 3 1.6 1.87
1 6 3.16 1.9
2 12 6.32 1.9
4 24 12.6 1.9
8 48 25.3 1.9
16 96 50.2 1.91

Since a Rackspace VM can burst if other VMs on the same host are idle, it could potentially grab a much larger share of the CPU. The following table shows the cost comparison assuming that the VM bursts to its fullest extent.

RAM (GB) Rackspace cost (cents/hour) Equivalent EC2 cost (cents/hour) Rackspace cost/EC2 cost
0.256 1.5 8.89 0.17
0.512 3 9.56 0.31
1 6 10.86 0.55
2 12 13.5 0.89
4 24 18.8 1.28
8 48 29.4 1.63
16 96 50.2 1.91

If your VM is only getting the minimum guaranteed CPU, Rackspace is about 1.9 times more expensive than an equivalent in EC2. However, in our experience, we can frequently grab a much larger share of the CPU. Assuming you can grab the full 4 cores, the 256MB, 512MB, 1GB, and 2GB VMs are a great bargain, which are 17%, 31%, 55%, and 89% of the equivalent EC2 cost respectively.

GoGrid cost comparison with Amazon EC2

Updated 1/30/2011 to include our own PassMark benchmark result and GoGrid's prepaid plan; updated again 2/1/2011 to add a cost/ECU comparison and clarifications.

(Other posts in the series are: EC2 cost break down, Rackspace & EC2 cost comparison, Terremark and EC2 cost comparison).

Continuing our series on cost comparisons between IaaS cloud providers, we will look at GoGrid's cost structure in this post. It is easier to compare RAM and storage apples-to-apples because all cloud providers standardize on the same unit, e.g., GB. To have a meaningful comparison on CPU, we must similarly standardize on a common unit of measurement. Unfortunately, the cloud providers do not make this easy, so we have to do the conversion ourselves.

Because Amazon is a popular cloud provider, we decided to standardize on its unit of measurement, the ECU (EC2 Compute Unit). In our EC2 hardware analysis, we concluded that an ECU is equivalent to a PassMark-CPU Mark score of roughly 400. We have run the benchmark in Amazon's N. Virginia data center on several types of instances to verify experimentally that the CPU Mark score does scale linearly with the instance's advertised ECU rating.

All we need to do now is figure out GoGrid's PassMark-CPU Mark number. This is easy to do if we know the underlying hardware. Following the same methodology we used for the EC2 hardware analysis, we find that the GoGrid infrastructure consists of two types of hardware platforms: one with dual-socket Intel E5520 processors, the other with dual-socket Intel X5650 processors. According to PassMark-CPU Mark results, the dual-socket E5520 has a score of 9,174 and the dual-socket X5650 has a score of 15,071. GoGrid enables hyperthreading, so the dual-socket E5520 platform presents 16 logical cores, and the dual-socket X5650 platform presents 24. Hyperthreading does not really double the performance, because each pair of hardware threads still shares a single physical core.

Instead of relying on PassMark's reported results, we also ran the benchmark ourselves to get a true measure of performance. We ran the benchmark several times late at night to make sure that the result is stable and that we are getting the maximum CPU allowed by bursting. The PassMark benchmark only runs on Windows, and in Windows we can only see up to 8 cores. As a result, the 8GB (8 cores) and 16GB (8 cores) VMs both return a CPU Mark result of roughly 7,850, which is 19.5 ECU. The 4GB (4 cores) VM returns a CPU Mark result of roughly 3,800, which is 9.6 ECU. And the 2GB (2 cores) VM returns a CPU Mark of roughly 1,900, which is 4.8 ECU. Since there are no 1GB (1 core) or 0.5GB (1 core) Windows VMs, we project their maximum CPU power to be half of a 2-core VM, at 2.4 ECU. Lastly, since we cannot measure the 16-core performance, we use the E5520 benchmark result of 9,174 reported by PassMark as its maximum, which is 23 ECU. These numbers give the maximum CPU when bursting fully. Based on GoGrid's VM configurations, we can then determine the minimum guaranteed CPU from the maximum CPU.
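
The conversion itself is just a division by the roughly 400 CPU Mark points per ECU established in the EC2 hardware analysis. A quick sketch using the measurements quoted above:

```python
# Sketch: convert measured PassMark CPU Mark scores into ECU (~400 CPU Mark per ECU).
CPU_MARK_PER_ECU = 400

def to_ecu(cpu_mark):
    return cpu_mark / CPU_MARK_PER_ECU

measurements = {          # our GoGrid benchmark runs (max CPU while bursting)
    "8GB/16GB (8 cores)": 7850,
    "4GB (4 cores)": 3800,
    "2GB (2 cores)": 1900,
    "16 cores (PassMark-reported E5520)": 9174,
}
for name, score in measurements.items():
    # roughly 19.6, 9.5, 4.8 and 22.9 ECU, close to the rounded figures quoted above
    print(name, round(to_ecu(score), 1))
```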

The translation from GoGrid's CPU allocation to an equivalent ECU is shown in the following table. Each row corresponds to one of GoGrid's VM configurations, listing the amount of CPU, RAM and storage in that configuration. We also list GoGrid's current pay-as-you-go VM price in the last column for reference.

Min CPU (cores) Min CPU (ECU) Max CPU (cores) Max CPU (ECU) RAM (GB) Storage (GB) pay-as-you-go Cost (cents/hour)
0.5 1.2 1 2.4 0.5 25 9.5
1 2.4 1 2.4 1 50 19
1 2.4 2 4.8 2 100 38
3 7.2 4 9.6 4 200 76
6 14.4 8 19.2 8 400 152
8 19.2 16 23 16 800 304

One way to compare GoGrid and EC2 is to look purely at the cost per ECU. The following table shows the cost/ECU for GoGrid VMs, assuming all of them get the maximum possible CPU. We list two cost/ECU results: one based on their pay-as-you-go price of $0.19 per GB of RAM per hour, and another based on their Enterprise Cloud prepaid plan at $0.05 per GB of RAM per hour.

RAM (GB) Max CPU (ECU) pay-as-you-go cost/ECU (cents/ECU/hour) prepaid cost/ECU (cents/ECU/hour)
0.5 2.4 3.96 1.04
1 2.4 7.91 2.08
2 4.8 7.91 2.08
4 9.6 7.91 2.08
8 19.2 7.91 2.08
16 23 13.2 3.48
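
The cost/ECU columns are simply the hourly RAM charge divided by the maximum ECU, as sketched below (the $0.19 and $0.05 per GB-RAM-hour rates are the two plans discussed above):

```python
# Sketch: GoGrid cost per ECU, assuming the VM gets its maximum (burst) CPU.
PAYGO_RATE = 19.0    # cents per GB of RAM per hour (pay-as-you-go)
PREPAID_RATE = 5.0   # cents per GB of RAM per hour (Enterprise prepaid plan)

max_ecu = {0.5: 2.4, 1: 2.4, 2: 4.8, 4: 9.6, 8: 19.2, 16: 23}

for ram, ecu in max_ecu.items():
    paygo = ram * PAYGO_RATE / ecu
    prepaid = ram * PREPAID_RATE / ecu
    # e.g. 1GB -> about 7.9 and 2.1 cents/ECU/hour
    print(ram, round(paygo, 2), round(prepaid, 2))
```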

In comparison, the following table shows EC2 cost/ECU for the nine different types of instances in the N. Virginia data center.

instance CPU (ECU) RAM (GB) cost/ECU (cents/ECU/hour)
m1.small 1 1.7 8.5
m1.large 4 7.5 8.5
m1.xlarge 8 15 8.5
t1.micro 0.35 0.613 5.71
m2.xlarge 6.5 17.1 7.69
m2.2xlarge 13 34.2 7.69
m2.4xlarge 26 68.4 7.69
c1.medium 5 1.7 3.4
c1.xlarge 20 7 3.4

Comparing on cost/ECU only makes sense when your application is CPU bound, i.e., your memory requirement is always less than what the instance gives you.

Here, we propose a different way: comparing them by taking the CPU, RAM and storage allocations into account together. Ideally, if we could derive the unit cost of each, we could compare them straightforwardly. Unfortunately, because GoGrid charges purely based on RAM hours, it is not possible to figure out how it values CPU, RAM and storage separately, as we have done for Amazon EC2. If we ran a regression analysis, the result would show that CPU and storage cost nothing, and RAM bears all the cost.

Since we cannot compare the unit cost, we propose a different approach. Basically, we take one VM configuration from GoGrid, and try to figure out what a hypothetical instance with the exact same specification would cost in EC2 if Amazon were to offer it. We can project what EC2 would charge for such a hypothetical instance because we know EC2’s unit cost from our EC2 cost break down.
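
Concretely, the projection is just the EC2 unit costs applied to the GoGrid configuration. A sketch, with the unit costs taken from the EC2 cost break down post:

```python
# Sketch: price a GoGrid configuration as if EC2 offered it, using EC2's unit costs.
EC2_CPU = 1.369       # cents per ECU-hour
EC2_RAM = 2.01        # cents per GB-hour
EC2_STORAGE = 0.0159  # cents per GB-hour

def hypothetical_ec2_cost(ecu, ram_gb, storage_gb):
    return EC2_CPU * ecu + EC2_RAM * ram_gb + EC2_STORAGE * storage_gb

# 0.5GB GoGrid VM with its minimum guaranteed 1.2 ECU and 25GB of storage:
print(round(hypothetical_ec2_cost(1.2, 0.5, 25), 2))   # ~3.05 cents/hour
# The same VM bursting to its 2.4 ECU maximum:
print(round(hypothetical_ec2_cost(2.4, 0.5, 25), 2))   # ~4.69 cents/hour
```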

The following table shows what a VM would cost in EC2 if the same configuration were offered there, assuming we only get the minimum guaranteed CPU. Each row of the table corresponds to one GoGrid VM configuration, of which we only list the RAM size (see the previous table for that configuration's CPU and storage size). We also show the ratio between the GoGrid pay-as-you-go price and the projected EC2 cost.

RAM (GB) GoGrid pay-as-you-go cost (cents/hour) Equivalent EC2 cost (cents/hour) GoGrid cost/hypothetical EC2 cost
0.5 9.5 3.05 3.12
1 19 6.09 3.12
2 38 8.9 4.27
4 76 21.1 3.6
8 152 42.2 3.6
16 304 71.2 4.27

Unlike EC2, other cloud providers, including GoGrid, all allow a VM to burst beyond its minimum guaranteed capacity if there are free cycles available. The following table compares the cost under the optimistic scenario where you get the maximum CPU possible.

RAM (GB) GoGrid pay-as-you-go cost (cents/hour) Equivalent EC2 cost (cents/hour) GoGrid cost/EC2 cost
0.5 9.5 4.69 2.03
1 19 6.1 3.12
2 38 12.2 3.12
4 76 24.4 3.12
8 152 48.7 3.12
16 304 76.4 3.98

As Paul from GoGrid pointed out, GoGrid also offers a prepaid plan that is significantly cheaper than the pay-as-you-go plan. This is different from Amazon's reserved instances, where you also get a discount for paying an up-front fee: although cheaper, Amazon's reserved-instance pricing only applies to the one instance you reserved, so when you need to scale dynamically, you cannot benefit from the lower price. GoGrid's prepaid plan lets you use the discount on any instance. To see the benefits of buying in bulk, we also compare EC2 cost with GoGrid's Enterprise Cloud prepaid plan, which costs $9,999 a month but entitles you to 200,000 RAM hours at $0.05/hour. For brevity, we do not compare the other prepaid plans, which you can easily do yourself following our methodology.

The following table shows what a VM would cost in EC2 if the same configuration were offered there, assuming we only get the minimum guaranteed CPU.

RAM (GB) GoGrid Enterprise cloud pre-paid cost (cents/hour) Equivalent EC2 cost (cents/hour) GoGrid cost/EC2 cost
0.5 2.5 3.05 0.82
1 5 6.09 0.82
2 10 8.9 1.12
4 20 21.1 0.95
8 40 42.2 0.95
16 80 71.2 1.12

The following table compares the cost under the optimistic scenario where you get the maximum CPU possible.

RAM (GB) GoGrid enterprise cloud pre-paid cost (cents/hour) Equivalent EC2 cost (cents/hour) GoGrid cost/EC2 cost
0.5 2.5 4.69 0.53
1 5 6.1 0.82
2 10 12.2 0.82
4 20 24.4 0.82
8 40 48.7 0.82
16 80 76.4 1.05

Under GoGrid's pay-as-you-go plan, GoGrid is 2 to 4 times more expensive than a hypothetical EC2 instance with the exact same specification. However, if you can buy in bulk, the cost is significantly lower. The smallest 0.5GB server could be as cheap as 53% of the cost of an equivalent EC2 instance.

The true cost of an ECU

How do you compare the cost of two cloud or IaaS offerings? Is Amazon EC2's small instance (1 ECU, 1.7GB RAM, 160GB storage) cheaper, or is Rackspace cloud's 256MB server (4 cores, 256MB RAM, 10GB storage) cheaper? Unfortunately, answering this question is very difficult. One reason is that cloud vendors offer virtual machines with different configurations, i.e., different combinations of CPU power, memory and storage, making it difficult to perform an apples-to-apples comparison.

Towards the goal of a better apples-to-apples comparison, I will break down the cost of CPU, memory and storage individually for Amazon EC2 in this post. For those not interested in the methodology, the high-level conclusions are as follows. In Amazon's N. Virginia data center, the unit costs are:

  • 1 ECU costs $0.01369/hour
  • 1 GB of RAM costs $0.0201/hour
  • 1 GB of local storage costs $0.000159/hour
  • A 10 Gbps network interface costs $0.41/hour
  • A GPU costs $0.52/hour

Before we can break down the cost, we have to know what an instance’s (Amazon’s term for a virtual machine) cost consists of. We assume the cost includes solely the cost of its CPU, its memory, and its local storage space. This means that there is no fixed cost component, for example, to account for the hardware chassis, or to account for the static IP address. We make this assumption purely for simplicity. In practice, it makes little difference to the end result even if we assume there is a fixed cost component. We also note that the instance cost does not include the cost for the network bandwidth consumed, which is always charged separately, at least in the cloud providers we looked at.

Let us assume the instance cost is a linear function of the three components, i.e., Cost = c * CPU + m * Mem + s * Storage, where c, m and s are the unit costs of CPU, memory and local storage respectively. Fortunately, Amazon EC2 offers several types of instances, each with a different combination of CPU, memory and storage, which gives us a clue about what each component costs. Combining the many instance types, we can estimate the parameters c, m and s using a least-squares regression analysis. Let us first look at Amazon's N. Virginia data center. We only use Linux instances' hourly cost as the instance cost, to avoid accounting for an OS's licensing cost. The results of the least-squares regression are:

s = 0.0159 cent/GB/hour
m = 2.01 cent/GB/hour
c = 1.369  cent/ECU/hour

The linear model and the estimated parameters actually match the real data very well. The following table shows the instances we used for the regression. The last column shows the instance cost as predicted by our estimated parameters, and the second-to-last column shows the real EC2 cost. As you can see, the two costs match fairly well, suggesting that a linear model is a good approximation. We should note that we mark the Micro instance as having 0.35 ECU; this is the average of its ECU allocation, as we have shown in our Micro instance analysis.

instance CPU(in ECU) RAM(in GB) Storage(in GB) Instance cost per hour (in cents) Fitted instance cost per hour (in cents)
m1.small 1 1.7 160 8.5 7.33
m1.large 4 7.5 850 34 34.07
m1.xlarge 8 15 1,690 68 67.97
t1.micro 0.35 0.613 0 2 1.71
m2.xlarge 6.5 17.1 420 50 49.96
m2.2xlarge 13 34.2 850 100 100.1
m2.4xlarge 26 68.4 1,690 200 200
c1.medium 5 1.7 350 17 15.83
c1.xlarge 20 7 1,690 68 68.32
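
For readers who want to reproduce the fit, here is a least-squares sketch over the nine instances listed above (using numpy; the prices are the Linux on-demand rates in cents/hour):

```python
# Sketch: least-squares fit of Cost = c*ECU + m*RAM + s*Storage over EC2's Linux prices.
import numpy as np

#              ECU    RAM(GB)  Storage(GB)  price (cents/hour)
instances = [ (1,     1.7,      160,          8.5),   # m1.small
              (4,     7.5,      850,         34.0),   # m1.large
              (8,    15.0,     1690,         68.0),   # m1.xlarge
              (0.35,  0.613,      0,          2.0),   # t1.micro
              (6.5,  17.1,      420,         50.0),   # m2.xlarge
              (13,   34.2,      850,        100.0),   # m2.2xlarge
              (26,   68.4,     1690,        200.0),   # m2.4xlarge
              (5,     1.7,      350,         17.0),   # c1.medium
              (20,    7.0,     1690,         68.0) ]  # c1.xlarge

A = np.array([row[:3] for row in instances])
y = np.array([row[3] for row in instances])
(c, m, s), *_ = np.linalg.lstsq(A, y, rcond=None)
print(c, m, s)   # should come out close to 1.369, 2.01 and 0.0159 cents
```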

It should come as no surprise that memory is a significant component of the instance cost. Next time you compare two cloud offerings, make sure to compare the RAM available.

In the estimation, we did not include EC2 cluster instances and cluster GPU instances, because they are different from the other instances (both have a 10 Gbps network interface and one has a GPU). But now that we have a unit cost for CPU, memory and storage, we can estimate what those extra features cost.

For a cluster instance, combining the cost of CPU (33.5 ECU), memory (23GB), and storage (1,690GB) using our estimated parameters, the cost comes out to $1.19/hour. Since Amazon charges $1.60/hour, the extra charge must be for the 10 Gbps interface, which is the only feature that differs from other instances. Subtracting the two, the 10 Gbps interface costs $0.41/hour.

For a cluster GPU instance, combining the cost of CPU (33.5 ECU), memory (22GB), and storage (1,690GB), the cost comes out to $1.17/hour. Since Amazon charges $2.10/hour, the extra charge must be for the 10 Gbps interface and the GPU. Subtracting the two costs and taking out the 10 Gbps interface cost, we find that the GPU costs $0.52/hour.
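
The arithmetic for the two cluster types, using the same unit costs (a sketch; prices in dollars per hour):

```python
# Sketch: back out the 10 Gbps interface and GPU surcharges from the cluster instance prices.
EC2_CPU, EC2_RAM, EC2_STORAGE = 0.01369, 0.0201, 0.000159   # dollars per hour

def base_cost(ecu, ram_gb, storage_gb):
    return EC2_CPU * ecu + EC2_RAM * ram_gb + EC2_STORAGE * storage_gb

cluster = base_cost(33.5, 23, 1690)       # ~ $1.19/hour of plain resources
interface = 1.60 - cluster                # ~ $0.41/hour for the 10 Gbps interface
cluster_gpu = base_cost(33.5, 22, 1690)   # ~ $1.17/hour of plain resources
gpu = 2.10 - cluster_gpu - interface      # ~ $0.52/hour for the GPU
print(round(interface, 2), round(gpu, 2))
```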

We can perform the same analysis for the other 3 Amazon data centers: N. California, Ireland and Singapore. Luckily, their cost structures are the same, so I only need to present one result. The unit costs are as follows:

s = 0.0169 cent/GB/hour
m = 2.316 cent/GB/hour
c = 1.575 cent/ECU/hour

The actual instance costs and the projected instance costs are shown in the following table. Again, they agree very well. There are no cluster or cluster GPU instances in the other data centers, so no cost is shown for the 10 Gbps interface or the GPU.

instance CPU(in ECU) RAM(in GB) Storage(in GB) Instance cost per hour (in cents) Fitted instance cost per hour (in cents)
m1.small 1 1.7 160 9.5 8.22
m1.large 4 7.5 850 38 38.07
m1.xlarge 8 15 1,690 76 75.97
t1.micro 0.35 0.613 0 2.5 1.97
m2.xlarge 6.5 17.1 420 57 56.96
m2.2xlarge 13 34.2 850 114 114.1
m2.4xlarge 26 68.4 1,690 228 228
c1.medium 5 1.7 350 19 17.74
c1.xlarge 20 7 1,690 76 76.34

Dimensions to use to compare NoSQL data stores

You have decided to use a NoSQL data store instead of a DBMS, possibly for scaling reasons. But there are so many NoSQL stores out there; which one should you choose? Part of the NoSQL movement is the acknowledgment that there are tradeoffs, and the various NoSQL projects have pursued different tradeoff points in the design space. Understanding the tradeoffs they have made, and figuring out which one fits your application better, is a major undertaking.

Obviously, choosing the right data store is a much bigger topic than can be covered in a single blog post. There are also many resources comparing the various NoSQL data stores, e.g., here, so there is no point repeating them. Instead, in this post, I will highlight the dimensions you should use when you compare the various data stores.

Data model

Obviously, you must choose a data model that matches your application. In SQL, there is only one, the relational model, so you have to fit your application into it. Luckily, in the NoSQL world, you have a number of choices. They can be grouped into roughly four categories: key-value blob stores, column-oriented data stores (e.g., BigTable-like), document-based data stores, and graph data stores. A graph data store fits, well…, graph problems (obviously) very well. We find that column-oriented and document-based data stores have roughly the same expressive power, and a variety of applications fit both well. In comparison, key-value blob storage has a much simpler data model, which limits the number of applications that can fit.

Consistency

Amazon popularized the concept of "eventual consistency", basically giving up consistency in favor of higher scalability. The application has to work around the limitations posed by the eventual consistency model, since it is the only party that understands the semantics of the data. One example is Amazon's shopping cart application. Using their Dynamo backend, an item in the shopping cart may reappear after you have deleted it. That happens because the application chooses to keep the item when the data is inconsistent and it needs to reconcile the views.

In the weak consistency model, it is also important to compare how the data stores reconcile inconsistencies. Some data stores, such as MongoDB and Cassandra, use timestamps to reconcile, i.e., the last writer wins. The downside of this approach is that the clocks need to be accurately synchronized, which is very difficult at fine resolution. Making it worse, Cassandra uses the client's timestamp, so you have to make sure your clients' (not the storage nodes') clocks are properly synchronized. Other data stores, such as Riak, use vector clocks to reconcile. The downside of this approach is that the reconciliation has to happen in the application, because you need to understand the data semantics in order to reconcile.
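
To illustrate why vector-clock reconciliation ends up in the application, here is a minimal sketch of comparing two vector clocks (a toy Python illustration, not any particular store's implementation):

```python
# Sketch: vector clocks as {node_id: counter}. If neither clock dominates the other,
# the two writes are concurrent and the application must reconcile them itself.

def dominates(a, b):
    """True if clock a has seen every event that clock b has."""
    return all(a.get(node, 0) >= count for node, count in b.items())

def compare(a, b):
    if dominates(a, b): return "a supersedes b"
    if dominates(b, a): return "b supersedes a"
    return "concurrent -- application must reconcile (e.g. merge the shopping carts)"

print(compare({"n1": 2, "n2": 1}, {"n1": 1, "n2": 1}))   # a supersedes b
print(compare({"n1": 2},          {"n2": 3}))            # concurrent
```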

If you cannot tolerate a weaker consistency model, or if it is too cumbersome for you to handle the reconciliation, you may want to consider a data store that supports a stronger consistency model, such as HBase or MongoDB. Cassandra supports a tunable consistency level, so you can use Cassandra and tune up the consistency level. Alternatively, you can use a BigTable clone, such as HBase or Hypertable, both of which support a strong consistency model. This is cited as one of the reasons Facebook recently chose HBase over Cassandra.

Atomic test-and-set

In CPUs, atomic test-and-set is a required instruction; it is the building-block primitive for eliminating race conditions in a multi-processor environment. Suppose you want to increase a counter by 1. You have to read the counter's current value, increment it by 1, then write back the result. If someone else reads the counter after you read it but writes back their result before you write yours, then your write is lost, overwritten by the other writer. Atomic test-and-set guarantees that no one can come in between your read and your write.
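
A minimal sketch of the counter update done with a compare-and-swap style retry loop (the store interface here is hypothetical, just to show the pattern; actual APIs differ per product):

```python
# Sketch: incrementing a counter safely with an atomic compare-and-swap primitive.
# `store.get` and `store.compare_and_swap` are hypothetical; real stores expose
# their own flavor of this (conditional put, check-and-put, etc.).

def increment(store, key, delta=1, max_retries=10):
    for _ in range(max_retries):
        current = store.get(key)
        # Succeeds only if nobody wrote the key between our read and our write.
        if store.compare_and_swap(key, expected=current, new=current + delta):
            return current + delta
    raise RuntimeError("too much contention on key %r" % key)
```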

Unfortunately, in NoSQL data stores, this is not a mandatory feature. There are several ways to get around it. First, with flexible schema support, it is a common practice to aggressively create new columns on the fly and avoid overwriting old data. This works well if new writes are infrequent, but if you constantly write new data (e.g., increment the counter every second), you will end up with lots of garbage data that needs to be cleaned up later. Second, you can avoid the problem by making sure that only one agent updates the data; this gets harder to manage when you have many agents.

If you cannot use either workaround in your application, you need to look for a data store that supports atomic test-and-set. Amazon's SimpleDB, Yahoo PNUTS, Google BigTable, and MongoDB all support some flavor of test-and-set. Unfortunately, other popular data stores, such as Cassandra, do not.

Secondary index

There is no join capability in any of the NoSQL data stores. In order to support richer data relationships and faster lookup and retrieval of certain data items, you may need secondary index support. MongoDB supports secondary indexes, and both HBase and Cassandra have some early-stage support for them. Although not a secondary index, Riak supports links, which can link one item to another, so that you can build richer relationships.

Manageability

Each data store has its own tools to help you automate management, but its architecture dictates how much automation can be achieved. A symmetric architecture is a lot easier to manage and to reason about. Data stores such as Cassandra and Riak have only one type of node, and all nodes perform the same function. Other data stores have a master/slave architecture. Management is a little harder because you have to manage two types of nodes. If there are more than two types of nodes, it is even harder. For example, MongoDB has two types of nodes: routing nodes and data-serving nodes. But a data-serving node can be either a primary or a secondary. The primary is the only node that can take writes in a replica set, while a secondary may serve read requests if a weaker consistency model is acceptable. You have to keep track of which node is the primary and which are secondaries in order to reason about the system's behavior.

Latency vs. durability

There is a tradeoff between latency and durability. A data write can return super fast if it is only committed to memory, but memory is volatile: a crash can easily lose your data. Alternatively, you can wait for the data to be written to a local disk before returning. The latency is a little longer, but the data is more durable. Or, you can wait for the data to be written to several disks across several nodes. The latency is definitely longer, but the data is a lot more durable: even if a single hard disk or node fails, you still have your data stored somewhere else.

MongoDB favors low latency. When writing, it returns to the caller without even waiting for the data to be synced to disk. Although this behavior can be overridden by the application developer by sending a "sync" command right after the write, this workaround can really kill performance. HBase also makes a tradeoff in favor of low latency: it does not sync log updates to disk, so it can return to clients quickly. Cassandra is tunable, where a client can specify on a per-call basis whether the write should be persisted. PNUTS is at the other extreme: it always syncs log data to disk.

Read vs. write performance

There is also a tradeoff between read and write performance. When you write, you can write sequentially to the disk, which optimizes write latency, because a spinning hard disk is very good at sequential writes. The price you pay is in read performance. Since data is written in the order it arrives, rather than in index order, reading it may require scanning through several data files to find the latest copy. On the other hand, you can pay the price when writing the data, making sure the data lands in the correct place or is indexed. You pay with a slower write, but the read performance is higher because a read is a simple lookup. HBase and Cassandra both optimize for writes, whereas PNUTS is optimized for reads. Amazon SimpleDB also optimizes for reads, which is evident in its low write throughput (roughly 30 writes/second in our measurement) and high read throughput.

There is a side effect of optimizing for reads. Because some data has to be written in place (either the index or the data), there is a possibility of corruption, which may make the latter half of the file unreadable. You have to look carefully into the design to make sure there are no corner failure cases that can cause this to happen, or come up with a good backup and recovery plan.

Dynamic scaling

This is a key requirement in NoSQL data stores. You want to be able to grow and shrink your cluster size and its capacity on the fly by simply adding or removing nodes. Fortunately, most NoSQL stores we looked at support this capability, so the decision is easy.

Auto failover

If dynamic scaling is implemented robustly, auto failover comes for free, because a node failure should be indistinguishable from decommissioning a node. Unfortunately, some data stores require you to explicitly decommission a node, and a node failure, i.e., an unplanned decommissioning, could take some time to recover from.

Auto load balancing

The load a machine experiences, both in terms of the amount of storage and the amount of read/write requests, may differ widely among the machines forming the storage cluster. The load may also fluctuate wildly over time. A single overloaded node may cause great disruption to the cluster, even if other nodes are lightly loaded. HBase, MongoDB, and PNUTS all support auto load balancing, while Riak only rebalances when nodes join and leave. If the data store does not support auto load balancing, you have to make sure to load the data evenly yourself. It may involve profiling your data, and/or tuning the configuration. For example, in Cassandra, you can choose RandomPartitioner, which tends to even out the load.

Another aspect of load balancing concerns failure scenarios. If a node fails, how many other nodes take over the workload of the failing node? You want to spread the load as evenly as possible, so that you do not overload another node and trigger a domino effect. This is cited as one of the reasons Facebook chose HBase, because HBase spreads out the shards across machines.

Compression support

Storing data in a compressed format saves disk space. Because IO is often the limiting factor in today's computer systems, it is always a good idea to trade off CPU for a reduction in storage space. HBase supports compression but, unfortunately, many others, including Cassandra and MongoDB, do not (yet).

Range scan

Many applications require the ability to read out a chunk of sequential data based on a predefined order (typically the index order). It is convenient to specify a range and get all keys within that range, because you do not even need to know which keys are there to look up. In addition, a range scan performs much better than looking up each individual key (even assuming you know all the keys in the range).

BigTable stores data in lexicographic order; hence, it can easily support range scans. As a BigTable clone, HBase supports range scans. Even though it only models its data after BigTable, Cassandra also supports range scans, via its OrderPreservingPartitioner. On the other hand, key-value stores such as Riak do not support range scans.
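
The difference between a range scan and point lookups can be sketched with a sorted key space (a toy Python illustration, not a specific store's API):

```python
# Sketch: with keys kept in sorted order (as in BigTable/HBase), a range scan is a
# single contiguous read; without ordering, you must know and fetch every key yourself.
from bisect import bisect_left, bisect_right

sorted_keys = ["user:0001", "user:0002", "user:0105", "user:0742", "zebra"]

def range_scan(start, stop):
    # One contiguous slice of the sorted key space; no need to know the keys in advance.
    return sorted_keys[bisect_left(sorted_keys, start):bisect_right(sorted_keys, stop)]

print(range_scan("user:", "user:\xff"))   # all keys with the "user:" prefix
```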

Failure scenarios

What failure scenarios are you willing to tolerate? Many stores are implemented with a master/slave architecture, and if the master goes down, the failure can be quite dramatic. For example, Hypertable currently has only a single master (although there are plans to change this in the future), which is a single point of failure. Not only is there a single master, there is also only a single Chubby-equivalent lock node, so a failure there could be catastrophic. Other master/slave implementations have better plans to protect the master, and there are often ways to recover the master gracefully. Even so, the cluster could be down for an extended period while the master node recovers. Fully distributed implementations, such as Riak and Cassandra, tolerate failures much more gracefully. Because they are symmetric, a node failure typically means a degraded service rather than a total failure.

Another aspect of failure handling to look into is the failure recovery time. Beyond the master node, when a data node goes down, it can take some time to recover. For example, BigTable has a single tablet server per range; if a tablet server goes down, its tablets have to be reconstructed from the DFS, which could take some time.

I have highlighted some dimensions you need to think about when comparing the various NoSQL data stores. The list is by no means exhaustive, but hopefully it is a good one to get you started.

When to switch to NoSQL?

It is often claimed that SQL cannot scale and that, if you have a lot of data, it is better to use a NoSQL platform. But, as I am often asked, what is "a lot"? At what point should you start using NoSQL? Unfortunately, I do not think there is a clear answer, and there is a fairly wide transition zone where you could use either technology.

You can scale a DBMS (DataBase Management System) pretty far by spending more engineering effort. Oracle has been optimizing its database for many years. Its Oracle RAC product can scale in a cluster environment. It also has specialized high-performance database products, such as an in-memory database (through the acquisition of TimesTen) and a database appliance (Oracle Exadata).

Other vendors have been attacking the scaling problem with different approaches. For example, Greenplum and AsterData use a MapReduce engine to scale in a cluster environment. Vertica uses a column-oriented data store. Netezza uses hardware to scale.

The tradeoff lies in the cost to scale: the more you are willing to pay, the higher the scale you typically get. It is hard to say fundamentally what the limit of scaling a DBMS is, because it depends not only on your application (e.g., the data access pattern), but also on the DBMS's implementation. However, it is instructive to see the largest sizes people have been able to scale to.

The two largest publicly known DBMS clusters are:

  1. eBay: a Teradata configuration with 72 nodes (each node has two quad-core CPUs, 32GB RAM, and 104 300GB disks), managing a total of 2.4PB of relational data.
  2. Fox Interactive: a Greenplum configuration with 40 nodes (each node is a Sun X4500 with two dual-core CPUs, 48 500GB disks, and 16GB RAM), for a total disk space of 1PB.

As you can see, you can scale pretty far with a DBMS as long as you are willing to pay. Few applications actually have petabytes of data. But if you are a Mom&Pop shop using a free DBMS, such as MySQL, on a commodity server, you will hit the scaling limit much more quickly. That is when you need to consider a NoSQL platform. Fortunately, most NoSQL platforms are free, so you can switch over right away, although you do need to modify your application a bit :-(.