Host server CPU utilization in Amazon EC2 cloud

One potential benefit of using a public cloud, such as Amazon EC2, is that it could be more efficient. In theory, a cloud supports many users, and by aggregating a large number of demands, it can potentially achieve a much higher server utilization. But is that really the case in practice? If you ask a cloud provider, they most likely would not tell you their CPU utilization. But this is really good information to know. Besides settling the argument of whether the cloud is more efficient, it is very interesting from a research angle because it shows how much room we have to improve server utilization.

To answer this question, we came up with a new technique that allows us to measure the CPU utilization in public clouds, such as Amazon EC2. The idea is that if a CPU is highly utilized, the chip will get hot over time, and when the CPU is idle, it will be put into sleep mode more often, and thus the chip will cool off. Obviously, we cannot just stick a thermometer into a cloud server, but luckily, most modern Intel and AMD CPUs are already equipped with on-board thermal sensors. Generally, there is one thermal sensor per core (e.g., 4 sensors for a quad-core CPU), which gives us a pretty good picture of the chip temperature. In a couple of cloud providers, including Amazon EC2, we were able to successfully read these temperature sensors. To monitor CPU utilization, we launch a number of small probing virtual machines (also called instances in Amazon's terminology) and continuously monitor the temperature changes. Because of multi-tenancy, other virtual machines run on the same physical host; when those virtual machines use the CPU, we observe temperature changes. Essentially, the probing virtual machine monitors all other virtual machines sitting on the same physical host. Of course, deducing CPU utilization from CPU temperature is non-trivial, but I won't bore you with the technical details here. Instead, I refer interested readers to the research paper.
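To give a flavor of the idea (the paper has the real model), here is a minimal sketch in Python: a least-squares slope over a window of temperature samples classifies the chip as warming (CPU busy) or cooling (CPU idle). The sensor path, window size, and noise threshold below are assumptions for illustration, not the values we actually used.

```python
import statistics

def temperature_trend(samples_c, eps=0.05):
    """Classify a window of chip-temperature samples (deg C) as
    'warming' (CPU busy), 'cooling' (CPU idling), or 'steady'.
    Uses the slope of a least-squares line through the samples;
    eps is an assumed noise threshold in deg C per sample."""
    n = len(samples_c)
    mean_x = (n - 1) / 2
    mean_y = statistics.fmean(samples_c)
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in enumerate(samples_c))
    var = sum((x - mean_x) ** 2 for x in range(n))
    slope = cov / var
    if slope > eps:
        return "warming"
    if slope < -eps:
        return "cooling"
    return "steady"

# On a Linux guest, the coretemp driver exposes per-core sensors,
# e.g. /sys/class/hwmon/hwmon0/temp1_input (millidegrees C); the
# exact path varies by kernel and platform, so treat it as an
# assumption:
def read_core_temp(path="/sys/class/hwmon/hwmon0/temp1_input"):
    with open(path) as f:
        return int(f.read().strip()) / 1000.0
```

Sampling `read_core_temp` periodically and feeding the window into `temperature_trend` gives a crude busy/idle signal; the paper's model does considerably more to turn that into a utilization estimate.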

We carried out this measurement in Amazon EC2 using 30 probing instances (each running on a separate physical host) for a whole week. Overall, the average CPU utilization is not as high as many have imagined: among the servers we measured, the average CPU utilization in EC2 over the whole week is 7.3%. This is certainly lower than what an internal data center can achieve. In one of the virtualized internal data centers we looked at, the average utilization is 26%, more than 3 times higher than what we observed in EC2.

Why is CPU utilization not higher? I believe it results from a key limitation of EC2: EC2 caps the CPU allocation for any instance. Even if the underlying host has spare CPU capacity, EC2 will not allocate additional cycles to your instance. This is rational and necessary, because as a public cloud provider you must guarantee as much isolation as possible, so that one greedy user cannot make another well-behaved user's life miserable. However, the downside of this limitation is that it is very difficult to increase the physical host's CPU utilization. For the utilization to be high, all instances running on the same physical host have to use the CPU at the same time, which is often not the case. We have first-hand experience running a production web application in Amazon. We knew we needed the capacity at peak time, so we provisioned an m1.xlarge server. But we also knew we could not use the allocated CPU 100% of the time. Unfortunately, we had no way of giving up the extra CPU so that other instances could use it. As a result, I am sure the underlying physical host was very underutilized.

One may argue that an instance's owner should turn off the instance when s/he is not using it to free up resources, but in reality, because an instance is so cheap, people never turn it off. The following figure shows a physical host that we measured. The physical host gets busy consistently shortly before 7am UTC (11pm PST) on Sunday through Thursday, and it stays busy for roughly 7 hours. The regularity has to come from the same instance, and given that the chance of landing a new instance on the same physical host is fairly low, you can be sure that the instance was on the whole time, even when it was not using the CPU. Our own experience with Steptacular, the production web application, also confirms this. We do not turn it off during off-peak hours because there is so much state stored on the instance that it is a big hassle to shut it down and bring it back up.


CPU utilization on one of the measured servers


Compared to other cloud providers, Amazon enjoys the advantage of having many customers; thus, it is in the best position to achieve a higher CPU utilization. The following figure shows the busiest physical host that we profiled. A couple of instances on this physical host are probably running batch jobs, and they are very CPU hungry. On Monday, two or three of these instances got busy at the same time, and as a result the CPU utilization jumped really high. However, the overlapping period is only a few hours during the week, and the average utilization comes out to be only 16.9%. It is worth noting that even this busiest host still has a lower CPU utilization than the average we observed in an internal data center.

CPU utilization of a busy EC2 server
You may walk away disappointed to learn that the public cloud does not have an efficiency advantage. But from a research standpoint, this is actually great news. It points out that there is significant room for improvement, and research in this direction can have a big impact on a cloud provider's bottom line.

Launch a new site in 3.5 weeks with Amazon

Getting started quickly is one of the reasons people adopt the cloud, and it is why Amazon Web Services (AWS) is so popular. But people often overlook the fact that the retail part of Amazon is also amazing. If your project involves a supply chain, you can also leverage Amazon retail to get up and running quickly.

We recently launched a wellness pilot project at Accenture where we leveraged both Amazon retail and Amazon Web Services. The Steptacular pilot is designed to encourage Accenture US employees to lead a healthy lifestyle. We all make new year's resolutions, but we procrastinate, and we never exercise as much as we should. Why? Because there is a lack of motivation and engagement. The Steptacular pilot uses a pedometer to track a participant's physical activity, then leverages concepts from gamification, using social incentives (peer pressure) and monetary incentives to constantly engage participants. I will talk about the pilot and its results in detail in a future post, but in this post, let me share how we were able to launch within 3.5 weeks, the key capabilities we leveraged from Amazon, and some lessons we learned from this experience.

Supply chain side

The Steptacular pilot requires participants to carry a pedometer to track their physical activity. This is the first step toward increasing engagement: using technology to alleviate the hassle of manual (and inaccurate) entry. We quickly settled on the Omron HJ-720 model because it is low cost and has a USB connector, so we can automate the step-upload process.

We got in touch with Omron. The guys at Omron are super nice. Once they learned what we were trying to do, they immediately approved us as a reseller, which means we could buy pedometers at the wholesale price. Unfortunately, we still had to figure out how to get the devices into our participants' hands. Accenture is a distributed organization with 42 offices in the US alone. To make matters worse, many consultants work from client sites, so distributing in person was not feasible. We seriously considered three options:

  1. Ask our participants to order directly from Amazon. This is the solution we chose in the end, after connecting with the Amazon buyer in charge of the Omron pedometer and being assured that they would have no problem handling the volume. It turns out that this not only saved us a significant amount of shipping hassle, but it was also very cost effective for our participants.
  2. Be a vendor ourselves and use Amazon for the supply chain. Although I did not know about it before, I was pleasantly surprised to learn about the Fulfillment by Amazon capability. This is Amazon's cloud for the supply chain. Like a cloud, it is provided as a service: you store your merchandise in Amazon's warehouse, and they handle inventory and shipping. Also like a cloud, it is pay-per-use with no long-term commitment. Although equally good at reducing hassle for us, we found that it would not save cost. Amazon retail is so efficient and has such a small margin that we realized we could not compete, even at a 0% margin and even though we (supposedly) paid the same wholesale price.
  3. Ship and manage everything ourselves. The only way we could be cheaper is if we managed the supply chain and shipping logistics ourselves, and of course, this assumes we work for free. The amount of work is huge, however, and none of us wanted to lick envelopes for a few weeks, definitely not for free.

The pilot officially launched on Mar. 29th. Besides Amazon itself, another Amazon affiliate, J&R Music, also sells the same pedometer on Amazon's website. Within a few minutes, our participants completely drained J&R's stock. Amazon, however, remained in stock for the whole duration; within a week, they sold roughly 3,000 pedometers. I am sure J&R is still mystified by the sudden surge in demand. If you are from J&R, my apologies for not giving adequate warning, and kudos to you for not overcommitting your stock like many TouchPad vendors did recently (I am one of those burned by OnSale).

In addition to managing device distribution, we also had to work out how to subsidize our participants. Our sponsors agreed to subsidize each pedometer by $10 to ease adoption, but we could not just write each participant a $10 check; that would be too much work. Again, Amazon came to the rescue, with two options. One was for Amazon to generate a batch of one-time-use $10 discount codes tied specifically to the pedometer product, then bill us for however many were redeemed. The other was for us to buy $10 gift cards in bulk and distribute them to our participants electronically. We ultimately chose the gift-card option for its flexibility, and also because a gift card is not considered a discount, so the device would still cost more than $25 and our participants would qualify for Super Saver Shipping. Looking back, I do regret choosing the gift-card option, because managing squatters turned out to be a big hassle. That is not Amazon's fault, though; it is just human nature.

Technology platform side

It is a no-brainer to use Amazon to leverage its scaling capabilities, especially for a short-term, quick project like ours. One key thing we learned from this experience is that you should use only what you need. Amazon Web Services offers a wide range of services, all designed for scale, so it is likely that you will find a service that serves your need.

Take, for example, the email service Amazon provides. Initially, we used Gmail to send signup confirmations and email notifications. During the initial scaling trial, we quickly hit Gmail's limit on how fast we could send emails. Once we realized the problem, we switched to Amazon SES (Simple Email Service). There is an initial cap on how many emails you can send, but it only took a couple of emails for us to get the limit lifted. With a couple of hours of coding and testing, we could suddenly send thousands of emails at once.
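For readers who want to try SES today, here is a minimal sketch using the modern boto3 SDK (our 2011 code used an earlier API). The addresses and region are placeholders, and SES requires the sender (and, in sandbox mode, the recipient) to be verified.

```python
def build_ses_request(sender, recipient, subject, body):
    """Build the keyword arguments for SES send_email.
    All addresses here are illustrative placeholders."""
    return {
        "Source": sender,
        "Destination": {"ToAddresses": [recipient]},
        "Message": {
            "Subject": {"Data": subject},
            "Body": {"Text": {"Data": body}},
        },
    }

def send_via_ses(request):
    """Actually send the email; requires AWS credentials configured
    locally and a verified sender address."""
    import boto3
    ses = boto3.client("ses", region_name="us-east-1")
    return ses.send_email(**request)

req = build_ses_request("noreply@example.com", "user@example.com",
                        "Steptacular signup", "Welcome aboard!")
# send_via_ses(req)  # uncomment once credentials are set up
```

Separating request construction from the network call makes the notification logic easy to test without touching AWS.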

In addition to SES, we also leveraged AWS CloudWatch to closely monitor the system and be alerted of failures. Best of all, it all comes for free, without any development effort on our side.

Even though Amazon Web Services offers a large array of services, you should choose only what you absolutely need; in other words, do not over-engineer. Take auto-scaling as an example. If you host a website in Amazon, it is natural to think about putting in an auto-scaling solution, just in case, to handle the unexpected. Amazon has its own auto-scaling solution, and we at Accenture Labs have even developed an auto-scaling solution called WebScalar in the past. If you are Netflix, auto-scaling makes absolute sense, because your traffic is huge and fluctuates widely. But if you are smaller, you may not need to scale beyond a single instance, and if you do not need it, auto-scaling is extra complexity you do not want to deal with, especially when you want to launch quickly. We estimated that we would have around 4,000 participants, and a quick profiling exercise suggested that a standard extra-large instance in Amazon would be adequate to handle the load. Sure enough, even though the website slowed down for a short period during launch, it remained adequate for the whole duration of the pilot.

We also learned a lesson on fault tolerance: really think through your backup solution. Steptacular survived two large-scale failures in the US East data center. We enjoyed peace of mind partly because we were lucky, and partly because we had a plan. Steptacular uses an instance-store instance (instead of an EBS-backed instance). We made the choice mainly for performance reasons: we wanted to free up network bandwidth and leverage the local hard disk's bandwidth. This turned out to save us from the first failure, in April, which was caused by EBS block failures. Since we cannot count on EBS for persistence, we built our own solution. Most static content on the instance is bundled into an Amazon Machine Image (AMI). Two pieces of less static content (content that changes often) are stored on the instance: the website logic and the steps database. The website logic is kept in a Subversion repository, and the database is synced to another database running outside of the US East data center. This architecture allows us to get back up and running quickly: first launch our AMI, then check out the website code from the repository, and lastly dump and reload the database from the mirror. Even though we never had to initiate this backup procedure, it is good to have the peace of mind of knowing your data is safe.

Thanks to Amazon, both its retail arm and its web services, we were able to pull off the pilot in 3.5 weeks. More importantly, the pilot itself has collected some interesting results on how to motivate people to exercise more. But I will leave that to a future post, after we have had a chance to dig deep into the data.


Launching Steptacular in 3.5 weeks would not have been possible without the help of many people. We would like to especially thank the following folks:

  • Jim Li from Omron for providing hardware, software, and logistics support
  • Jeff Barr from Amazon for connecting us with the right folks at Amazon retail
  • James Hamilton from Amazon for increasing our email limit on the spot
  • Charles Allen from Amazon for getting us the gift codes quickly
  • Tiffany Morley and Helen Shen from Amazon for managing the inventory so that the pedometer miraculously stayed in stock despite the huge demand

Last but not least, big kudos to the Steptacular team, which includes several Stanford students who worked hard even through finals week to get the pilot up and running. They are one of the best teams I have ever had the pleasure to work with.

How to run MapReduce in Amazon EC2 spot market

If you often run large-scale MapReduce/Hadoop jobs in Amazon EC2, you must have thought about using the spot market. EC2's spot price is typically 60+% lower than the on-demand price. For a large job, where you use many instances for many hours, a 60+% saving can be substantial.

Unfortunately, using the spot market has not been trivial. In exchange for the lower price, you explicitly agree that Amazon can terminate your instances at any time. This is a problem, since you may lose all your work. A research paper from HotCloud last year showed that even adding spot instances (without replacing existing nodes) can be detrimental to a running MapReduce job: you add more resources to your cluster, yet your running time could actually get longer.

Beyond lengthening your computation, the spot market could even make you lose your data. Existing MapReduce implementations, such as Google's internal implementation and Hadoop, are already designed with failure in mind. However, the assumed scenario is hardware failure, i.e., a small fraction of nodes may go down at any time. This assumption does not hold in the spot market, where all nodes of a cluster may fail at the same time. You can not only lose all your state (when the master node goes down) but also lose all your data (when all the nodes holding replicas of a piece of data go down).

What about bidding a really high price for your spot instances and hoping that Amazon never raises the price that high? Unfortunately, there is no guarantee on how high the spot price can go. On several occasions last year, the spot price actually exceeded the on-demand price. This is likely because some users were bidding at a higher-than-on-demand price, and Amazon really needed to kill those instances to free up capacity.

While the naive approach of bidding at a high price may not work, I am happy to report that there is a new technique that can help you leverage the spot market to save money. We recently developed a MapReduce implementation that can tolerate large-scale node failures (e.g., when your bid price falls below Amazon's spot price). Even if all nodes in your cluster are terminated, we guarantee that no state is lost and that you can continue to make forward progress when your cluster comes back online (e.g., when your bid price rises above Amazon's spot price).

Our implementation leverages two key observations. First, when Amazon terminates your instance, it is not a hard power-off. Instead, it is a soft OS shutdown, and you have a couple of minutes to execute your shutdown script. We modified the shutdown script to save the current progress and generate a new task for the remaining work, so that another node can take over in the future. In other words, we use on-demand checkpointing to save state only when needed.
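The idea can be sketched as a toy worker, with illustrative names only (the actual implementation is described in the paper): a SIGTERM handler flips a flag, and the processing loop checkpoints its position and emits a task for the remaining work before exiting.

```python
import json
import signal

class CheckpointingWorker:
    """Toy sketch of on-demand checkpointing. On SIGTERM (what a soft
    shutdown delivers), save progress and describe the remaining work
    so another node can pick it up. The file-based 'task queue' is a
    stand-in for a real queue service."""

    def __init__(self, records, checkpoint_path):
        self.records = records
        self.pos = 0
        self.checkpoint_path = checkpoint_path
        self.stopping = False
        signal.signal(signal.SIGTERM, self._on_term)

    def _on_term(self, signum, frame):
        # Handlers should do as little as possible; just set a flag.
        self.stopping = True

    def run(self):
        for i in range(self.pos, len(self.records)):
            if self.stopping:
                # Save a task for the remaining range so another
                # node can take over later.
                with open(self.checkpoint_path, "w") as f:
                    json.dump({"resume_from": i,
                               "remaining": len(self.records) - i}, f)
                return "checkpointed"
            self.pos = i + 1  # "process" record i here
        return "done"
```

The checkpoint is written only when termination actually happens, which is the point of doing it on demand rather than periodically.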

Second, we constantly stream intermediate data off the node in order to minimize the volume of state we have to save during shutdown. Our solution is built on Cloud MapReduce, which continuously streams intermediate data out of the local node. In comparison, other MapReduce implementations, such as Hadoop, save all intermediate data locally until a task finishes, which could leave too large a dataset to save during the short shutdown window.
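The contrast can be sketched as follows. Here a local queue.Queue stands in for the SQS queues that Cloud MapReduce actually uses, and the word-count mapper is a made-up example:

```python
import queue

def streaming_map(records, map_fn, reduce_queues):
    """Streaming approach: push each intermediate (key, value) pair to
    its reduce queue as soon as it is produced, so almost nothing is
    left to flush during a shutdown window. (Hadoop, by contrast,
    buffers intermediate data on local disk until the task ends.)"""
    for record in records:
        for key, value in map_fn(record):
            reduce_queues[hash(key) % len(reduce_queues)].put((key, value))

# Hypothetical word-count mapper used for illustration:
def word_count_map(line):
    for word in line.split():
        yield word, 1
```

Because each pair leaves the node immediately, a terminated map task only needs to record how far it got, not ship a large local spill file.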

I will not belabor the details of our implementation, except to mention that it was published last week at USENIX HotCloud. You can read the Spot Cloud MapReduce paper for the full details.

Terremark announces new lower price

Competition is a good thing. A continuous drop in hardware prices is a good thing. Improvement in efficiency is also a good thing. All of these should translate into a lower computing cost for you, the end user, over time. That is exactly what Terremark has done today: it lowered its cloud offering's prices, in one case by up to 42%. We compared Terremark's price with EC2's before, so it is time to update the comparison based on the new pricing.

Following what we did to derive EC2's unit costs, we can run a regression analysis to estimate Terremark's unit costs. We consider only Linux VMs, in order not to factor in software license costs, and we assume Cost = c * CPU + m * RAM (Terremark charges storage separately from the VM cost, at $0.25/GB/month). The regression determines the unit costs to be

c = 1.31 cents/VPU/hour
m = 5.38 cents/GB/hour

Not surprisingly, both unit costs are lower than before (c used to be 2.06 cents/VPU/hour, and m used to be 6.46 cents/GB/hour).

The regression still does not fit the real cost very well. Terremark's cost model offers economy of scale: it heavily discounts both CPU and RAM as you move up in configuration. The following table shows both the new, reduced price (in green) and the cost predicted by the estimated parameters (in red) for the various VM configurations.

memory (GB)\CPU 1 VPU 2 VPU 4 VPU 8 VPU
0.5 3.5 / 4 4 / 5.31 4.5 / 7.93 4.9 / 13.17
1 6 / 6.69 7 / 8 8 / 10.62 10 / 15.86
1.5 9 / 9.38 10.5 / 10.69 12 / 13.31 13.5 / 18.55
2 12 / 12.07 14.1 / 13.38 16.1 / 16 20 / 21.24
4 21.7 / 22.82 27.1 / 24.13 30.1 / 26.75 35.9 / 31.99
8 40.1 / 44.33 48.2 / 45.64 56.7 / 48.26 63.4 / 53.5
12 60.2 / 65.85 68.6 / 67.16 76.2 / 69.78 82.4 / 75.02
16 80.3 / 87.36 84.4 / 88.67 89.9 / 91.29 93.2 / 96.53
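The regression can be reproduced from the green (published) prices in the table above with an ordinary least-squares fit:

```python
import numpy as np

# Terremark's published hourly prices (cents), rows indexed by RAM
# and columns by VPU count, copied from the table above.
vpus = [1, 2, 4, 8]
ram = [0.5, 1, 1.5, 2, 4, 8, 12, 16]
price = [
    [3.5, 4.0, 4.5, 4.9],
    [6.0, 7.0, 8.0, 10.0],
    [9.0, 10.5, 12.0, 13.5],
    [12.0, 14.1, 16.1, 20.0],
    [21.7, 27.1, 30.1, 35.9],
    [40.1, 48.2, 56.7, 63.4],
    [60.2, 68.6, 76.2, 82.4],
    [80.3, 84.4, 89.9, 93.2],
]

# Least-squares fit of Cost = c * VPU + m * RAM (no intercept).
A = np.array([[v, r] for r in ram for v in vpus], dtype=float)
y = np.array([p for row in price for p in row])
(c, m), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"c = {c:.2f} cents/VPU/hour, m = {m:.2f} cents/GB/hour")
# -> c = 1.31 cents/VPU/hour, m = 5.38 cents/GB/hour
```

The large residuals on the big configurations are exactly the volume discounts discussed above; a linear model cannot capture them.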

As before, we compare costs by pricing a fictitious EC2 instance with the exact same spec. For simplicity, we assume a VPU in a Terremark VM gets the full attention of a physical core. This is the common case, because Terremark uses VMware's DRS (Distributed Resource Scheduler), which can dynamically reassign a virtual core to a different physical core to avoid contention.

The following table shows the EC2 equivalent cost assuming a virtual core can get the full power of the physical core.

memory (GB) VPU Terremark price (cents/hour) Equivalent EC2 cost (cents/hour) Terremark cost/EC2 cost
0.5 1 3.5 4.09 0.86
0.5 2 4 7.17 0.56
0.5 4 4.5 13.33 0.34
0.5 8 4.9 25.65 0.19
1 1 6 5.09 1.18
1 2 7 8.17 0.86
1 4 8 14.33 0.56
1 8 10 26.66 0.38
1.5 1 9 6.1 1.48
1.5 2 10.5 9.18 1.14
1.5 4 12 15.34 0.78
1.5 8 13.5 27.66 0.49
2 1 12 7.1 1.69
2 2 14.1 10.18 1.38
2 4 16.1 16.35 0.98
2 8 20 28.67 0.7
4 1 21.7 11.12 1.95
4 2 27.1 14.21 1.91
4 4 30.1 20.37 1.48
4 8 35.9 32.69 1.1
8 1 40.1 19.17 2.09
8 2 48.2 22.25 2.17
8 4 56.7 28.41 2.0
8 8 63.4 40.73 1.56
12 1 60.2 27.21 2.21
12 2 68.6 30.29 2.26
12 4 76.2 36.45 2.09
12 8 82.4 48.78 1.69
16 1 80.3 35.25 2.28
16 2 84.4 38.33 2.20
16 4 89.9 44.5 2.02
16 8 93.2 56.82 1.64

As we observed before, there are several configurations where Terremark is much cheaper than EC2. The 8 VPU + 0.5 GB configuration is still the cheapest, at 19% of the equivalent EC2 cost. What is different from before is that the larger VM configurations have become significantly cheaper. For example, the 16 GB + 8 VPU configuration now costs only 1.64 times its EC2 equivalent, compared with a ratio of 2.83 before. This means it is becoming more economical to run larger VMs in Terremark.
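For readers who want to reproduce the ratio column, the equivalent-EC2 numbers in the table are consistent with per-unit costs of roughly 3.08 cents per VPU-hour (a full core) and 2.01 cents per GB-hour of RAM. Note that these two constants are values we inferred by fitting the table itself, not prices Amazon publishes.

```python
# Approximate EC2 unit costs inferred from the equivalent-cost column
# above (cents/hour); illustrative, not Amazon's published prices.
C_PER_VPU = 3.08
M_PER_GB = 2.01

def ec2_equivalent(vpus, ram_gb):
    """Hourly cost (cents) of a hypothetical EC2 instance with the
    same spec as a Terremark VM (storage is excluded, since Terremark
    bills it separately)."""
    return C_PER_VPU * vpus + M_PER_GB * ram_gb

def cost_ratio(terremark_cents, vpus, ram_gb):
    return terremark_cents / ec2_equivalent(vpus, ram_gb)

# The cheapest configuration in the table, 0.5 GB + 8 VPU:
print(round(cost_ratio(4.9, 8, 0.5), 2))  # -> 0.19
```

Any row of the table can be checked the same way by plugging in its Terremark price, VPU count, and RAM.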

Let us hope the trend continues and cloud providers keep reducing the cost of computing, so that we can pay less for the same work or get more work done for the same budget.

EC2 spot pricing coming to Cluster Compute and Cluster GPU instances

AWS EC2 introduced the Cluster Compute and Cluster GPU instances a few months back. These instances are very good for high-throughput HPC applications; unfortunately, they have not been available in the spot market. I think that is about to change: the Cluster Compute and Cluster GPU instances should be available through the spot market shortly.

Over the weekend (Feb. 6, 2011), I noticed that both cc1.4xlarge and cg1.4xlarge instances showed up on the AWS console when querying the price history, but that went away the next day; I am guessing they were testing the feature. Starting today (Feb. 8, 2011), you can query the AWS API or use the AWS command-line tool to see the price history for cc1.4xlarge instances. Furthermore, you can already start bidding for cc1.4xlarge (this was not possible over the weekend because the "current price" was not set for those instances).

Although there is no official announcement yet, I think it is just a matter of time. Have fun crunching through your HPC applications for cheap.



Comparing cloud providers on VM cost

How do you compare two IaaS clouds? Is Amazon EC2's small standard instance (1 ECU, 1.7GB RAM, 160GB storage) cheaper, or is Rackspace Cloud's 256MB server (4 cores, 256MB RAM, 10GB storage) cheaper? It is obviously simpler if you focus on only one metric. For example, suppose your application is CPU bound and does not require much memory at all. Then you should focus solely on the CPU power a cloud VM gives you. We have translated GoGrid, Rackspace, and Terremark's VM configurations into their equivalent ECUs, so you can simply take the ratio between the cost and the ECU rating and pick the lowest. Unfortunately, real-life applications are never that simple. They demand CPU cycles, memory, and hard disk capacity all at once. So, how do you compare apples to apples?

The methodology

Since no methodology exists yet, we propose one. Because the comparison results depend heavily on the methodology chosen, we first spell out the methodology we use, so that if you follow a different one and come up with a different result, you can trace the source of the difference. If you see areas where we can improve the methodology, please do leave a comment. The methodology works as follows:

  1. We first break down the cost components of Amazon EC2. We assume Amazon has priced its instances using a linear model, i.e., Cost = c * CPU + m * Mem + s * Storage, where c is the unit cost per ECU per hour, m is the unit cost of memory per GB per hour, and s is the unit cost of storage per GB per hour. Amazon offers several instance types, each with a different combination of CPU, memory, and storage, which is enough of a hint for us to estimate c, m, and s by regression analysis. The details are in our ECU cost-breakdown analysis.
  2. Once we have the unit costs in EC2, we can compare it with another cloud provider. We take one VM configuration from a cloud provider at a time, then compute what Amazon would charge for an instance with the exact same specification if EC2 were to offer one. This is easily done by multiplying the EC2 unit costs (c, m, and s) by the amounts of CPU, RAM, and storage in the VM and adding them up. Of course, this is hypothetical: EC2 does not offer an instance with the exact same spec, so even if the EC2 price is lower, you cannot simply buy a substitute from Amazon. However, it gives us a good sense of the relative cost.
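The two steps can be sketched end to end in Python. The instance specs below mirror EC2's standard instance shapes, but the prices, and therefore the recovered unit costs, are synthetic, generated from made-up unit costs purely to show that the regression recovers them; the competitor VM at the end is likewise illustrative.

```python
import numpy as np

# Hypothetical instance catalog: (ECU, RAM GB, storage GB). Prices are
# synthetic, generated from made-up unit costs for illustration only.
true_c, true_m, true_s = 1.4, 2.0, 0.03  # cents per ECU / GB RAM / GB disk, hourly
catalog = [(1, 1.7, 160), (4, 7.5, 850), (8, 15, 1690), (5, 1.7, 350)]
prices = [true_c * e + true_m * r + true_s * d for e, r, d in catalog]

# Step 1: recover the unit costs by linear regression (no intercept).
A = np.array(catalog, dtype=float)
(c, m, s), *_ = np.linalg.lstsq(A, np.array(prices), rcond=None)

# Step 2: price a competitor's VM as if EC2 offered the same spec.
def ec2_equivalent(ecu, ram_gb, storage_gb):
    return c * ecu + m * ram_gb + s * storage_gb

# E.g., a 4-core, 256MB, 10GB VM priced at 10 cents/hour, assuming
# each core is worth 2.25 ECU (all numbers illustrative):
example_ratio = 10.0 / ec2_equivalent(2.25 * 4, 0.25, 10)
```

A ratio below 1 means the competitor's VM is cheaper than its hypothetical EC2 equivalent; that ratio is what the table below reports for real providers.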

We have done the analysis with GoGrid, Rackspace, and Terremark.

We can compute the ratio between a cloud VM's cost and that of its hypothetical EC2 equivalent. The following lists the few VMs with the lowest ratios. If you are curious about the ratio for other VM configurations, feel free to dig into the individual posts on each provider. The ratios listed assume that you get the maximum CPU allowed under bursting, which is frequently the case with these cloud providers. Further, the ratios are relative to EC2's N. Virginia data center; other EC2 data centers have a higher cost.

Provider RAM (GB) CPU (cores) storage (GB) cost ratio with an equivalent in EC2
Rackspace 0.25 4 10 0.168
Terremark 0.5 8 charged separately at $0.25/month/GB 0.19
Rackspace 0.5 4 20 0.314
Terremark 0.5 4 charged separately at $0.25/month/GB 0.338
Terremark 1 8 charged separately at $0.25/month/GB 0.375
Terremark 1.5 8 charged separately at $0.25/month/GB 0.491


How to use this data?

Due to the limitations of this methodology (comparing with a hypothetical equivalent in EC2), it only makes sense if one of the cloud providers you are comparing is Amazon EC2. In other words, do not compare Rackspace with Terremark based on these ratios.

It also makes little sense to use our results if you know the exact specification of your server. In that case, you should find the minimum VM configuration that is just barely bigger than your requirement and compare prices directly.

Our results are useful if your application is flexible. For example, instead of using one m1.small instance in EC2, you could use several Rackspace 256MB VMs and achieve dramatic cost savings. Examples of flexible applications include batch applications, such as a MapReduce job, which can be chopped into finer granularity, and web servers in a server farm, where the load balancer divides up the work to take advantage of whatever computation capacity is provisioned on each web server.

Our results are also useful if you want a high-level overview. Consider an enterprise purchaser choosing a cloud platform. There are many dimensions to consider, e.g., features, cost, SLA, contract terms. A deep analysis at the beginning would be overwhelming. Since Amazon is a big player in the cloud, it will most likely be part of the evaluation. A ratio gives a ten-thousand-foot view, so the decision maker knows whether an alternative cloud would save money; then, as the evaluation progresses, he can dig into a finer comparison.


There are many caveats to using our results that we should spell out.

  • This compares only the VM cost, including its CPU, memory, and storage. It does not include other costs, such as bandwidth transfers. Bandwidth costs vary wildly; for example, GoGrid offers free inbound traffic, which can translate into significant savings.
  • When we compare CPUs, we compare only processing power, not IO capability (disk or network). In Amazon, we sometimes observe degraded IO performance, possibly due to competing VMs on the same host; a sad side effect of using popular cloud offerings.
  • As we mentioned, this only applies to flexible applications that can take full advantage of the provisioned CPU, memory, and storage. For example, if you cannot take advantage of the provisioned RAM, it does not matter that it is a good deal; you are wasting the memory, and you may be better off with a configuration from a different provider with less RAM.
  • This is not a substitute for feature comparisons. For example, GoGrid offers free F5 hardware load balancer. If you need a hardware load balancer, you should consider that separately.

Terremark cost comparison with Amazon EC2

(Earlier posts in this series are: EC2 cost break down, GoGrid & EC2 cost comparison, Rackspace & EC2 cost comparison)

In this post, let us compare VM costs between Terremark vCloud Express and Amazon EC2. Terremark is one of the first cloud providers based on VMware technology. Unlike EC2, Rackspace, and GoGrid, which use Xen as the hypervisor, Terremark uses VMware's ESX hypervisor, which is arguably richer in functionality.

Following the methodology we have used so far, we first need to understand Terremark's hardware infrastructure and its resource allocation policy. Using the same technique as in EC2's hardware analysis, we determined that Terremark runs on a platform with two sockets of quad-core AMD Opteron 8389 processors. PassMark does not have a benchmark result for this processor, so we ran the benchmark ourselves. We used the 16GB + 8 VPU configuration, the largest offered, to minimize interference from other VMs, and we ran it multiple times late at night to make sure we were indeed measuring the underlying hardware's capacity. On average, the PassMark CPU Mark result is 7100, which is roughly 18 ECU.

Terremark uses the ESX hypervisor's default policy for scheduling CPU: virtual cores share the CPU equally, regardless of how much memory the VM has. This is different from GoGrid and Rackspace, where the CPU is shared in proportion to a VM's RAM. The scheduling policy can be verified by reading the GuestSDK API exposed by VMware Tools. From the API, we learn that a VM not only has no minimum guaranteed CPU but also no maximum burst limit. Each virtual core of a VM is assigned a CPU share of 1000, regardless of the memory allocated. Thus, the more cores a VM has, the larger its share of the CPU (e.g., 1 VPU has 1000 shares, and 8 VPUs have 8000 shares).

It is difficult to determine how many VMs could be on a physical host, which determines the minimum guaranteed CPU. We are told in Terremark's forum that each physical host has 128GB of memory, which can accommodate at least 8 VMs, each with the largest configuration of 8 VPU+16GB RAM. The VMware ESX hypervisor allows over-committing memory, so in theory, there could be many more VMs on a host. When we launched a vanilla 512MB VM, we learned from the Guest API that our VM occupied only 148MB of RAM. Clearly, there is a lot of room to over-commit, even though we see no evidence that they are doing so. Even assuming no over-commitment, there could still be many VMs competing for the CPU. In the worst case, all VMs on the host have 512MB RAM and 8 VPUs, which consumes the least memory while gaining the maximum CPU weight. A physical host can hold 256 such VMs, leaving a negligible CPU share for each. A VM with one virtual core then owns only a 1/(8*256) share of the CPU, and an 8 VPU (8 virtual cores) VM owns only a 1/256 share.
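The worst-case arithmetic above can be sketched in a few lines, assuming, as described, a 128GB host, the ESX default of 1000 shares per virtual core, and a host packed with 256 8-VPU VMs:

```python
# Worst-case CPU share per VM on a Terremark host, following the text:
# 128GB hosts, 512MB VMs with 8 VPUs each, 1000 ESX shares per virtual core.
HOST_RAM_GB = 128        # per Terremark's forum
VM_RAM_GB = 0.5          # the smallest (worst-case) configuration
SHARES_PER_VPU = 1000    # ESX default share value per virtual core

vms_per_host = int(HOST_RAM_GB / VM_RAM_GB)        # 256 VMs, no over-commit
total_shares = vms_per_host * 8 * SHARES_PER_VPU   # every VM runs 8 VPUs

def cpu_fraction(vpus):
    """Guaranteed fraction of the host CPU for a VM with `vpus` cores."""
    return vpus * SHARES_PER_VPU / total_shares

print(cpu_fraction(1))  # 1/(8*256) ≈ 0.00049
print(cpu_fraction(8))  # 1/256 ≈ 0.0039
```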

Following what we did to derive EC2's unit costs, we can run a regression analysis to estimate Terremark's unit costs. We assume Cost = c * VPU + m * RAM (Terremark charges storage separately from the VM cost, at $0.25/GB/month). The regression determines the unit costs to be

c = 2.06 cents/VPU/hour
m = 6.46 cents/GB/hour

The regression result does not fit the real cost very well. The following table shows, for each VM configuration, both the original cost and the cost predicted by the estimated parameters, in the format original / predicted (cents/hour).

memory (GB) \ CPU | 1 VPU | 2 VPU | 4 VPU | 8 VPU
0.5 | 3.5 / 5.29 | 4 / 7.36 | 4.5 / 11.48 | 5 / 19.72
1 | 6 / 8.53 | 7 / 10.6 | 8 / 14.7 | 10 / 23
1.5 | 9 / 11.8 | 10.5 / 13.8 | 12 / 17.9 | 13.6 / 26.2
2 | 12 / 15 | 14.1 / 17 | 16.1 / 21.2 | 20.1 / 29.4
4 | 24.1 / 27.9 | 28.1 / 30 | 30.1 / 34.1 | 40.2 / 42.4
8 | 40.2 / 53.8 | 48.2 / 55.8 | 60.2 / 60 | 80.3 / 68.2
12 | 60.2 / 79.6 | 72.3 / 81.7 | 90.3 / 85.8 | 120.5 / 94.1
16 | 80.3 / 105.5 | 96.4 / 107.5 | 120.5 / 111.7 | 160.6 / 112
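For interested readers, the regression can be reproduced from the published prices (the first number in each cell above) by solving the normal equations of the no-intercept model; a minimal sketch:

```python
# Reproduce the regression from the published prices (cents/hour): we fit the
# no-intercept model Cost = c*VPU + m*RAM by solving its 2x2 normal equations.
vpus = [1, 2, 4, 8]
prices = {  # RAM (GB) -> Terremark price for 1/2/4/8 VPUs, from the table
    0.5: [3.5, 4.0, 4.5, 5.0],
    1.0: [6.0, 7.0, 8.0, 10.0],
    1.5: [9.0, 10.5, 12.0, 13.6],
    2.0: [12.0, 14.1, 16.1, 20.1],
    4.0: [24.1, 28.1, 30.1, 40.2],
    8.0: [40.2, 48.2, 60.2, 80.3],
    12.0: [60.2, 72.3, 90.3, 120.5],
    16.0: [80.3, 96.4, 120.5, 160.6],
}

svv = svr = srr = svy = sry = 0.0   # sums for the normal equations
for ram, row in prices.items():
    for vpu, cost in zip(vpus, row):
        svv += vpu * vpu
        svr += vpu * ram
        srr += ram * ram
        svy += vpu * cost
        sry += ram * cost

det = svv * srr - svr * svr
c = (svy * srr - svr * sry) / det   # cents/VPU/hour
m = (svv * sry - svr * svy) / det   # cents/GB/hour
print(f"c = {c:.2f}, m = {m:.2f}")  # c = 2.06, m = 6.46
```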

The reason the regression does not work well here is that Terremark heavily discounts both CPU and RAM as you move up in configuration size, and our linear model does not capture this economy of scale. However, we can treat the linear regression as a trend line, and the trend line indicates that Terremark is likely more expensive than EC2. For example, Terremark charges 6.46 cents/GB/hour for RAM, much higher than the 2.01 cents/GB/hour at which Amazon values its RAM.

Another way to compare cost is to use EC2's unit costs to figure out what an equivalent configuration would cost in EC2. The following table shows the comparison under the worst-case assumption: you get only the minimum guaranteed CPU because all other VMs are busy and the physical host is fully packed with 8VPU+0.5GB VMs (without over-commitment). Each row shows the RAM and CPU configuration, Terremark's price, what it would cost in EC2, and the ratio between the Terremark and EC2 costs.

memory (GB) VPU Terremark price (cents/hour) Equivalent EC2 cost (cents/hour) Terremark cost/EC2 cost
0.5 1 3.5 1.02 3.44
0.5 2 4 1.03 3.89
0.5 4 4.5 1.05 4.27
0.5 8 5 1.10 4.54
1 1 6 2.02 2.97
1 2 7 2.03 3.44
1 4 8 2.06 3.89
1 8 10 2.11 4.75
1.5 1 9 3.03 2.97
1.5 2 10.5 3.04 3.45
1.5 4 12 3.06 3.92
1.5 8 13.6 3.11 4.37
2 1 12 4.03 2.98
2 2 14.1 4.05 3.49
2 4 16.1 4.07 3.96
2 8 20.1 4.12 4.88
4 1 24.1 8.06 2.99
4 2 28.1 8.07 3.48
4 4 30.1 8.09 3.72
4 8 40.2 8.14 4.94
8 1 40.2 16.1 2.5
8 2 48.2 16.11 2.99
8 4 60.2 16.13 3.73
8 8 80.3 16.18 4.96
12 1 60.2 24.14 2.49
12 2 72.3 24.15 2.99
12 4 90.3 24.18 3.73
12 8 120.5 24.23 4.97
16 1 80.3 32.18 2.49
16 2 96.4 32.2 2.99
16 4 120.5 32.22 3.74
16 8 160.6 32.27 4.98

The table shows that Terremark is 2.49 to 4.98 times as expensive as an equivalent configuration in EC2. This is mainly due to the way Terremark shares CPUs. A 0.5GB VM in Terremark shares the CPU equally with a 16GB VM; thus, in the worst case, a VM may get very little CPU. Since Terremark does not set a minimum guarantee on the CPU share in the hypervisor, we have to assume the worst case.

In reality, you are unlikely to encounter the worst case, and you are very likely to get the full attention of a physical core. The reason is not only that most VMs have more than 0.5GB of RAM (so fewer of them fit on a host), but also that Terremark uses VMware's DRS (Distributed Resource Scheduler). We have noticed that, when we drive up the load on our VMs, they are often moved (through VMotion) to a different host, presumably to avoid contention. Thus, unless the whole cluster gets really busy, your VM is unlikely to have many other busy VMs to contend with on the same host. The following table shows the EC2 equivalent cost assuming each virtual core can get the full power of a physical core.

memory (GB) VPU Terremark price (cents/hour) Equivalent EC2 cost (cents/hour) Terremark cost/EC2 cost
0.5 1 3.5 4.09 0.86
0.5 2 4 7.17 0.56
0.5 4 4.5 13.33 0.34
0.5 8 5 25.65 0.19
1 1 6 5.09 1.18
1 2 7 8.17 0.86
1 4 8 14.33 0.56
1 8 10 26.66 0.38
1.5 1 9 6.1 1.48
1.5 2 10.5 9.18 1.14
1.5 4 12 15.34 0.78
1.5 8 13.6 27.66 0.49
2 1 12 7.1 1.69
2 2 14.1 10.18 1.38
2 4 16.1 16.35 0.98
2 8 20.1 28.67 0.7
4 1 24.1 11.12 2.17
4 2 28.1 14.21 1.98
4 4 30.1 20.37 1.48
4 8 40.2 32.69 1.23
8 1 40.2 19.17 2.1
8 2 48.2 22.25 2.17
8 4 60.2 28.41 2.12
8 8 80.3 40.73 1.97
12 1 60.2 27.21 2.21
12 2 72.3 30.29 2.39
12 4 90.3 36.45 2.48
12 8 120.5 48.78 2.47
16 1 80.3 35.25 2.28
16 2 96.4 38.33 2.51
16 4 120.5 44.5 2.71
16 8 160.6 56.82 2.83
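The EC2-equivalent figures in both tables can be approximated from unit costs. The sketch below assumes an EC2 RAM cost of 2.01 cents/GB/hour (quoted above) and an EC2 CPU cost of roughly 1.37 cents/ECU/hour (an assumption chosen to be consistent with the tables; small last-digit differences remain from rounding), together with the 18 ECU host capacity measured earlier:

```python
# EC2-equivalent cost of a Terremark VM: a sketch, not the exact spreadsheet
# behind the tables. The CPU unit cost below is an assumption chosen to be
# consistent with the tables; the RAM unit cost is quoted in the text.
EC2_CPU = 1.37   # cents/ECU/hour (assumed)
EC2_RAM = 2.01   # cents/GB/hour (from the EC2 cost break down post)
HOST_ECU = 18.0  # a Terremark host benchmarks at roughly 18 ECU over 8 cores

def ec2_equivalent(ram_gb, vpus, worst_case):
    """Cents/hour for an EC2 configuration matching a Terremark VM."""
    if worst_case:
        # 256 contending 8-VPU VMs: each VPU owns 1/2048 of the host CPU.
        ecu = vpus * HOST_ECU / (8 * 256)
    else:
        # Each virtual core gets a full physical core: 18/8 = 2.25 ECU.
        ecu = vpus * HOST_ECU / 8
    return EC2_CPU * ecu + EC2_RAM * ram_gb

# The cheapest configuration relative to EC2: 0.5GB + 8VPU with full cores.
ratio = 5.0 / ec2_equivalent(0.5, 8, worst_case=False)
print(round(ratio, 2))  # ≈ 0.19, i.e., 19% of the equivalent EC2 cost
```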

There are several configurations where Terremark is much cheaper than EC2. The 8VPU+0.5GB configuration is the cheapest, at 19% of the equivalent EC2 cost, for two reasons. First, an 8 VPU VM has the most scheduling weight and can claim the full power of the physical host. Second, it has the least RAM, and as we have seen, Terremark values RAM more highly than EC2 does (m = 6.46 cents/GB/hour vs. EC2's m = 2.01 cents/GB/hour), so the less RAM a configuration has, the lower the relative cost. The cost savings go away as you add more RAM and CPU to the configuration.