Amazon data center size

(Edit 3/16/2012: I am surprised that this post has been picked up by a lot of media outlets. Given the strong interest, I want to emphasize what is measured and what is derived. The # of server racks in EC2 is what I directly observed. By assuming 64 physical servers per rack, I can derive a rough server count. But remember, this is an *assumption*. Check the comments below: some think that AWS uses 1U servers, others think that AWS is less dense. Obviously, under a different assumption, the estimated server number would be different. For example, if a credible source tells you that AWS uses 36 1U servers in each rack, the number of servers would be 255,600. An additional note: please visit my disclaimer page. This is a personal blog; it represents only my personal opinion, not my employer's.)

As with the EC2 CPU utilization rate, the size of its data centers is another piece of information Amazon will never share with you. But even a glimpse is really informative, because Amazon is clearly a leader in this space, and its growth rate is a great indicator of how well the cloud industry is doing.

Although Amazon would never tell you, I have figured out a way to probe for its size. There have been earlier guesstimates of how big the Amazon cloud is, and there are even tricks to figure out how many virtual machines are started in EC2, but this is the first time anyone has estimated the real size of Amazon EC2.

The methodology is fully documented below for those inquisitive minds. If you are one of them, read it through and feel free to point out if there are any flaws in the methodology. But for those of you who just want to know the numbers: Amazon has a pretty impressive infrastructure. The following table shows the number of server racks and physical servers each of Amazon’s data centers has, as of Mar. 12, 2012. The column on server racks is what I directly probed (see the methodology below), and the column on number of servers is derived by assuming there are 64 blade servers in each rack.

Data center               # of server racks   # of blade servers
US East (Virginia)                    5,030              321,920
US West (Oregon)                         41                2,624
US West (N. California)                 630               40,320
EU West (Ireland)                       814               52,096
AP Northeast (Japan)                    314               20,096
AP Southeast (Singapore)                246               15,744
SA East (Sao Paulo)                      25                1,600
Total                                 7,100              454,400

The first key observation is that Amazon now has close to half a million servers, which is quite impressive. The other observation is that the US east data center, being the first, is much bigger than the rest. This means it is hard to compete with Amazon on scale in the US, but in other regions the entry barrier is lower; Sao Paulo, for example, has only 25 racks of servers.

I also show the growth of Amazon's infrastructure over the past 6 months below. I only collected data for the US east data center because it is the largest and most popular one. The Y axis shows the number of server racks in the US east data center.

[Chart: EC2 US east data center growth in the number of server racks]

Besides the sheer size, the growth rate is also impressive: the US east data center has been adding roughly 110 racks of servers each month. The growth looks roughly linear, although recently it has shown signs of slowing down.

Probing methodology

Figuring out EC2's size is not trivial. Part of the reason is that EC2 provides virtual machines, and it is difficult to know how many virtual machines are active on a physical host. Thus, even if we could count the virtual machines, we still could not figure out the number of physical servers. So instead of focusing on the number of servers, our methodology probes for the number of server racks.

It may sound even harder to probe for the number of server racks. Luckily, EC2 uses a regular pattern of IP address assignment, which can be exploited to correlate IP addresses with server racks. I noticed the pattern by looking at many instances I launched over time and by running traceroutes between my instances. The pattern is as follows:

  • Each EC2 instance is assigned an internal IP address in the form of 10.x.x.x.
  • Each server rack is assigned a 10.x.x.x/22 IP address range, i.e., all virtual machines running on that server rack have the same 22-bit IP prefix.
  • A 10.x.x.x/22 IP address range has 1,024 IP addresses, but the first 256 are reserved for Dom0 virtual machines (the system management VMs in Xen); only the last 768 are used for customers' instances.
  • Within the first 256 addresses, two, at 10.x.x.2 and 10.x.x.3, are reserved for routers on the rack. These two routers are arranged in a load-balanced and fault-tolerant configuration to route traffic in and out of the rack. I verified that the uplink capacity from 10.x.x.2 and 10.x.x.3 is roughly 2 Gbps total, further suggesting that they are routers, each with a 1 Gbps uplink.

Understanding the pattern allows us to deduce how many racks there are. In particular, if we know a virtual machine's internal IP address, we know there is a rack using the enclosing /22 address range (e.g., an instance anywhere in 10.2.12.x implies a rack at 10.2.12.x/22). If we take this to the extreme, where we know the IP address of at least one virtual machine on each rack, then we can see all racks in EC2.

So how can we know the IP addresses of a large number of virtual machines? You could certainly launch a large number of virtual machines and record the internal IP addresses you get, but that would be costly. (If you are RightScale, with a large number of instances launched through your service, you could take this approach, but most of us cannot.) Another approach is to scan the whole IP address space and watch for instances responding to ping. There are several problems with this approach. First, it may be considered port scanning, which is a violation of AWS's policy. Second, not all live instances respond to ping, especially with AWS security groups blocking all ports by default. Lastly, the 10.x.x.x address space is huge, and scanning it would take a considerable amount of time.

While you may be discouraged at this point, it turns out there is another way. In addition to the internal IP address we talked about, each AWS instance also has an external IP address. Although we cannot scan the external IP addresses either (so as not to violate the port-scanning policy), we can leverage DNS translation to figure out the internal IP addresses. If you query DNS for an EC2 instance's public DNS name from inside EC2, the DNS server returns its internal IP address (if you query from outside EC2, you get the external IP instead). So all that is left is to obtain a large number of EC2 instances' public DNS names. Luckily, we can easily derive the list of public DNS names, because an EC2 instance's public DNS name is directly tied to its external IP address: an instance at external IP x.y.z.w has the public DNS name ec2-x-y-z-w.compute-1.amazonaws.com if in the US east data center (other regions use a different domain suffix). To enumerate all public DNS names, we just have to find all public IP addresses, which is easy because AWS publishes the list of public IP address ranges it uses.

Once we have determined the number of server racks, we just multiply it by the number of physical servers per rack. Unfortunately, we do not know how many physical servers are in each rack, so we have to make an assumption. I assume Amazon has dense racks: each rack has 4 10U chassis, and each chassis holds 16 blades, for a total of 64 blades/rack.
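Because the rack count is the measured quantity, the server estimate scales linearly with whatever per-rack density one assumes. Using the probed total of 7,100 racks, a few of the densities discussed in this post and the comments give:

```python
observed_racks = 7_100  # total racks probed across all regions (table above)

# Three density assumptions: sparse ~2U racks, 36 x 1U racks,
# and the dense 64-blade racks assumed in this post.
for servers_per_rack in (20, 36, 64):
    print(f"{servers_per_rack} servers/rack -> "
          f"{observed_racks * servers_per_rack:,} servers")
```

At 36 servers/rack this gives 255,600 servers, and at 64 servers/rack the 454,400 in the table.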

Let us recap how we can find all server racks.

  • Enumerate all public IP addresses EC2 uses.
  • Translate each public IP address to its public DNS name (of the form ec2-x-y-z-w…).
  • Run a DNS query inside EC2 to get the instance's internal IP address (a 10.x.x.x address).
  • Derive the rack's IP range from the internal IP address (e.g., 10.2.12.x/22).
  • Count how many unique racks we have seen, then multiply by the number of physical servers in a rack (I assume 64 servers/rack).


Even though my methodology provides insights that were never possible before, it has shortcomings that could lead to inaccurate results. The limitations are:

  • The methodology requires an active instance on a rack for the rack to be observed. If the rack has no instances running on it, we cannot count it.
  • We cannot know how many physical servers are in a rack. I assume Amazon has dense racks, each rack has 4 10U chassis, and each chassis holds 16 blades.
  • My methodology cannot tell whether the racks I observe are used for EC2 only. It is possible that other AWS services (such as S3, SQS, and SimpleDB) run on virtual servers on the same set of racks. It is also possible that they run on dedicated racks, in which case AWS is bigger than what I can observe. So what I am observing is only a lower bound on the size of AWS.

97 Responses to Amazon data center size

  1. Cloud Guy says:

    Thought of another caveat. VPC instances are not issued public IP addresses. So this is certainly an under-estimate for even just the servers running the EC2 service!

  3. Dave says:

    “A 10.x.x.x/10 IP address range has 1024 IP addresses” – I think you meant 10.x.x.x/22? A /10 means subnet mask, which is over 4 million IP addresses…think you mean /22 which is mask = 1024 IPs.

  4. This does seem very high. AWS has about 1m publicly routable IP addresses, which would imply they run 2 VMs per server with these numbers. So either their VM/server ratio is incredibly low, or there’s an error. A couple of thoughts:

    1) Assuming that there’s one 26-bit IP address range per rack; and
    2) Assuming 64 servers / rack.

    An error in either one of these makes a big difference in numbers.

    • huanliu says:

      For the record, AWS has 1,589,248 public routable IP addresses. I assume most of them are reserved for EC2, as S3, SimpleDB etc. only need a few publicly routable IP addresses. Our data shows a lot of racks have between 30 and 80 instances, then a lot with fewer than 10 instances. We need to analyze the data more, maybe for a future post when I have some spare time.

      • Great article, Huan Liu. Some clarifications and comments for you.

        AWS does NOT use blade servers. They target the use of 2U servers due to the price/performance sweet spot and the efficiency of cooling with larger fans. 1U servers are too small and 3U are too big. With 2U servers they will see roughly 16-20 servers per rack (8-10 KVA power load).

        Modern datacenters don’t have density problems. At 64 servers per rack, you are looking at 32KVA (or more) per rack, which is extremely difficult to manage even in the most cutting edge datacenter installations.

        For most modern datacenters, even the largest, power is the primary constraint, cooling is the secondary constraint and space is far down the list of issues. So we know that AWS is not doing 64 blades per rack. BTW, this is confirmed with Chris Brown, the original lead developer of EC2 who is on our board of advisors. It’s also been largely implied or outright stated by James Hamilton of AWS in his many presentations.

        Not only this, but we know that AWS uses direct-attached-storage (DAS) for all ephemeral storage and blade servers simply don’t have enough disk drives to support AWS instance sizes as stated. This is another reason why 1U servers are unlikely, BTW.

        I think 16 x 2U servers is a conservative guess, chopping your numbers for VMs by 75%. That brings it to 113,600 servers, which is still a good number, although likely low as it doesn’t account for VPC-based servers which are probably one of the faster growing segments of AWS. My personal estimate is that they are around 150-160K servers by end of 2011.

        A few other items that will be helpful to you, I hope. First, EC2 rack capacity is not shared with other services. Second, another piece of data can be derived for those who are particularly earnest by pinging a VM during a reboot. While the VM is unavailable you will receive ICMP host unreachable messages from the dom0 instance which include a reverse DNS entry for that hypervisor node. Third, AWS EC2 evenly subdivides physical host resources based on instance type. Or, at least, has historically. This is probably changing a bit now with the newest Westmere/Romley systems being so dense. This means that most of the physical hypervisor nodes are hosting 1, 2, 4, 8, or 16 VMs. Averaged out, they are most likely seeing ~8VMs per physical node. At ~16 2U servers per rack, that means 128 VMs per rack on average. Most racks probably run at 80% filled capacity or roughly 90 VMs, which jibes with your observed number of instances per rack. HPC racks will obviously be well below this number and racks for smaller instance sizes may be well above.

        Looking at your methodology, I think the rack count is probably pretty accurate, although our information is that VPC has been run on completely different racks/systems originally, so it’s almost certainly lower than it should be. It would be hard to say how much, but at a guess, it’s another 20-30% on top of the number you already have, although that’s an educated guess.

        Again, I’ll stick by my number of 150-160K physical servers. It’s a guesstimate, but it’s a pretty good one.

        Thanks for the analysis.


      • huanliu says:


        Thanks for the detailed comments. I do not claim to know how many servers AWS cranks into each rack. At one time I was offered a tour of the AWS Virginia data center, but I passed on the offer because I knew whatever I saw would be covered by NDA, which would make it harder for me to make assumptions, unless I deliberately made wrong ones. The contribution of my post is the number of server racks; along with yours and others' comments on the rack structure, I think readers can get a better view of AWS's infrastructure.

        That said, I want to elaborate on why I picked the 64 servers/rack assumption.

        – I analyzed AWS's physical hardware before. The standard instances, which I assume are the most popular, run on commodity hardware with only one 4-core CPU. Since an m1.xlarge instance takes up almost the whole physical server's capacity, I would estimate the physical server to have less than 20GB memory and less than 2TB disk. Even the high-memory and high-CPU instances are not that impressive; they both have dual-socket 4-core CPUs. Since the largest instance takes up the whole physical server capacity, I expect the disk to be less than 2TB. I do not know where the 'these servers are all running at least 12 x 3.5″ disk drives' information comes from, but it is not supported by my hardware analysis. The server spec can be comfortably met by HP blade servers, and if HP is able to crank 64 or 128 servers into a rack, I assume the power and cooling issues are solved by the vendor.

        – The Virginia data center added 700 racks in the 6 months I have been tracking. Assuming that the last few years had more growth, it is conceivable that the Virginia data center had fewer than 1000 racks back in 2009. If your number of 40,000 servers back then is credible, then AWS does have pretty dense racks.

        – In our data, 550 racks in Virginia have 250 or more virtual machines, and 35 of them have 500 or more virtual machines. The consolidation ratio sounds too high to me if there are only 20 servers in the rack.

        Thanks again for the comments.


      • oops… “chopping your numbers for *servers* by 75%”. Apologies for the typo.

      • Your analysis of the AWS hardware for the standard instance sizes is compelling. It’s deeper than I can go, so I’ll bow out of further discussion there. Perhaps my information is old. It’s certain that AWS is not using blades however. It would go against their frugal culture for one.

        For another, the power issue still isn't handled well with your estimates. A quick look at dense systems like the HP BladeSystems shows 1200W PSUs. There are 6 x 1200W PSUs in a single 8-blade enclosure, or 48 x 1200W PSUs to achieve your 64 blades per rack. That's 57.6 KVA / 2 = 28.8 KVA.

        That kind of power density isn’t easy in any modern datacenter without very special cooling and power design. It’s not a question of whether HP can help or not. It’s a question of whether you have a very special next generation datacenter design. I doubt HP has this expertise. Most of this kind of expertise is at places like Google and Amazon.

        I think your methodology is sound, but the density you are coming up with sounds too high to me still.

      • MichaelT says:

        Looking at James Hamilton’s presentation here: they don’t look like blades to me – they look closer to 2U. You’d imagine density wouldn’t be a big concern for Amazon, because the square footage of datacentre space they maintain for servers must pale into insignificance compared to the square footage of warehouse space they maintain for products. And you can mix and match 2U servers from any vendor, you’re not locked into one supplier for replacement parts. Of course, this is only speculation on my part!

  5. Henning says:

    “We assume Amazon has dense racks, each rack has 4 10U chassis, and each chassis holds 8 blades for a total of 64 blades/rack.”

    4 chassis per rack, 8 blades per chassis -> 4*8 blades per rack.
    4*8 is 32 in my book, not 64.

    • huanliu says:

      Another typo, corrected in the article. Thanks for spotting it; it reminds me not to write late at night 🙂 Fundamentally, this is an assumption. An HP blade chassis can hold 8, 16, or 32 servers in a 10U form factor. If AWS uses 1U rack servers, then a 10U form factor can only hold 10. We choose to assume 16.

      • R says:

        I can’t say where exactly – I do not know if I would be bound by an NDA with the datacenter in the case of observing another customer of that datacenter’s stuff – but in an auxiliary Amazon facility in a city not served by EC2 at least yet (but that has edge stuff for their other services like CDN) they use only 1U servers in short 36U racks.

  6. Wow… very interesting!!! I’d love to talk more and learn a little more about what you know…. I build virtualization clouds for a SaaS software company (one of the biggest in the world) and we happen to share some floor space with Amazon in the US East datacenter.

  13. Rob says:

    Did you try running this method against a cloud that publicly discloses its numbers? Seems like an easy way to check if your method is accurate.

  15. secabeen says:

    In public lectures by AWS employees they have clearly described their infrastructure as 1U-based. Heat dissipation is a big part of their costs, and blade servers trade heat for space, which is a bad trade, as space is cheap compared to heat management. I’d assume that their infrastructure is not dense.

  17. Asia is the market to grab.
    From my perspective (in Hong Kong), the only real choice is to host in Amazon Singapore.
    (but frequently I end up hosting in the US anyway)

    Sites wishing to offer services to China need somewhere nearby without the regulatory overhead of operating within the mainland.

    If Rackspace Cloud can setup somewhere in the region
    they’ll be a winner.

  19. yup says:

    In my experience most racks are limited (due to power constraints) to 4 bladeserver units per rack. So I’d agree with your assumption of 4 bladeservers per rack.

  20. Joeri says:

    How does this correspond with the 40.000 server estimate from this link?

    • This is an old estimate from 2009 based off of good inside information. My prediction is that AWS is growing at roughly 100% y/o/y. Which means:

      2009 – 40K
      2010 – 80K
      2011 – 160K
      2012 – 320K

      Original 2009 estimate is from fall. So, if my estimates and predictions are correct, then this reinforces that AWS EC2 is in the neighborhood of 150K servers at end of 2011, which I think is probably fairly close.

  21. Gavoir says:

    Blah blah blah, what a load of rubbish. They have ~17,000 cores, which fits in about 1,500 servers, give or take. At least, other places do. We have ~22,000 cores in 1,800 servers.

    • eas says:

      You posted this elsewhere, and you are clueless. The Top 500 entry is a cluster amazon configured from a subset of their infrastructure.

      Just last fall, a company posted about a 30k core cluster they spun up for a client, and it’s likely that larger clusters have been spun up on AWS by orgs who don’t want to call attention to their work.

  23. Shyam says:

    Just based on intuition and experience, the number of servers, 454,400, seems too high.
    From a cost perspective:
    1) At $3K for each server, that would cost about 1.3B USD. I am not sure what storage would cost.
    2) Add all the cost of networking devices, space build-out, etc.

  24. Joseph Lust says:

    How come every picture in the Amazon slide decks is of racks full of 1U machines? We’re talking 40 servers per rack, not 64. Where do you see the blade chassis?

  28. MikeD says:

    Let’s try it from a power angle… 322k servers running @ 250W each (extremely conservative) means the Virginia facility would need at least 80MW just to run the servers. So we’re talking 100MW+ to run the entire facility (cooling, power loss, etc). That is a ginormous facility. By contrast, facebook’s new DCs are scoped in the range of 40MW per phase.

    If you assume a more moderate power draw… say 350W per blade, that’s 130MW+ facility. A quick google search didn’t turn up any amazon DC specs, but I assume if it was that huge, it would make pretty big news. All you have to do to confirm the size is look at what the facility is publicly permitted for and check your numbers.
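    MikeD's back-of-the-envelope number is easy to reproduce (the 250 W per-server draw is his assumption, not a measurement):

```python
servers = 321_920        # US east server estimate from the table in the post
watts_per_server = 250   # MikeD's "extremely conservative" per-server draw
it_load_mw = servers * watts_per_server / 1e6
# ~80 MW just for the servers, before cooling and distribution losses
```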

    • It’s not. They are likely 500W 2U servers, not blades. I know for a fact that the datacenters in Virginia are spread across multiple sites all connected with high speed fiber. It’s not a single massive facility. Nor is there a 1:1 mapping to AZes. It’s more nuanced than this. The DCs in use are probably closer to 25-40MW facilities, which is still pretty hefty for any kind of datacenter.

  35. MikeD says:

    I should have added to my previous note: I believe these numbers are at least 3x too high, if not more, just based on my power statement above. It's a real shame that so-called "news" outlets are running with these numbers without putting them to a basic litmus test.

  41. @mikeD — Huan Liu refers to the US-East region as a data center, but it is probably in fact four data centers, corresponding to the four 'availability zones' in each region. If each distinct AZ holds 80k servers at 350W, they are each 28 MW data centers.

    • 350W is too low. These servers are all running at least 12 x 3.5″ disk drives running at 10W each with 2xCPUs that are roughly 100-120W each. Plus most of the gear in the past 2 years is DDR3 and there are 12 or 16 DIMMs per system at 3-4W each. Higher density nodes for hosting VMs that use DAS should be considered to run at 350W without load and closer to 500W with a typical load. So closer to 40MW.

  74. carllee1991 says:

    Reblogged this on carllee1991 and commented:
    it’s funny also amazing

  76. Nathaniel says:

    Wow, cloud services are really exploding. I’d be interested to see the costs associated with using the services on the graph of the server racks.

  82. tritchie says:

    We come at this from a different means – component count shipping to them per month (we know this) and anticipated life of their servers. Our very rough estimates are about 1M servers running with growth of maybe 10-15% per year.

    • huanliu says:

      Care to share your calculation here? It would be helpful for readers to gain a different perspective.

      Also, note that EC2 has grown a lot since this post; my latest numbers probably match yours pretty closely.
