Piping hot servers in the turquoise ocean

It is a startling fact that data centers and the servers they manage play a crucial role in global environmental change. Today's biggest problem is the industry's failure to cut its rising power use, a large share of which goes to cooling. The result: carbon emissions.

Implementing an efficient, effective, and safe cooling system to remove waste heat is an ever-growing concern for data centers. Energy consumption by servers has risen dramatically over the last decade as the power density of server racks has increased:

  • While server racks of a decade ago consumed 250 W to 1.5 kW each, current racks consume approximately 10 kW or more.
  • Projections for the next decade put per-rack power consumption at 50 kW.
  • Further, it is estimated that, by 2020, the data center industry will generate as much CO2 as the airline industry.
PUE = 1.0 (very efficient, ideal); PUE = 2.0 to 3.0 (inefficient)

More significantly, industry surveys show that most of those data centers are not running at peak efficiency. In the United States alone, they annually suck up the equivalent of the energy output of 34 coal-fired power plants. A lot of that energy goes to powering the servers themselves, but almost half of it goes to keeping them cool so they don't overheat and crash.

The typical measure of this is PUE, or Power Usage Effectiveness: the ratio of the total power coming into the data center to the power actually consumed by the IT equipment, according to IDC Research Manager Kelly Quinn.

“When you get to PUE ratios at 2 and above, you are looking at a massive amount of power going not just to the IT side of the house but also to the facilities side,” she said.
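To make the ratio concrete, here is a minimal Python sketch (the function name and the sample figures are illustrative assumptions, not numbers from the article) that computes PUE from metered power draws:

    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        """Power Usage Effectiveness: total facility power / IT equipment power."""
        if it_equipment_kw <= 0:
            raise ValueError("IT equipment power must be positive")
        return total_facility_kw / it_equipment_kw

    # Hypothetical facility drawing 1,800 kW in total, of which 1,000 kW
    # reaches the servers themselves; the remaining 800 kW is overhead,
    # mostly cooling.
    print(pue(1800.0, 1000.0))  # 1.8

At a PUE of exactly 2.0, half of all incoming power is overhead; Google's reported 1.12 means only 0.12 W of overhead for every watt delivered to the IT equipment.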

Google’s most recent (2014-2015) PUE is 1.12 for the trailing 12 months. The U.S. government’s guideline is 1.5. In 2013, one minute of data center downtime cost US$7,900 on average.

Downtime reached an estimated cost of $700 billion per year to US businesses in 2016. International mega-retailer Amazon recently suffered downtime of around fifteen minutes; the brief outage was estimated to have cost an incredible $66,240 per minute. Because of these costs, many of the biggest cloud giants have started experimenting with placing data center servers in the ocean, or otherwise underwater.
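The scale of those figures is easy to check with back-of-the-envelope arithmetic; the short Python sketch below (a hypothetical helper, simply multiplying outage length by the per-minute costs quoted above) makes the point:

    def outage_cost_usd(minutes: float, cost_per_minute_usd: float) -> float:
        """Total cost of an outage: duration multiplied by per-minute cost."""
        return minutes * cost_per_minute_usd

    # One hour at the 2013 industry-average rate of $7,900 per minute.
    print(outage_cost_usd(60, 7_900))   # 474000.0

    # Amazon's roughly fifteen-minute outage at $66,240 per minute.
    print(outage_cost_usd(15, 66_240))  # 993600.0 -- close to a million dollars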

Microsoft has an idea: dump the servers deep in the ocean, where the cold surrounding water will keep them cool 24/7, regardless of the seasons at the surface.

“Project Natick” is still in the research phase. Microsoft ran a successful test last year, submerging servers in a vessel called the Leona Philpot (a nod to a character in Microsoft’s Halo video games) for several months. The test was conducted 1 kilometer (0.62 miles) off the Pacific Coast. The idea is that eventually, the servers could run with little to no human maintenance for up to 10 years under the ocean.

Server being inserted into the hull

In addition to being a cool place to store computers (literally), the ocean has other advantages. About 3.5 billion people live within 125 miles of the ocean, making offshore data centers a good way to connect vast numbers of people to the internet quickly, without using up precious land-based real estate.


You can find the pros and cons of underwater data centers at the link below.

Investdailynews

Microsoft’s Halo-inspired underwater data center project explained

Watch Microsoft’s introduction video below to learn more about Project Natick.
