@Afcom @DataCenterWorld Using Simulation to Increase Efficiency, Resiliency, and Capacity


http://fall2015.datacenterworld.com/


Using Simulation to Increase Efficiency, Resiliency, and Capacity

Presented by: Mark Seymour, CTO, Future Facilities

HIGHLIGHTS:

  • How to use simulation to address real-world data center operations problems
  • How to make data center designs more resilient to operational practice
  • How design teams, engineers and operations can work collaboratively for maximum efficiency

The data center matters to organizations today more than ever before. In most facilities, IT and business-related changes can be accommodated as long as they fall within the data center’s design capacity. Unfortunately, designs are based on assumptions that often cannot be followed in operation, creating risk and cost for the data center and hence for the business as a whole. This hands-on workshop will let practitioners use simulation to investigate problem scenarios and test solutions. It is intended for engineering and operations staff who need to address real-world problems in their own data centers. The course will also cover how design teams can make designs more resilient to operational practice, and how everyone in the data center can operate their facility more effectively over time.

Sneak Peek at Future Facilities video series: 

http://www.6sigmadcx.com/media/videos/Improving-dcim-and-monitoring-with-simulation.php


What is CFD and Engineering Simulation


Watch this short video for an overview: http://www.6sigmadcx.com/media/videos/What-is-CFD-and-simulation.php



Why aren’t #datacenters hotter?

Running a successful commercial data center is not for the faint of heart. With increased competition, profit margins are creeping downward. So one might assume data center operators would take advantage of something as simple as raising the equipment operating temperature a few degrees.

Letting the temperature climb can yield roughly four percent energy savings for every degree of increase, according to the US General Services Administration. Yet most data centers aren’t getting warmer. Why is this?
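As a back-of-the-envelope illustration of that rule of thumb, here is a minimal Python sketch. The baseline cooling load, electricity rate and the choice to compound the per-degree factor are illustrative assumptions, not figures from the GSA; simulate your own facility before acting on numbers like these.

    # Rough sketch of the rule of thumb above: ~4% cooling-energy savings
    # per degree of setpoint increase. Baseline load, electricity rate and
    # the compounding choice are illustrative assumptions, not measured data.

    def annual_savings(baseline_kwh, degrees_raised,
                       savings_per_degree=0.04, rate_per_kwh=0.10):
        """Estimate yearly cooling-energy savings from a warmer setpoint.

        Compounds the factor: each added degree saves 4% of what remains,
        not 4% of the original baseline.
        """
        remaining = baseline_kwh * (1 - savings_per_degree) ** degrees_raised
        saved_kwh = baseline_kwh - remaining
        return saved_kwh, saved_kwh * rate_per_kwh

    # Example: 2 GWh/yr of cooling energy, setpoint raised by 3 degrees.
    kwh, dollars = annual_savings(2_000_000, 3)
    print(f"~{kwh:,.0f} kWh saved, ~${dollars:,.0f}/yr")  # ~230,528 kWh, ~$23,053/yr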

The ASHRAE shake-up

For years, 20°C to 22°C (68°F to 71.6°F) was considered the ideal temperature range for IT equipment. In 2004, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) recommended an operating temperature range of 20°C to 25°C (68°F to 77°F), based on its own study and advice from equipment manufacturers. Seeing the advantage, engineers raised temperatures closer to the 25°C (77°F) upper limit.

*Engineering simulation is a risk-free way to raise temperatures in your data center and quantify cause and effect before committing opex or capex. Video

Temperature chart (Source: DCD)

ASHRAE shook things up in 2008 with the addendum Environmental Guidelines for Datacom Equipment, in which the organization expanded the recommended operating temperature range from 20°C to 25°C (68°F to 77°F) to 18°C to 27°C (64.4°F to 80.6°F). To ease concerns, ASHRAE engineers note in the addendum that increasing the operating temperature has little effect on component temperatures but should offer significant energy savings.

Also during 2008, Intel ran a ten-month test involving 900 servers; 450 were in a traditional air-conditioned environment and 450 were cooled with outside air that was unfiltered and without humidity control. The only safeguard was keeping the air temperature between 17.7°C and 33.3°C (64°F and 92°F). Despite the dust, uncontrolled humidity and large temperature swings, the unconditioned module’s failure rate was just two percent higher than the control’s, and it realized a 67 percent power saving.

In 2012, a research project at the University of Toronto resulted in the paper Temperature Management in Data Centers: Why Some (Might) Like It Hot. The research team studied component reliability data from three organizations and dozens of data centers. “Our results indicate that, all things considered, the effect of temperature on hardware reliability is weaker than commonly thought,” the paper mentions. “Increasing data center temperatures creates the potential for large energy savings and reductions in carbon emissions.”

Between the above research and their own efforts, it became clear to those managing mega data centers that it was in their best interest to raise operating temperatures in the white space.

Google and Facebook get hotter

By 2011, Facebook engineers were exceeding ASHRAE’s 2008 recommended upper limit at the Prineville and Forest City data centers. “We’ve raised the inlet temperature for each server from 26.6°C (80°F) to 29.4°C (85°F)…,” writes Yael Maguire, then director of engineering at Facebook. “This will further reduce our environmental impact and allow us to have 45 percent less air-handling hardware than we have in Prineville.”

Google data centers are also warmer, running at 26.6°C (80°F). Joe Kava, vice president of data centers, says in a YouTube video: “Google runs data centers warmer than most because it helps efficiency.”

Raising temperatures is fine for Facebook and Google, counter several commercial data center operators. Both companies use custom-built servers, which means their hardware engineers can design the servers’ cooling systems to run efficiently at higher temperatures.

That should not be a concern, according to Brett Illers, Yahoo senior project manager for global energy and sustainability strategies. Illers says Yahoo data centers are filled with commodity servers, and operating temperatures are approaching 26.6°C (80°F).

David Moss, cooling strategist in Dell’s CTO group, and an ASHRAE founding member, agrees with Illers. Dell servers have an upper temperature limit well north of the 2008 ASHRAE-recommended 27°C (80.6°F); and at the ASHRAE upper limit, server fans are not even close to running at maximum.

Yet all this research, and the example set by Facebook, Google and other large operators, is not swaying many commercial data center managers. It is time to figure out why.

David Ruede, business development specialist for Temperature@lert, has been in the data center “biz” a long time. During a phone conversation, Ruede explains why temperatures are below 24°C (75°F) in most for-hire data centers.

Historical inertia

For one, what he calls “historical inertia” is in play. Data center operators can’t go wrong keeping temperatures right where they are. If it ain’t broke, don’t fix it, especially with today’s service contracts and penalty clauses.

A few more reasons from Ruede: operators can’t experiment with production data centers; electricity rates have remained relatively constant since 2004; and operators worry that usage surges could adversely affect contracts.

Both Ruede and Moss cited an often overlooked concern. Data centers last a long time (10-25 years), meaning legacy cooling systems may not cope with temperature increases. Moss mentions: “Once walls are up, it’s hard to change strategy.”

Chris Crosby, founder and CEO of Compass Data Centers, knows about walls, since the company builds patented turn-key data centers. That capability provides Crosby with first-hand knowledge of what clients want regarding operating temperatures.

Crosby says interdependencies in data centers are being overlooked. “Server technology with lower delta T, economizations, human ergonomics, frequency of move/add/change, scale of the operation… it’s not a simple problem,” he explains. “There is no magic wand to just raise temperatures for savings. It involves a careful analysis of data to ensure that it’s right for you.

“The benefits when we’ve modeled using Romonet are rounding errors,” adds Crosby. “To save 5,000 to 10,000 dollars annually for a sweatshop environment makes little sense. And, unless you have homogeneous IT loads of 5+ megawatts, I don’t see the cost-benefit analysis working out.”

Some operators are warming to the idea, though. Hank Koch, vice president of facilities for OneNeck IT Solutions, gives assurances that there are commercial data centers, even colocation facilities, running at the high end of the 2008 ASHRAE-recommended temperature range. Koch says OneNeck’s procedure regarding white-space temperature is to sound an alarm when the temperature reaches 25.5°C (78°F); however, temperatures are allowed to reach 26.6°C (80°F).

So why aren’t data centers getting warmer when there is money to be saved and an environmental benefit to be had? Because it’s not as simple as we thought. Data centers are complex ecosystems, and increasing operating temperatures requires more than just bumping up the thermostat setting.

This article, by Michael P. Kassner, appeared in the March 2015 issue of DatacenterDynamics magazine.


Future Facilities updates 6SigmaDCX – What’s New in Release 9.3


Geared to making your life easier, they’re delivering new objects, speed improvements and a range of new features in the latest version of the DCX suite. They’ve built on the incredible success of 9.0 to make a minor release that punches way above its weight.

Join the free webcast to see what’s new in 9.3.

Register Here

Wednesday 1st July 2015

Time:  2:00am PST /  5:00am EST / 10:00am BST
       8:00am PST / 11:00am EST /  4:00pm BST

Wednesday 8th July 2015

Time:   11:00am PST /   2:00pm EST /  7:00pm BST

Sneak Peek at Future Facilities video series: 

http://www.6sigmadcx.com/media/videos/Improving-dcim-and-monitoring-with-simulation.php



How to improve DCIM and Monitoring with Engineering Simulation

Before we start, let us be clear about one thing: environmental monitoring and measurement systems are critical components in managing your data center. This is not a case of ‘one versus the other’.

5 Part Video Series – This series takes a look at the unforeseen risks in running a data center that relies solely on environmental monitoring.

Click Video > 

Part 1 – Thermal Mapping

Part 2 – Asset Safety

Part 3 – Deployment

Part 4 – Trending

Part 5 – Failure Analysis



Just-in-Time Design-Build Data Centers

When data centers are your business, the ability to deliver them quickly to customers is critical. Chris Crosby, CEO of Compass Data Centers, will share his thoughts on this topic and how to better deliver the product on time.

Related white paper from Compass Datacenters: The Calibrated Data Center – Using Engineering Simulation

Related video from Chris Crosby: The Calibrated Data Center (Video)


Cooling the Cloud: Binghamton PhD Student Sets Sights on Improving Data Center Efficiency


Data centers — large clusters of servers that power cloud computing operations, e-commerce and more — are one of the largest and fastest-growing consumers of electricity in the United States.

The industry has been shifting from open-air cooling of these facilities to increasingly complex systems that segregate hot air from cold air. When it comes to cost savings, there are definite advantages to the aisle containment systems, which have been estimated to save 30 percent of cooling energy — but it’s not yet clear how they increase the risk of overheating, or how to design them for greatest safety and optimum energy efficiency.

That’s what Husam Alissa, a doctoral candidate in mechanical engineering, is trying to determine at Binghamton University’s state-of-the-art Center for Energy-Smart Electronic Systems (ES2).

In a poster titled “Experimentally Guided Advances of Computational Fluid Dynamics Modeling of Air-Cooled Data Centers in a Raised Floor Setting,” which won a contest at a recent meeting of ES2’s Industrial Advisory Board, Alissa lays the foundations for a systematic analysis of Binghamton’s new data center, using both empirical research and computer modeling.


“We included some guidelines for the initial characterization of data center facilities, such as air flow, turbulence, pressure, velocity, momentum and cooling capacity,” says Alissa, who began his work in heat and mass transfer as an undergrad at the Hashemite University in Jordan and a master’s student at Jordan University of Science and Technology. “There are certain things data center modelers seem to oversimplify, and in order to effectively reduce the energy cost, it is important to create accurate models.”
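To give a sense of the first-order arithmetic behind quantities like cooling capacity, here is a minimal Python sketch using the standard sensible-heat relation for air, Q = ṁ·cp·ΔT. The cabinet load and temperature rise are hypothetical examples, not figures from Alissa’s study; a real analysis relies on measurement and CFD, as described here.

    # First-order airflow check from the sensible-heat relation for air,
    # Q = m_dot * cp * dT. The 10 kW load and 11 K rise below are
    # hypothetical; real studies use measured data and CFD models.

    AIR_DENSITY = 1.19   # kg/m^3, air near sea level at ~25 C
    AIR_CP = 1005.0      # J/(kg*K), specific heat of air

    def required_airflow_m3s(it_load_w, delta_t_k):
        """Volumetric airflow needed to remove it_load_w of heat at a
        given air temperature rise (exhaust minus inlet)."""
        mass_flow = it_load_w / (AIR_CP * delta_t_k)   # kg/s
        return mass_flow / AIR_DENSITY                 # m^3/s

    # Example: a 10 kW cabinet with an 11 K (~20 °F) air temperature rise.
    flow = required_airflow_m3s(10_000, 11)
    print(f"{flow:.2f} m^3/s (~{flow * 2118.9:,.0f} CFM)")  # ~0.76 m^3/s, ~1,611 CFM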

At a large data center, the cost savings could be hundreds of thousands of dollars a year, which is why the solution is so important to ES2, a National Science Foundation Industry/University Cooperative Research Center. Partners in ES2 include Georgia Tech, the University of Texas at Arlington and Villanova University, along with Bloomberg, Comcast, Facebook, Future Facilities, IBM, Intel, NYSERDA and Verizon.

In 2013, U.S. data centers consumed an estimated 91 billion kilowatt-hours of electricity — enough electricity to power all the households in New York City twice over, according to the Natural Resources Defense Council. That figure is projected to reach 140 billion kilowatt-hours by 2020, dumping an electric bill of about $13 billion on American businesses.

During the next two years, Alissa expects to refine his analysis, cycling back and forth between data collection and computational fluid dynamics, validating his models along the way.

“Husam has done a very good job establishing a strong technical base for this research,” says IBM Senior Engineer Ken Schneebeli, who served as a mentor on the poster, along with ES2 director Bahgat Sammakia; Future Facilities’ Mark Seymour; IBM’s Roger Schmidt; and Villanova’s Alfonso Ortega. “This is a subject of critical business importance that has not yet been investigated at the university level or at the industry level, and Husam is establishing a basis to ably assert the accuracy of his modeling and methodologies. He has the patience, confidence and thoroughness to take on a project of this size.”

Source here
