Managing Airflow to Meet the Business Need

  Data Center Free Cooling Design Control and Delivery

By Mark Seymour via Upsite Blog

The data center industry can take pride in the huge innovations it has made in the energy performance of cutting-edge data centers. Only 10 years ago, many people would say that an average of 3kW per rack was the limit to air cooling, and a PUE of 2.5 was common even in state-of-the-art data centers.

Today, whether you argue that PUE is flawed or support it, the improvement in data center energy consumption is impressive. Even with spiraling power densities, the best data centers now use less than 10% additional energy over and above IT energy, compared with 150% previously, and even facilities where only modest improvements have been made are likely to bring this figure below 50%. So, is the job done? In my opinion, it isn't.
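To put those percentages in context, PUE is simply total facility energy divided by IT energy, so the overhead figures quoted above map directly onto PUE values. A minimal worked example (illustrative numbers only):

```python
def pue_from_overhead(overhead_fraction: float) -> float:
    """PUE = total facility energy / IT energy = 1 + (non-IT overhead as a fraction of IT energy)."""
    return 1.0 + overhead_fraction

print(pue_from_overhead(1.50))  # 2.5 -- 150% overhead, common a decade ago
print(pue_from_overhead(0.50))  # 1.5 -- a facility with only modest improvements
print(pue_from_overhead(0.10))  # 1.1 -- less than 10% energy above IT energy
```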

Focusing only on energy efficiency is a risky strategy: you need to manage the data center to ensure that you can keep installing IT to meet business needs while staying within the nominal space, power, network and cooling available. Similarly, you need to be sure that if your redundant systems fail, all of the IT equipment will be adequately cooled. You need to ask: is my installation resilient?

It is true that good practices, such as segregation via containment and blanking, tend to improve the performance of data center cooling. Unfortunately, they don't guarantee it. Likewise, DCIM has plugged some of the gaps between IT and Facilities by providing tools to control process and share data. But neither of these completes operational planning, because neither fills the engineering gap: they don't help you understand the engineering consequences of the actions you take, so that actions with poor consequences can be avoided in favor of those with better outcomes. Using engineering simulation in operational planning, which for cooling means computational fluid dynamics (CFD) based tools such as 6SigmaDCX, allows you to test any future plans before implementing them. These tests determine the impact on your infrastructure's energy efficiency, IT thermal capacity and IT thermal resilience. They allow you to make important decisions about changes, such as:

  • Is raising cooling temperatures to save energy justified, compared with the negative impact this may have on IT thermal resilience and the number of servers you can safely install?
  • If you install IT equipment that does not fit within your conceptual design limits, will it meet your performance criteria?

These are two common questions – just the tip of the iceberg – that can be answered by applying engineering simulation to data center design, operational planning and capex-opex strategy decisions.
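To illustrate how the first of those questions might be framed, the sketch below compares hypothetical simulation results for a proposed setpoint increase against simple pass/fail criteria for normal operation and a cooling-failure scenario. It is not the 6SigmaDCX API; the scenario names, limits and numbers are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ScenarioResult:
    """Hypothetical outputs a CFD run might report for one planned change."""
    name: str
    max_inlet_c: float          # hottest IT inlet, normal operation
    max_inlet_failure_c: float  # hottest IT inlet with one cooling unit failed
    cooling_power_kw: float     # cooling energy draw for the scenario

def assess(r: ScenarioResult, inlet_limit_c: float = 27.0) -> str:
    """Accept a change only if IT inlets stay within the limit, including under failure."""
    if r.max_inlet_failure_c > inlet_limit_c:
        return f"{r.name}: reject, not resilient to a cooling unit failure"
    if r.max_inlet_c > inlet_limit_c:
        return f"{r.name}: reject, IT inlets exceed {inlet_limit_c} C in normal operation"
    return f"{r.name}: acceptable, cooling power {r.cooling_power_kw:.0f} kW"

# Compare today's setpoint with a proposed 2 C increase (numbers are invented).
for result in (ScenarioResult("Baseline 22 C supply", 24.5, 26.8, 180.0),
               ScenarioResult("Raised 24 C supply", 26.5, 28.9, 150.0)):
    print(assess(result))
```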

Your criteria will vary from one business to another. A web-hosting organization with multiple facilities may not be concerned with short delays if hardware fails, as the pages can be served seconds later from another facility or server. However, they will want to minimize the cost per page served by improving energy efficiency. On the other hand, an investment bank will be much more focused on whether their application is available. Availability and the ability to install IT as needed (capacity) will be much more important to them than energy efficiency. It is prudent and profitable to manage data center resources, particularly airflow, to meet the business need.

To learn more about how you can safely deliver improvements to the performance of your data center, and how you can ensure that your data center continues to meet your business needs even during failure scenarios, visit Future Facilities’ Media Center.


The Mega DataCenter Challenge


Integrated System Testing – Paul Smethurst, CEO of Hillstone, highlights the issues of load bank testing 100MW data centres.

Introduction

The insatiable demand for data, coupled with the growth of cloud-based services, has changed the European data centre landscape with the arrival of the mega data centre. The mega data centre, which allows global software giants like Microsoft, Google and Apple to provide our day-to-day IT services, is also the foundation for colocation providers such as Digital Realty Trust, Equinix, Telecity and Interxion to facilitate connectivity to the cloud for multinational conglomerates in banking, telecoms, and oil and gas. With such a rapid expansion of cloud services, how do you commission mega data centres of 20MW, 40MW, 80MW and 100MW?

Historically, the largest load bank solutions have been in the oil and gas sector, using 50-70MW of containerised load banks situated outdoors as part of very large temporary generator power projects. Fortunately, the evolution of the mega data centre has taken a practical, modular build approach, with roll-out phases of dual halls at 2,500kW each or a single 5,000kW empty white space. However, such a reduction in rating does not reduce the challenge of sourcing the quantity of load banks needed to complete integrated system testing (IST).
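A back-of-envelope calculation (not a commissioning plan; the unit ratings are only examples) shows why sourcing remains a challenge even with modular halls:

```python
import math

def load_banks_needed(hall_kw: float, unit_kw: float) -> int:
    """Number of load bank units required to absorb the full hall rating."""
    return math.ceil(hall_kw / unit_kw)

print(load_banks_needed(2500, 5))      # 500 rack-level 5kW simulators for one 2,500kW hall
print(load_banks_needed(5000, 5))      # 1,000 for a single 5,000kW white space
print(load_banks_needed(100_000, 5))   # 20,000 across a fully phased 100MW campus
```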

Integrated System Testing

The primary objective of data hall IST commissioning is to verify the mechanical and electrical systems under full-load operating conditions, plus maintenance and failure scenarios, to ensure the data hall is ready for the deployment of active equipment. Today's IST requires a package of equipment that closely replicates the data hall in live operation. Server simulators, load banks, flexible cable distribution, automatic transfer switches, data logging for electrical power and environmental conditions (temperature and humidity), and the ability to incorporate the load banks within temporary hot-aisle separation partitions provide the foundations for a successful IST. These tools allow the commissioning report to present a computational fluid dynamics (CFD) model of the actual data hall operation.

The selection and use of server simulators, typically rated between 3kW and 6kW to match the expected IT rack loads, gives a granular distribution of low delta-T heat across the data hall. Such detailed consideration of air distribution during testing is required due to the scale of the IST and the increased volume of air affected in the mega data centre environment. This replicated heat allows the mechanical cooling systems to run at their optimum design temperature, helping to ensure that future active IT equipment will not overheat and fail once deployed. If commissioning occurs prior to the deployment of IT cabinets, server simulators can be housed in portable mini-towers and distributed across the empty space.
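The connection between heat load, airflow and delta-T comes from the standard sensible-heat relation. The sketch below uses typical air properties; the simulator ratings and temperature rises are examples, not requirements:

```python
CP_AIR = 1.005   # kJ/(kg*K), specific heat of air near room temperature
RHO_AIR = 1.2    # kg/m^3, approximate air density

def airflow_m3s(heat_kw: float, delta_t_k: float) -> float:
    """Volume flow of air needed to carry heat_kw with a given air temperature rise."""
    mass_flow_kgs = heat_kw / (CP_AIR * delta_t_k)
    return mass_flow_kgs / RHO_AIR

for kw in (3, 6):
    for dt in (10, 15):
        flow = airflow_m3s(kw, dt)
        print(f"{kw} kW simulator, dT = {dt} K: {flow:.2f} m3/s (about {flow * 2119:.0f} CFM)")
```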

Flexible cable distribution makes it practical to cable large quantities of 5kW to 20kW-rated load banks to the A and B feeds of a PDU or busbar infrastructure. If the cable distribution also includes the ability to automatically transfer the load, the commissioning team can replicate maintenance procedures and failure scenarios during the IST. In order to report the successful operation and performance of the room during the IST, the commissioning team will need to monitor and record electrical and environmental data. Having electrical data available within the load package avoids the use of a power analyser with exposed connections to live terminals in the data hall. When server simulators include temperature sensors, extensive temperature analysis allows CFD modelling to be performed during the testing period. While the data centre will ultimately have a building management system (BMS) and a data hall fitted out with the latest DCIM system, these are unlikely to be fully operational at the time of testing. The project team should therefore source a dedicated server simulator provider at the earliest opportunity in the project and avoid alternative load bank solutions that will cause delays to the IST programme.
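As a minimal sketch of the kind of record the commissioning team might keep (the field names and file format are assumptions, not any particular logger's output), combining electrical and environmental readings per load bank produces an auditable dataset that can later feed a CFD model:

```python
import csv
import os
from datetime import datetime, timezone

FIELDS = ["timestamp", "load_bank_id", "feed", "power_kw", "inlet_temp_c", "outlet_temp_c"]

def log_reading(path: str, load_bank_id: str, feed: str,
                power_kw: float, inlet_temp_c: float, outlet_temp_c: float) -> None:
    """Append one timestamped reading to a CSV file, writing the header first if the file is new."""
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "load_bank_id": load_bank_id,
            "feed": feed,
            "power_kw": power_kw,
            "inlet_temp_c": inlet_temp_c,
            "outlet_temp_c": outlet_temp_c,
        })

# Example: a 10 kW load bank on the 'A' feed showing a 12 K air-side temperature rise.
log_reading("ist_log.csv", "LB-042", "A", 10.0, 24.0, 36.0)
```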


Common Mistakes

The restricted market dilutes the availability of suitable load bank equipment, and selecting the wrong type of load bank solution on cost alone can compromise the validity of the IST; the hidden problems will not manifest until the data hall goes live with active IT equipment.

The temptation to choose 20kW three-phase industrial space heaters rather than load bank server simulators affects the commissioning of the mechanical cooling systems. The design of such heaters prevents the ambient room temperature from reaching the design criteria needed to commission the CRAC or AHU units. Some suppliers have removed the thermostatic controls, only to find the space heater overheats and, in some circumstances, catches fire.

The choice of large 110kW load banks can be justified when testing site equipment such as PDU panels, busbars or switchboards to Level 3 ASHRAE requirements. These load banks provide a cost-effective solution for proving the electrical infrastructure of the mega data centre; however, they will create localised hotspots, or areas of concentrated heat, if used for the commissioning of the cooling systems.

In extreme circumstances during tier certification, the electrical load has been provided by 2kW infrared heaters or 1kW hair dryers. Infrared heaters create an ambient temperature of more than 40 degrees Celsius and wall skin temperatures of 70 degrees Celsius, and hair dryers are not designed for the continuous operation required in an IST. This type of low-cost solution should not be considered a substitute for replicating the operation of IT equipment, and it risks costly delays while compromising the integrity of the testing programme.

Achieving Cost Savings

Completing an IST to budget takes on greater importance when commissioning a mega data centre, especially given the number of rental load banks required. The increased size of the facility increases the time needed to complete the commissioning, but by combining the latest technologies, the delays traditionally associated with load banks can now be avoided. By selecting solutions that offer enhanced data logging, the commissioning report will also give the client detailed operating information and fully auditable records for future tenants considering use of the space. The logged data can also be used in CFD models to confirm that the mega data centre is ready to use.

Originally Published Here


@Afcom @DataCenterWorld Using Simulation to Increase Efficiency, Resiliency, and Capacity


http://fall2015.datacenterworld.com/


Using Simulation to Increase Efficiency, Resiliency, and Capacity

Presented by: Mark Seymour, CTO, Future Facilities

HIGHLIGHTS:

  • How to use simulation to address real-world data center operations problems
  • How to make data center designs more resilient to operational practice
  • How design teams, engineers and operations can work collaboratively for maximum efficiency

The importance of the data center to organizations today is higher than ever before. In most centers, any IT change the business requires can be accommodated if it falls within the data center's design capacity. Alas, designs are based on assumptions that often cannot be followed in operation, leading to risks and costs for the data center and hence the business as a whole. This hands-on workshop will allow practitioners to use simulation to investigate problem scenarios and test solutions. The workshop is intended for engineering and operations staff addressing real-world problems during operation in their own DCs. The course will also cover how design teams can make designs more resilient to operational practice. Finally, the workshop will cover how everyone in the data center can operate their DC design more effectively over time.

Sneak Peek at Future Facilities video series:

http://www.6sigmadcx.com/media/videos/Improving-dcim-and-monitoring-with-simulation.php


What is CFD and Engineering Simulation


Watch this short video for an overview: http://www.6sigmadcx.com/media/videos/What-is-CFD-and-simulation.php



Why aren’t #datacenters hotter?

Running a successful commercial data center is not for the faint of heart. With increased competition, profit margins are creeping downward. So one might assume data center operators would take advantage of something as simple as raising the equipment operating temperature a few degrees.

Letting the temperature climb can reap four percent energy savings for every degree of increase, according to the US General Services Administration. But most data centers aren't getting warmer. Why is this?
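Taking the GSA rule of thumb at face value, the savings compound over each degree the setpoint is raised. A simple estimate (illustrative only, and of cooling-related energy rather than total facility energy):

```python
def cooling_energy_saving(degrees_raised: float, saving_per_degree: float = 0.04) -> float:
    """Fraction of cooling-related energy saved, compounding ~4% per degree raised."""
    return 1.0 - (1.0 - saving_per_degree) ** degrees_raised

for degrees in (1, 3, 5):
    print(f"Raise setpoint by {degrees} degC: about {cooling_energy_saving(degrees):.0%} saving")
```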

The ASHRAE shake-up

For years, 20°C to 22°C (68°F to 72°F) was considered the ideal temperature range for IT equipment. In 2004, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) recommended an operating temperature range of 20°C to 25°C (68°F to 77°F) based on its study and advice from equipment manufacturers. Seeing the advantage, engineers raised temperatures closer to the 25°C (77°F) upper limit.

*Engineering simulation is a risk-free way to raise temperatures in your data centers and quantify the cause and effect on opex/capex. Video

[Temperature chart. Source: DCD]

ASHRAE shook things up in 2008 with the addendum Environmental Guidelines for Datacom Equipment, in which the organization expanded the recommended operating temperature range from 20°C to 25°C (68°F to 77°F) to 18°C to 27°C (64.4°F to 80.6°F). To ease concerns, ASHRAE engineers mention in the addendum that increasing the operating temperature has little effect on component temperatures, but should offer significant energy savings.
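Expressed as a trivial check (dry-bulb temperature only; the allowable classes and humidity limits in the ASHRAE guidance are deliberately left out of this sketch):

```python
RECOMMENDED_2004 = (20.0, 25.0)  # degC, ASHRAE 2004 recommended range
RECOMMENDED_2008 = (18.0, 27.0)  # degC, expanded in the 2008 addendum

def within(range_c: tuple, inlet_c: float) -> bool:
    """True if a server inlet temperature sits inside the given recommended range."""
    low, high = range_c
    return low <= inlet_c <= high

# The Google and Facebook inlet temperatures cited later in this article:
for inlet in (26.6, 29.4):
    print(f"{inlet} degC within 2008 recommended range: {within(RECOMMENDED_2008, inlet)}")
```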

Also during 2008, Intel ran a ten-month test involving 900 servers; 450 were in a traditional air-conditioned environment and 450 were cooled using outside air that was unfiltered and without humidity control. The only concession was to keep the air temperature within 17.7°C to 33.3°C (64°F to 92°F). Despite the dust, uncontrolled humidity and large temperature swings, the unconditioned module's failure rate was just two percent higher than the control's, and it realized a 67 percent power saving.

In 2012, a research project at the University of Toronto resulted in the paper Temperature Management in Data Centers: Why Some (Might) Like It Hot. The research team studied component reliability data from three organizations and dozens of data centers. “Our results indicate that, all things considered, the effect of temperature on hardware reliability is weaker than commonly thought,” the paper mentions. “Increasing data center temperatures creates the potential for large energy savings and reductions in carbon emissions.”

Between the above research and their own efforts, it became clear to those managing mega data centers that it was in their best interest to raise operating temperatures in the white space.

Google and Facebook get hotter

By 2011, Facebook engineers were exceeding ASHRAE’s 2008 recommended upper limit at the Prineville and Forest City data centers. “We’ve raised the inlet temperature for each server from 26.6°C (80°F) to 29.4°C (85°F)…,” writes Yael Maguire, then director of engineering at Facebook. “This will further reduce our environmental impact and allow us to have 45 percent less air-handling hardware than we have in Prineville.”

Google data centers are also warmer, running at 26.6°C (80°F). Joe Kava, vice president of data centers, in a YouTube video mentions: “Google runs data centers warmer than most because it helps efficiency.”

It's fine for Facebook and Google to raise temperatures, say several commercial data center operators. Both companies use custom-built servers, which means hardware engineers can design the server's cooling system to run efficiently at higher temperatures.

That should not be a concern, according to Brett Illers, Yahoo senior project manager for global energy and sustainability strategies. Illers mentions that Yahoo data centers are filled with commodity servers, and operating temperatures are approaching 26.6°C (80°F).

David Moss, cooling strategist in Dell’s CTO group, and an ASHRAE founding member, agrees with Illers. Dell servers have an upper temperature limit well north of the 2008 ASHRAE-recommended 27°C (80.6°F); and at the ASHRAE upper limit, server fans are not even close to running at maximum.

All the research, and what Facebook, Google and other large operators are doing temperature-wise, is not influencing many commercial data center managers. It is time to figure out why.

David Ruede, business development specialist for Temperature@lert, has been in the data center “biz” a long time. During a phone conversation, Ruede explains why temperatures are below 24°C (75°F) in most for-hire data centers.

Historical inertia

For one, what he calls “historical inertia” is in play. Data center operators can’t go wrong keeping temperatures right where they are. If it ain’t broke, don’t fix it, especially with today’s service contracts and penalty clauses.

A few more reasons from Ruede: data center operators can't experiment with production data centers, electricity rates have remained relatively constant since 2004, and operators worry that usage surges may adversely affect contracts.

Both Ruede and Moss cited an often overlooked concern. Data centers last a long time (10-25 years), meaning legacy cooling systems may not cope with temperature increases. Moss mentions: “Once walls are up, it’s hard to change strategy.”

Chris Crosby, founder and CEO of Compass Data Centers, knows about walls, since the company builds patented turn-key data centers. That capability provides Crosby with first-hand knowledge of what clients want regarding operating temperatures.

Crosby says interdependencies in data centers are being overlooked. “Server technology with lower delta T, economizations, human ergonomics, frequency of move/add/change, scale of the operation… it’s not a simple problem,” he explains. “There is no magic wand to just raise temperatures for savings. It involves a careful analysis of data to ensure that it’s right for you.

“The benefits when we’ve modeled using Romonet are rounding errors,” adds Crosby. “To save 5,000 to 10,000 dollars annually for a sweatshop environment makes little sense. And, unless you have homogeneous IT loads of 5+ megawatts, I don’t see the cost-benefit analysis working out.”

Some people are warming to the idea: Hank Koch, vice-president of facilities for OneNeck IT Solutions, gave assurances that there are commercial data centers, even colocation facilities, running at the high end of the 2008 ASHRAE-recommended temperature range. Koch says OneNeck's procedure regarding white-space temperature is to sound an alarm when the temperature reaches 25.5°C (78°F). However, temperatures are allowed to reach 26.6°C (80°F).

So why aren't data centers getting warmer when there is money and the environment to be saved? The answer is that it's not as simple as we thought. Data centers are complex ecosystems, and increasing operating temperatures requires more than just bumping up the thermostat setting.

This article, by Michael P. Kassner, appeared in the March 2015 issue of DatacenterDynamics magazine.


Future Facilities updates 6SigmaDCX – What’s New in Release 9.3


Geared to making your life easier, they’re delivering new objects, speed improvements and a range of new features in the latest version of the DCX suite. They’ve built on the incredible success of 9.0 to make a minor release that punches way above its weight.

Join the free webcast to see what’s new in 9.3.

Register Here

Wednesday 1st July 2015

Time:   2:00am PST /   5:00am EST / 10:00am BST
8:00am PST / 11:00am EST /  4:00pm BST

Wednesday 8th July 2015

Time:   11:00am PST /   2:00pm EST /  7:00pm BST

Sneak Peek at Future Facilities video series:

http://www.6sigmadcx.com/media/videos/Improving-dcim-and-monitoring-with-simulation.php



How to improve DCIM and Monitoring with Engineering Simulation

Before we start, let us be clear about one thing: environmental monitoring and measurement systems are critical components in managing your data center. This is not a ‘one versus the other’.

5-Part Video Series – This series takes a look at the unforeseen risks of running a data center that relies solely on environmental monitoring.

Click Video > 

Part 1 – Thermal Mapping

Part 2 – Asset Safety

Part 3 – Deployment

Part 4 – Trending

Part 5 – Failure Analysis

