Using CFD and Engineering Simulation Analysis for Product Development in Data Centers

By: Bruce Long, Design Engineer, Upsite Technologies   


Bruce has over 15 years of experience in the design and development of new products, from concept to production, of uninterruptible power supply products for both the IT and commercial markets. He’s a certified U.S. Department of Energy Data Center Energy Practitioner (DCEP) as well as a past member of The Green Grid, where he led the development of the EPA data center efficiency assessment service.


Over the last ten years, innovation in data center design focused on energy efficiency and operational optimization has become commonplace. The modern data center is a complex and intertwined ecosystem dominated by power distribution, cooling infrastructure/air distribution, and airflow management at the room, row and cabinet level. The daunting task facing today’s IT managers and data center engineers is to maximize the utilization of the given infrastructure, optimize the physical layout of the floor space and minimize energy consumption, all while maintaining an extremely high level of availability.

The design and implementation of products for the data center space must be made within the context of the ecosystem as a whole. Of most importance are changes to the data center that affect the cooling infrastructure and air flow within the data center. Small changes at the room, row or cabinet level can have a significant, even catastrophic, effect on IT equipment temperatures and overall uptime of the data center. Fortunately, modern software tools, specifically Engineering Simulation and Computational Fluid Dynamics (CFD), provide an accurate means to model and simulate the effect of single, multiple or complex changes to a data center.


From a new product development standpoint, CFD has many benefits:

  1. Reduces time and cost of progressive prototype creation and testing. By modeling different design proposals one can eliminate many of the design iterations that do not meet design, functional, or other requirements. This greatly reduces the number of physical prototypes that need to be built and tested.
  2. Identifies design deficiencies and provides data for design optimization without the need of physical models.
  3. Analyzes the performance of the proposed product in the data center and reduces risk to data center uptime and operations. It also provides an excellent and accurate means to understand how the new product affects air distribution, IT temperatures, and overall performance of the data center; and how the data center can be further optimized to take advantage of the new product or component.
  4. Allows for the most cost-effective solution in the shortest period of time. Time to market is reduced by eliminating most of the nonviable design iterations that would otherwise need to be built and tested (see the sketch after this list).
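As a loose illustration of point 1, a development team might screen candidate designs against a thermal limit before committing to any physical prototype. The sketch below is hypothetical: the design names and simulated inlet temperatures stand in for real CFD output, and the 27°C threshold is simply the upper end of the ASHRAE-recommended inlet range.

```python
# Hypothetical screening of design candidates against simulated results.
# The temperatures stand in for CFD output; they are not real data.
ASHRAE_RECOMMENDED_MAX_C = 27.0  # common upper limit for IT inlet temperature

candidate_designs = {
    "baseline_no_containment": 31.2,    # max simulated IT inlet temp, deg C
    "partial_aisle_containment": 27.8,
    "full_aisle_containment": 24.5,
}

# Only designs that keep inlet temperatures within the limit move on to prototyping.
viable = {name: t for name, t in candidate_designs.items()
          if t <= ASHRAE_RECOMMENDED_MAX_C}

print("Designs worth prototyping:", sorted(viable))
```

In this sketch only one of three iterations would ever be built, which is the cost and schedule saving the list above describes.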


In summary, CFD modeling of new products for the data center, and of their resultant impact on data center performance, should be considered mandatory for today’s complex data center ecosystem. Engineering Simulation using CFD provides the means to design and implement optimized solutions, without compromising uptime, while meeting requirements for increased energy efficiency, increased infrastructure utilization and decreased operational cost.

Learn how Upsite Technologies used CFD analysis in the development of AisleLok Modular Containment by downloading our free white paper: AisleLok® Modular Containment vs. Legacy Containment: A Comparative CFD Study of IT Inlet Temperatures and Fan Energy Savings.



Modernizing Enterprise #DataCenters for Fun and Profit


Modernizing Enterprise Data Centers for Fun and Profit, Jonathan Koomey’s online class for managers about modernizing data center operations, will be offered from October 5 through November 13, 2015.

Twenty first century data centers are the crown jewels of global business. No modern company can run without them, and they deliver business value vastly exceeding their costs. The big hyperscale computing companies (like Google, Microsoft, Amazon, and Facebook) are the best in the industry at extracting that business value, but for many enterprises whose primary business is not computing, the story is more complicated.

If you work in such a company, you know that data centers are often strikingly inefficient. While they may still be profitable, their performance falls far short of what is possible. And by “far short” I don’t mean by 10 or 20 percent, I mean by a factor of ten or more.


Image provided by Cole Crawford @coleinthecloud

A shocking waste

The waste embodied in most enterprise data center operations should be shocking to anyone who cares about financial performance. In a recent study of 4000 enterprise servers, my colleagues and I found that about 30 percent of these servers were comatose (using electricity but delivering no useful computing), a result consistent with results from McKinsey and the Uptime Institute. McKinsey also found that typical utilization for servers in enterprises “rarely exceeds” 6 percent. Many enterprises don’t even know how many servers they have, and couldn’t tell you their average utilization if they tried.

Why do such inefficiencies persist? The reasons are not primarily technical, but revolve around people, institutions, and incentives, and start with fractured management chains that have separate budgets for IT and facilities departments. Poor measurement, misplaced incentives, and failure to apply best-in-class technologies to full advantage lead to most data centers underperforming.

What can management do? First, centralize data center operations, with one boss, one team, and one budget. Only then will the team focus on the whole system costs and benefits of any proposed changes.


Second, tie data center performance to business performance, mapping data center infrastructure costs onto business processes, and using metrics that show the business implications of data center choices. Every part of the business should be able to compare total IT costs to benefits at the project level. Most importantly, companies should calculate – or at least, estimate – total costs and revenues per computation.
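As a rough sketch of that last metric, cost and revenue per computation are simply totals divided by whatever unit of work the business chooses to count. The numbers and the choice of "transactions" as the unit below are entirely hypothetical:

```python
# Hypothetical figures: annual fully loaded IT cost and the work it delivered.
total_it_cost_usd = 12_000_000        # servers, power, cooling, staff, licenses
total_transactions = 4_500_000_000    # however the business counts "computations"
total_revenue_usd = 30_000_000        # revenue attributed to those transactions

cost_per_transaction = total_it_cost_usd / total_transactions
revenue_per_transaction = total_revenue_usd / total_transactions

print(f"Cost per transaction:    ${cost_per_transaction:.5f}")
print(f"Revenue per transaction: ${revenue_per_transaction:.5f}")
```

Even a crude estimate like this lets a project team compare IT cost to business benefit in the same units.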

Finally, companies must use the power of IT to transform IT. The most advanced companies apply measurement and engineering simulation to optimize data center operations, tracking server inventories, monitoring real-time conditions, and projecting the impacts of proposed new IT deployments. Standardizing on a few server designs instead of dozens or hundreds reduces deployment times from months to days, and moving smaller computing users to internal clouds reduces deployment times from days to minutes. Those shifts accelerate experimentation and business innovation, both critically important to competitive advantage.

Modernize, or fall behind

Companies who fail to modernize their data centers risk falling behind competitors. Best-in-class enterprises centralize their data centers, they map data center performance onto business performance, and they use the power of IT to increase the speed of deployment, avoid unplanned outages, and keep better tabs on operations.

Modern companies require modern data centers, but transforming existing operations requires senior management attention. Those inside the data center can’t make it happen. Only management can begin transforming data centers from cost centers into cost-reducing profit centers, and that’s a result that everyone can cheer.


Original Article Here

This article appeared in the September 2015 issue of DatacenterDynamics magazine.


Zombie Servers: They’re Here and Doing Nothing but Burning #Datacenter Energy


Most companies are far better at getting servers up and running than they are at figuring out when to pull the plug, says one expert. PHOTO: SIMON DAWSON/BLOOMBERG NEWS


There are zombies lurking in data centers around the world.

They’re servers—millions of them, by one estimate—sucking up lots of power while doing nothing. It is a lurking environmental problem that doesn’t get much discussion outside of the close-knit community of data-center operators and server-room geeks.

The problem is openly acknowledged by many who have spent time in a data center: Most companies are far better at getting servers up and running than they are at figuring out when to pull the plug, says Paul Nally, principal of his own consulting company, Bruscar Technologies LLC, and a data-center operations executive with experience in the financial-services industry. “Things that should be turned off over time are not,” he says. “And unfortunately the longer they linger there, the worse the problem becomes.”

Mr. Nally once audited a data center that had more than 1,000 servers that were powered on but not identifiable on the network. They hadn’t even been configured with domain-name-system software—the Internet’s equivalent of a telephone number. “They would have never been found by any other methodology other than walking around with a clipboard,” Mr. Nally says.

In the U.S., the data centers that host everything from Facebook posts and Google queries to bank-account details and corporate spreadsheets burn a lot of energy. In fact, Google and Facebook care more about energy costs in their data centers than pretty much any other cost, because energy costs are the one thing the companies can reduce through design ingenuity. That is why these companies design their own data centers for maximum efficiency and then build them in states such as North Carolina or Oregon, near low-cost power supplies.

In 2010, the latest year for which there are estimates, data centers burned about 2% of all electricity used in the U.S.

Lately, the data-center industry has been getting a clearer picture of how widespread the zombie-server problem is. Earlier this year, Jonathan Koomey, a research fellow at Stanford University, looked at 4,000 servers used by clients of a data-center-efficiency company called TSO Logic Inc. He found that 30% of them hadn’t been used over the previous six months.

By Mr. Koomey’s calculation, there are more than 3.6 million comatose servers in the U.S. Keeping them powered up requires the services of an estimated 1.44 gigawatts of generating capacity—equivalent to three big power plants.
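A quick back-of-the-envelope check, reusing only the two figures quoted above, shows the average power draw those estimates imply per comatose server:

```python
# Back-of-the-envelope check of the comatose-server figures quoted above.
comatose_servers = 3_600_000      # Koomey's U.S. estimate
generating_capacity_gw = 1.44     # capacity needed to keep them powered

watts_per_server = generating_capacity_gw * 1e9 / comatose_servers
print(f"Implied average draw: {watts_per_server:.0f} W per comatose server")  # ~400 W
```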

Even though Mr. Koomey’s data backed up what the industry had suspected for years, he still considers the results “appalling.”

Read the Full Article Here


Breaking Design Barriers With #DataCenter Engineering Simulation


Presented at DCD15 San Francisco – To scale up and out, Vapor IO is adopting a holistic approach to physical design that starts at the chip and ends at the data center facility. Cole Crawford, CEO, discusses Vapor IO’s application of 6SigmaDCX in achieving this goal. Watch here:


Managing Airflow to Meet the Business Need


By Mark Seymour via Upsite Blog

The data center industry can take pride in the huge innovations it has made in the energy performance of cutting-edge data centers. Only 10 years ago, many people would say that an average of 3kW per rack was the limit to air cooling, and a PUE of 2.5 was common even in state-of-the-art data centers.

Today, whether you argue that PUE is flawed or support it, the improvement in data center energy consumption is impressive. Even with spiraling power densities, data centers are now using less than 10% additional energy over and above IT energy – compared with 150% previously. Facilities where only modest improvements have been made are likely to bring this figure down below 50%. So, is that job done? In my opinion, it isn’t.
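In PUE terms, "additional energy over and above IT energy" is simply PUE minus one, so the figures above correspond roughly to a PUE of 2.5 previously, under 1.1 for today's best facilities, and under 1.5 after modest improvements. A minimal sketch of that relationship:

```python
# PUE = total facility energy / IT energy, so overhead = PUE - 1.
def overhead_fraction(pue: float) -> float:
    """Energy used over and above IT energy, as a fraction of IT energy."""
    return pue - 1.0

def pue_from_overhead(overhead: float) -> float:
    """PUE implied by a given overhead fraction."""
    return 1.0 + overhead

print(overhead_fraction(2.5))    # 1.5  -> the "150% previously" figure
print(pue_from_overhead(0.10))   # 1.1  -> "less than 10% additional energy"
print(pue_from_overhead(0.50))   # 1.5  -> "below 50%" after modest improvements
```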

Focusing only on energy efficiency is a risky strategy: you need to manage the data center to ensure that you can keep installing IT to meet business needs while staying within the nominal space, power, network and cooling available. Similarly, you need to be sure that if your redundant systems fail, all of the IT equipment will be adequately cooled. You need to ask: is my installation resilient?

It is true that good practices, such as segregation via containment and blanking, tend to improve the performance of the data center cooling. But unfortunately they don’t guarantee it. Likewise, DCIM has plugged some of the gaps between IT and Facilities by providing tools to control process and share data. But these things don’t complete operational planning, because they don’t fill the engineering gap: they don’t help you to understand the engineering consequences of any actions you take, so that the actions with poor consequences can be avoided in favor of the ones with better outcomes.

Using engineering simulation in operational planning – for cooling, this means computational fluid dynamics (CFD) based tools such as 6SigmaDCX – allows you to test any future plans before implementing them. These tests determine the impact on your infrastructure’s energy efficiency, IT thermal capacity and IT thermal resilience. They allow you to make important decisions about changes, such as:

  • Is raising cooling temperatures to save energy justified, compared with the negative impact this may have on IT thermal resilience and the number of servers you can safely install?
  • If you install IT equipment that does not fit within your conceptual design limits, will it meet your performance criteria?

These are two common questions – just the tip of the iceberg – that can be answered by applying engineering simulation to data center design, operational planning and capex-opex strategy decisions.
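As a loose illustration of that kind of operational-planning check (not output from 6SigmaDCX or any specific tool), a planner might line up simulated results for each proposed change against thermal and resilience limits before acting. Every number below is invented, and the 27°C limit is simply the upper end of the ASHRAE-recommended inlet range:

```python
# Hypothetical what-if comparison; the numbers stand in for simulation output.
ASHRAE_LIMIT_C = 27.0  # maximum acceptable IT inlet temperature

scenarios = [
    # name, max inlet (normal), max inlet (one cooling unit failed), cooling kW
    ("current setpoints",              24.1, 26.5, 310),
    ("supply temp +2C",                25.9, 28.4, 285),
    ("supply temp +2C plus blanking",  25.0, 26.8, 280),
]

for name, normal_c, failure_c, cooling_kw in scenarios:
    # A change is only acceptable if it stays cool in both normal and failure cases.
    ok = normal_c <= ASHRAE_LIMIT_C and failure_c <= ASHRAE_LIMIT_C
    verdict = "acceptable" if ok else "rejected (thermal/resilience risk)"
    print(f"{name:32s} cooling={cooling_kw} kW -> {verdict}")
```

The point is not the specific numbers but the habit: every proposed change is tested against efficiency, capacity and resilience criteria before it reaches the live floor.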

Your criteria will vary from one business to another. A web-hosting organization with multiple facilities may not be concerned with short delays if hardware fails, as the pages can be served seconds later from another facility or server. However, they will want to minimize the cost per page served by improving energy efficiency. On the other hand, an investment bank will be much more focused on whether their application is available. Availability and the ability to install IT as needed (capacity) will be much more important to them than energy efficiency. It is prudent and profitable to manage data center resources, particularly airflow, to meet the business need.

To learn more about how you can safely deliver improvements to the performance of your data center, and how you can ensure that your data center continues to meet your business needs even during failure scenarios, visit Future Facilities’ Media Center.

Posted in Data Center DCIM Datacenter Datacenters Datacentre | Tagged , , , , , , , , , , , | Leave a comment

The Mega DataCenter Challenge


Integrated System Testing – Paul Smethurst, CEO of Hillstone, highlights the issues of load bank testing 100MW data centres.


The insatiable demand for data coupled with the growth of cloud-based services has changed the European data centre landscape with the arrival of the mega data centre. The mega data centre, which allows global software giants like Microsoft, Google and Apple to provide our day-to-day IT services, is also the foundation for colocation providers such as Digital Realty Trust, Equinix, Telecity and Interxion to facilitate connectivity to the cloud for multinational conglomerates in banking, telecoms, oil and gas. With such a rapid expansion of cloud services, how do you commission mega data centres of 20MW, 40MW, 80MW and 100MW?

Historically, the largest ever load bank solutions have been in the oil and gas sector and used 50-70MW of containerised load banks. The load bank would be situated outdoors as part of very large temporary generator power projects. Fortunately, the evolution of the mega data centre has taken a practical, modular build approach, with roll-out phases of dual halls at 2,500kW or a single 5,000kW empty white space. However, such a reduction in rating does not reduce the challenge of sourcing the quantity of load banks needed to complete integrated system testing (IST).
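Even at these "reduced" ratings, the quantities involved are large. A rough count, using the hall sizes above and a few typical unit ratings purely for illustration:

```python
# Rough count of load units needed to exercise one 5,000kW white-space hall.
hall_capacity_kw = 5000  # single empty hall from the build phases described above

for unit_rating_kw in (5, 10, 20):  # illustrative load bank unit ratings
    units_needed = -(-hall_capacity_kw // unit_rating_kw)  # ceiling division
    print(f"{unit_rating_kw:>2} kW units: {units_needed} required for a {hall_capacity_kw} kW hall")
```

Hundreds to a thousand rental units per hall is the sourcing challenge the article is describing.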

Integrated System Testing

The primary objective of data hall IST commissioning is to verify the mechanical and electrical systems under full-load operating conditions, plus maintenance and failure scenarios, to ensure the data hall is ready for the deployment of active equipment. Today’s IST requires a package of equipment that closely replicates the data hall in live operation. Server simulators, load banks, flexible cable distribution, automatic transfer switches, data logging for electrical power and environmental conditions (temperature and humidity), and the ability to incorporate the load banks within temporary hot aisle separation partitions give the foundations for a successful IST. These tools allow the commissioning report to present a computational fluid dynamics (CFD) model of the actual data hall operation.

The selection and use of server simulators, typically rated between 3kW and 6kW to match the expected IT rack loads, gives a granular distribution of low delta-T heat across the data hall. Such detailed consideration of air distribution during testing is required due to the scale of the IST and the increased volumes of air that are affected in the mega data centre environment. This replicated heat allows the mechanical cooling systems to run at their optimum design temperature, which helps ensure that future active IT equipment will not overheat and fail once deployed. If the commissioning occurs prior to deployment of IT cabinets, server simulators can be housed in portable mini-towers for distribution across the empty space.

Flexible cable distribution facilitates the cabling of high quantities of 5kW to 20kW-rated load banks to A and B feeds on a PDU or busbar infrastructure. If the cable distribution also includes the ability to automatically transfer the load, then the commissioning team can replicate maintenance procedures and failure scenarios during the IST.

In order to report the successful operation and performance of the room during the IST, the commissioning team will need to monitor and record electrical and environmental data. Having electrical data available within the load package avoids the use of a power analyser with exposed connections to live terminals in the data hall. When server simulators include temperature sensors, extensive temperature analysis allows CFD modeling to be performed during the testing period. While the data centre will ultimately have a building management system (BMS) and a data hall fitted out with the latest DCIM system, these are unlikely to be fully operational at the time of testing. The project team should source a provider of server simulators at the earliest opportunity in the project and avoid alternative load bank solutions that will cause delays to the IST program.
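As a simple illustration of the kind of logging described here (the field names and values are hypothetical, not any vendor's format), each instrumented server simulator might report power draw plus inlet and outlet temperatures, from which per-rack delta-T can be derived for the CFD model:

```python
import csv

# Hypothetical IST log records from instrumented server simulators.
records = [
    # rack, feed, power_kw, inlet_c, outlet_c
    ("A01", "A", 4.8, 22.4, 34.1),
    ("A01", "B", 4.7, 22.6, 33.8),
    ("A02", "A", 5.1, 23.0, 35.2),
]

# Write a flat log that commissioning reports and CFD models can both consume.
with open("ist_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["rack", "feed", "power_kw", "inlet_c", "outlet_c", "delta_t_c"])
    for rack, feed, kw, inlet, outlet in records:
        writer.writerow([rack, feed, kw, inlet, outlet, round(outlet - inlet, 1)])
```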


Common Mistakes

The restricted choice in the market limits the availability of suitable load bank equipment. Selecting the wrong type of load bank solution on cost alone can compromise the validity of the IST, and the hidden problems will not manifest until the data hall goes live with active IT equipment.

The temptation to choose 20kW three-phase industrial space heaters rather than load bank server simulators affects the commissioning of the mechanical cooling systems. The design of such heaters prevents the ambient room temperature from being elevated to the design criteria needed to commission the CRAC or AHU units. Some suppliers have removed the thermostatic controls, only to find the space heater overheats and in some circumstances catches fire.

The choice of large 110kW load banks can be justified when testing site equipment such as PDU panels, busbars or switchboards to Level 3 ASHRAE requirements. These load banks provide a cost-effective solution for proving the electrical infrastructure of the mega data centre; however, they will create localised hotspots or areas of concentrated heat if used for the commissioning of the cooling systems.

In extreme circumstances during tier certification, the electrical load has been provided by 2kW infrared heaters or 1kW hair dryers. Infrared heaters create an ambient temperature of >40 degrees Celsius and wall skin temperatures of 70 degrees Celsius, while hair dryers are not designed for the continuous operation required in an IST. These low-cost solutions should not be used to replicate the operation of IT equipment: they risk costly delays while compromising the integrity of the testing program.

Achieving Cost Savings

Completing an IST to budget takes on greater importance when commissioning a mega data centre, especially given the number of rental load banks that will be required. The increased size of the facility increases the time needed to complete the commissioning, but by combining the latest technologies, the traditional delays often associated with load banks can now be avoided. By selecting solutions that give enhanced data logging, the commissioning report will also give the client detailed operating information and fully auditable reports for future tenants considering use of the space. The logged data can also be used in CFD models to ensure that the mega data centre is ready to use.

Originally Published Here


@Afcom @DataCenterWorld Using Simulation to Increase Efficiency, Resiliency, and Capacity



Using Simulation to Increase Efficiency, Resiliency, and Capacity

Presented by: Mark Seymour, CTO, Future Facilities


  • How to use simulation to address real-world data center operations problems
  • How to make data center designs more resilient to operational practice
  • How design teams, engineers and operations can work collaboratively for maximum efficiency

The importance of the data center to organizations today is higher than ever before. In most centers, any IT-related business change can be accommodated if it falls within the data center’s design capacity. Alas, designs are based on assumptions which often cannot be followed in operation, leading to risks and costs for the data center and hence the business as a whole. This hands-on workshop will allow practitioners to use simulation to investigate problem scenarios and test solutions. The workshop is intended for engineering and operations staff who need to address real-world problems during operation in their own DCs. The course will also cover how design teams can make designs more resilient to operational practice. Finally, the workshop will cover how everyone in the data center can operate their DC design more effectively over time.

Sneak Peek at Future Facilities video series:
