CBRE Data Center Case Study: Global Bank Saves $10 Million Plus



Download the full CBRE Case Study Here

View the Overview App Note Here

Global property advisor CBRE has published the white paper “At the End of the Day – It’s Lost Capacity” as a roadmap to a fully deployed data center. When CBRE asked Future Facilities to help save energy at the EMEA data center headquarters of a global banking organization, we sprang into action.

We used our pioneering ACE Data Center Performance Assessment, built on our industry-leading 6SigmaDCX software, to highlight the potential for energy savings and to show CBRE how to recover lost capacity.

Crucially, we created a Virtual Facility (VF) model of each hall in our software suite, so improvements could be tested before implementation without risking the availability of the 6,000 IT assets. That allowed us to give CBRE’s client:

> Energy savings: $1.15M per year for a single data center of 22,000 sq ft

> Recovery of lost IT loading capacity: CBRE estimates 350 kW

> Total Capacity Recovered: $8.75 Million
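For context, the quoted figures imply a valuation per recovered kilowatt that can be checked with simple arithmetic. The per-kW value below is an inference from the numbers above, not a figure CBRE states:

```python
# Back-of-envelope check of the case study figures (illustrative only).
energy_savings_per_year = 1_150_000  # USD per year, quoted for one 22,000 sq ft hall
recovered_capacity_kw = 350          # kW of IT load, CBRE's estimate
capacity_value = 8_750_000           # USD, quoted total capacity recovered

# Implied valuation per recovered kW -- an inference, not a quoted figure
value_per_kw = capacity_value / recovered_capacity_kw
print(f"Implied value per recovered kW: ${value_per_kw:,.0f}")  # $25,000
```

At roughly $25,000 per kW of recovered IT capacity, the one-off capacity recovery dwarfs even the substantial annual energy savings.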

By the time we had finished, we’d not only saved vast sums of capital, but had also delivered a previously unheard-of level of operational flexibility in those two data halls.

Learn about the transformation and understand why our simulation and predictive modeling is a “must have” for every data center owner-operator. We’ll also show the limitations of DCIM and why this client could not rely on it to achieve these kinds of operational returns.

Download the CBRE Case Study Here

Posted in Data Center, DCIM, Datacenter, Datacenters, Datacentre

North America Data Center Investment Trends and Forecast to 2020

Originally posted on :

As of 2014, the North America region is the largest in the world. It comprises 33.1% of the world’s data center asset base by space and 33.8% by power, with a lower proportion of 29.5% by investment. By 2020, the proportions of the world’s data center assets located in North America will decline to 31% of white space, 33.1% of power and 27% of data center investment. The lower level of investment relative to the region’s asset base is the reverse of the situation seen in the smallest regions. In North America, the massive critical mass of the industry and the strength of competition permit economies of scale. The North American market is also the most technologically advanced and able to reduce investment costs through the adoption of suitable technologies.

The North American region comprises two highly evolved and long-established markets and a number of the key asset metrics confirm trends common to other…



Model & Improve Your Data Center’s Availability, Capacity & Efficiency


The ACE Data Center Performance Metric Looks At The Impact Of Facility, Real Estate & IT Plans

When your data center is new, it’s designed to be as efficient as possible when it comes to cooling, capacity, and performance. As you add equipment over time, however, things can become a whole lot messier.

“Every data center,” says Andy Lawrence, vice president of research, data center technologies, at 451 Group, “is initially designed to be as efficient as possible in cooling the IT equipment for a defined amount of energy, to work to 100% of its design capacity, and to work to a level of risk that was agreed at the outset according to business criteria.”

That ideal doesn’t last. Most data centers start out empty and fill, he says, and may never come close to their design efficiency in terms of energy efficiency or other elements. “Meanwhile, decisions about power distribution and cooling, perhaps to reduce energy waste, may increase the risk of downtime.”

Maintain Efficiency and Capacity

To maintain ideal levels of capacity and efficiency over time, you need to understand how your plans and objectives impact the data center. There are metrics to help you do that. PUE, for example, is useful if you’re trying to work out the energy efficiency of your cooling overhead or data center infrastructure overhead, Lawrence says. Coefficient of performance, or COP, relates to the efficiency of using electricity to provide cooling, he says.
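As a quick illustration of the two metrics Lawrence mentions (the figures in the example are invented, not from the case study):

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power (>= 1.0)."""
    return total_facility_kw / it_kw

def cop(heat_removed_kw: float, cooling_input_kw: float) -> float:
    """Coefficient of performance: heat removed per unit of electrical
    input to the cooling plant (higher is better)."""
    return heat_removed_kw / cooling_input_kw

# A facility drawing 1,500 kW to run 1,000 kW of IT load:
print(pue(1500, 1000))  # 1.5
# A chiller removing 1,000 kW of heat for 300 kW of electricity:
print(round(cop(1000, 300), 2))  # 3.33
```

Both are single-number ratios, which is exactly why (as the next paragraph notes) neither gives a system-level picture on its own.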

Metrics such as PUE and COP don’t provide a big-picture view of the data center’s health when it comes to availability, capacity, and efficiency. “Other metrics quantify one aspect of data center performance in isolation of other aspects,” says Sherman Ikemoto, director at Future Facilities. “System-level performance blindness degrades the system.”

Meet Objectives

Future Facilities worked to overcome those issues in designing the ACE Data Center Performance metric. “Data centers must do three things: protect the IT hardware, support a specified amount of IT hardware (usually expressed in kW or MW of IT power draw), and meet operations budget constraints,” Ikemoto says.

Most organizations distribute responsibility for these objectives across individuals or teams, he says, which makes it difficult for organizations to understand the impact of one team on another or on the data center as a system. “The ACE Score quantifies the impact of facility, real estate, and IT plans on the objectives of others and on the operational intent of the data center system.”

With a view of the entire data center, you can strike a balance between operational and system performance, Ikemoto says. For example, many companies are installing containment to improve PUE. But PUE overlooks the impact of containment on the health of the IT equipment or on the ability to support the intended IT capacity. The ACE Score, however, reveals the impact of these changes on PUE, IT, and data center health simultaneously, enabling operators to greatly improve the cost/benefit assessment of a change like containment, he says.

The metric can help with other situations, such as determining the additional IT load your data center can accommodate, quantifying server availability by predictively modeling power and cooling failure, or determining cooling efficiency by visualizing airflow and temperature.

The Best Data Center

The ACE metric, Lawrence says, “is quite good for a particular situation that a lot of data center managers are in: How do we get the best out of the data center we’ve designed? And how do we do our best to meet those design criteria? As the metric points out, that involves tradeoffs in availability, capacity, or efficiency.”

Lawrence says the ACE metric is helpful for organizations with availability, capacity, or efficiency constraints and for organizations with more than one data center where they’re considering moving or consolidating workloads or considering building another data center.

The ACE metric is a simple way of saying, against these models, we’ll run out of availability, capacity, or efficiency if we make this change, Lawrence says. “You won’t know unless you can model for it and see where you reach the limits.”

ACE Metric

Availability: % of IT configuration (in kW or MW) that is connected to sources of redundant power and cooling.

Capacity: % of full IT configuration (in kW or MW), projected from installed IT configuration, that is connected to sources of redundant power and cooling.

Efficiency: DCiE (inverse of PUE).
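Taking the three definitions above literally, the scores can be sketched as percentages. This is a hypothetical reading for illustration, not Future Facilities’ published scoring method, and the parameter names are invented:

```python
def ace_scores(installed_it_kw, redundant_it_kw,
               projected_full_it_kw, redundant_at_full_kw,
               it_power_kw, total_facility_kw):
    """A literal reading of the A, C and E definitions above, as percentages."""
    availability = 100.0 * redundant_it_kw / installed_it_kw
    capacity = 100.0 * redundant_at_full_kw / projected_full_it_kw
    efficiency = 100.0 * it_power_kw / total_facility_kw  # DCiE = 1 / PUE
    return availability, capacity, efficiency

# 950 of 1,000 installed IT kW on redundant power/cooling; 1,200 of a
# projected 1,400 kW full build-out supportable; 1,000 kW IT / 1,600 kW total:
a, c, e = ace_scores(1000, 950, 1400, 1200, 1000, 1600)
print(round(a, 1), round(c, 1), round(e, 1))  # 95.0 85.7 62.5
```

A facility can score well on one axis and poorly on another, which is the point of reporting all three together.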


How to Predict Data Center Failures


As news hits of the 3 Mobile data center failures in Ireland, the benefits of guarding against thermal downtime by using simulation are becoming clearer.

It’s becoming more common for data centers to guard against thermal downtime by simulating the effects of electrical failure in software such as 6SigmaDCX. Temperature rises in the data center can be predicted, allowing operators to take preventative action rather than make reactive fixes.
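To see why prediction matters, consider the crudest possible thermal model: a single well-mixed room that loses all cooling while the IT load keeps dissipating heat. This lumped-capacitance sketch is nothing like a full CFD simulation (which resolves airflow, containment and per-rack hotspots), but it shows why the time window before a temperature threshold is breached is the quantity operators care about:

```python
def seconds_to_threshold(it_load_kw, thermal_mass_kj_per_c,
                         start_temp_c, threshold_c):
    """Lumped-capacitance estimate of time until a room hits a temperature
    threshold after total cooling loss: IT heat (kW = kJ/s) warms the room's
    combined thermal mass at Q / (m*c) degrees C per second.
    Assumes perfect mixing and no heat loss -- a deliberate simplification."""
    rise_rate_c_per_s = it_load_kw / thermal_mass_kj_per_c
    return (threshold_c - start_temp_c) / rise_rate_c_per_s

# Hypothetical hall: 200 kW of IT load, 40,000 kJ/C of air-plus-hardware
# thermal mass, supply air at 22 C, alarm threshold at 32 C:
minutes = seconds_to_threshold(200, 40_000, 22, 32) / 60
print(round(minutes, 1))  # 33.3
```

In reality some racks hit their limits far sooner than the room average, which is exactly the blind spot CFD-based failure simulation exposes.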

Check out this video, which shows a simulated cooling failure (the same can be done for power) and highlights all hardware that will be affected.


Improving Monitoring with Simulation, Part 5: Failure Analysis

Before we start, let us be clear about one thing: environmental monitoring and measurement systems are critical components in managing your data center. This is not a case of one versus the other.

This video series takes a look at the unforeseen risks in running a data center that relies solely on environmental monitoring.


How to Calibrate a Data Center

What is a Valid Data Center Model? An Introduction to Calibration for Modeling & Simulation

Future Facilities’ CTO Mark Seymour publishes the first in a series of white papers discussing model refinement and calibration when predictively modeling a data center

Download the pdf here:

Future Facilities, a leading provider of data center design and operations management software, today announced that Mark Seymour, data center cooling expert and chief technical officer at Future Facilities, has published the first in a series of white papers explaining the importance of model refinement and calibration when predictively modeling the availability, physical capacity and cooling efficiency of a data center. Aimed at owner-operators, What is a Valid Data Center Model? An Introduction to Calibration for Predictive Modeling brings clarity to an area of data center operations that is increasingly important.

“while the overall facility is complex, many of the individual elements can be individually assessed”

For many data center owner-operators, using computational fluid dynamics (CFD) simulations to predictively model the impact that future changes will have on availability, physical capacity and cooling efficiency (ACE), or to help resolve ACE problems in a data center, is second nature.


And, despite the historical connotations that CFD brings to mind – a complex and intimidating solution requiring expert knowledge to use – the reality is that predictive modeling has never been simpler or easier for the lay person to take advantage of.

But the success of predictive modeling still lies ultimately in the hands of the user. Summed up colloquially as “garbage in, garbage out”, the most pressing dangers for predictive modelers are that their computer models lack fidelity and are uncalibrated. Why? Because low-fidelity models (garbage in) lead to inaccurate results (garbage out) that bear no resemblance to reality (uncalibrated).

For some, the solution to the “garbage in, garbage out” challenge is not to improve the model and calibrate it, but to lazily fix the results of the model to match what is being seen in real life. “That renders the model useless”, says Seymour. Instead, “owner-operators and consultants must exercise due diligence: review and measure the actual installation, then improve the accuracy of the model until it produces dependable results”.

So, how do you make the model dependable? How do you calibrate it? Seymour’s paper provides introductory answers to exactly that question, highlighting that it is a fairly simple process, but one that benefits from a systematic approach. He promises follow-on papers later in the year that will cover specific problem areas, but for the moment he reveals in this paper what 20 years’ experience has taught him are the most common mistakes that people make.
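The review-and-refine loop Seymour describes can be sketched in a few lines: compare measured sensor readings against the model’s predictions and flag where they disagree beyond a tolerance. This is a generic illustration with hypothetical sensor names, not the 6SigmaDC calibration workflow itself:

```python
def calibration_errors(measured_c, modeled_c, tolerance_c=1.0):
    """Compare measured sensor temperatures (deg C) against the model's
    predictions and return sensors whose error exceeds the tolerance --
    candidates for review and model refinement."""
    flagged = {}
    for sensor, t_measured in measured_c.items():
        error = modeled_c[sensor] - t_measured
        if abs(error) > tolerance_c:
            flagged[sensor] = error
    return flagged

measured = {"rack_A1_inlet": 24.5, "rack_B3_inlet": 27.0}
modeled  = {"rack_A1_inlet": 24.8, "rack_B3_inlet": 31.5}
print(calibration_errors(measured, modeled))  # {'rack_B3_inlet': 4.5}
```

Fixing the model element behind each flagged sensor (a mis-specified grille, an unmodeled cable bundle), rather than fudging the results, is the discipline the paper argues for.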

Using real life examples illustrated using Future Facilities’ 6SigmaDC suite of tools, he shows how to overcome systematic errors affecting floor tiles, grilles, cabinets, cable bundles and other common data center objects. Seymour also provides advice on the “tough modeling decisions”, including whether or not to model poorly defined obstructions “such as water pipes under a cooling unit”. Specific advice is provided for calibration of the air supply system and its component parts, with Seymour cautioning upfront, “Do not overlook the fact that it is not just the bulk airflow that matters, but also the flow distribution”.

By the end of the text, the reader will not only have a sound appreciation for good, systematic calibration practice, but also understand that, “while the overall facility is complex, many of the individual elements can be individually assessed”. Seymour concludes by saying, “this will make it possible to diagnose why the initial model does not adequately represent the facility… normally, it won’t!”.

Download the pdf here:

About Mark Seymour:

Mark Seymour is chief technical officer and a founding member at Future Facilities, which this year celebrates its tenth anniversary. With an academic background in applied science and numerical mathematics, Mark enjoyed a successful career in the defense industry for over a decade before moving to the commercial sector. There he has since accumulated 20 years’ experience in the cooling of data center and communication environments. A recognized expert in the predictive modeling of airflow for building HVAC and data centers in particular, Mark is an industrial advisory board member of NSF-ES2 research program and a corresponding member actively participating in ASHRAE TC9.9.


Video: The Calibrated Data Center

DCK Datacenter Industry Perspective

Watch the video here:


The issue with many new “next big things” is that they tend to skip one or more essential steps. In this brief video, Compass Datacenters’ CEO, Chris Crosby, explains why calibrating your data center is the essential step required to accurately measure and model data center performance, and why it provides the necessary bridge to new capabilities like the Software Defined Data Center.
