Applying the Scientific Method in Data Center Management


Data center management isn’t easy. Computing deployments change daily, airflows are complicated, misaligned incentives drive behavior at odds with company profits, and most enterprise data centers lag far behind their cloud-based peers in utilization and total cost of ownership.

One reason why big inefficiencies persist in enterprise data centers is inattention to what I call the three pillars of modern data center management: tracking (measurement and inventory control), developing good procedures, and understanding physical principles and engineering constraints.

Another is that senior management is often unaware of the scope of these problems. For example, a recent study I conducted in collaboration with Anthesis and TSO Logic showed that 30 percent of servers in our data set were comatose: using electricity but delivering no useful information services. The result is tens of billions of dollars of wasted capital in enterprise data centers around the world, a figure that should alarm any C-level executive. Yet little progress has been made on comatose servers since the problem first surfaced years ago as the target of the Uptime Institute’s Server Roundup.
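To see how quickly that waste adds up, here is a back-of-envelope sketch. The 30 percent comatose rate comes from the study above; the fleet size and per-server cost are illustrative assumptions of mine, not figures from the study.

```python
# Back-of-envelope estimate of capital tied up in comatose servers.
# Only the 30% comatose rate comes from the study cited above; the
# fleet size and per-server cost are illustrative assumptions.

comatose_rate = 0.30          # share of servers doing no useful work (study figure)
enterprise_servers = 20e6     # assumed global enterprise server count (illustrative)
cost_per_server = 3000.0      # assumed installed capital cost per server, USD (illustrative)

wasted_capital = comatose_rate * enterprise_servers * cost_per_server
print(f"Estimated stranded capital: ${wasted_capital / 1e9:.0f} billion")
```

Even with conservative inputs, the stranded capital lands in the tens of billions of dollars, which is why the problem deserves C-level attention.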

Read more: $30B Worth of Idle Servers Sit in Data Centers

One antidote to these problems is to bring the scientific method to data center management. That means creating hypotheses, experimenting to test them, and changing operational strategies accordingly, in an endless cycle of continuous improvement. Doing so isn’t always easy in the data center, because deploying equipment is expensive, and experimentation can be risky.

Is there a way to experiment at low risk and modest cost in data centers? Why yes, there is. As I’ve discussed elsewhere, calibrated models of the data center can be used to test the effects of different software deployments on airflow, temperatures, reliability, electricity use, and data center capacity. In fact, using such models is the only accurate way to assess the effects of potential changes in data center configuration on the things operators care about, because the systems are so complex.

Recently, scientists at the State University of New York at Binghamton created a calibrated model of a 41-rack data center to test how accurately one type of software (6SigmaDC) could predict temperatures in that facility and to create a test bed for future experiments. The scientists can reconfigure the data center easily, without fear of disrupting mission-critical operations, because the setup is solely for testing. They can also run different workloads to see how those might affect energy use or reliability in the facility.

Read more: Three Ways to Get a Better Data Center Model

Most enterprise data centers don’t have such flexibility. Those with sufficient scale can cordon off a section of the facility as a test bed, but for most enterprises such direct experimentation is impractical. What almost all of them can do is create a calibrated model of their facility and run the experiments in software.

What the Binghamton work shows is that experimenting in code is cheaper, easier, and less risky than deploying physical hardware, and just about as accurate (as long as the model is properly calibrated). In their initial test setup, they reliably predicted temperatures with just a couple of outliers for each rack, and those results could no doubt be improved with further calibration. They were able to identify the physical reasons for the differences between modeling results and measurements, and once identified, the path to a better and more accurate model is clear.
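A minimal sketch of that kind of calibration check, using made-up rack temperatures rather than the Binghamton data, might compare predicted and measured values and flag racks whose error exceeds a chosen threshold:

```python
# Compare modeled vs. measured rack inlet temperatures and flag outliers.
# All temperatures and the error threshold are illustrative values,
# not data from the Binghamton study.

predicted = {"rack01": 22.1, "rack02": 23.4, "rack03": 24.0, "rack04": 25.2}
measured  = {"rack01": 22.4, "rack02": 23.1, "rack03": 26.3, "rack04": 25.0}

THRESHOLD_C = 1.5  # assumed acceptable model error, in degrees Celsius

outliers = []
for rack, pred in predicted.items():
    error = abs(pred - measured[rack])
    if error > THRESHOLD_C:
        outliers.append((rack, round(error, 1)))

print("Racks needing recalibration:", outliers)
```

Each flagged rack points to a physical discrepancy worth investigating, such as an unsealed blanking panel or a mischaracterized airflow path, and fixing those discrepancies is what drives the model toward higher accuracy.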

We need more testing labs of this kind, applied to all modeling software used in data center management, to assess accuracy and improve best practices. But the high-level lesson is clear: enterprise data centers should use software to improve their operational performance, and the Binghamton work shows the way forward. IT is transforming the rest of the economy; why not use it to transform IT itself?

About the author: Jonathan Koomey is a Research Fellow at the Steyer-Taylor Center for Energy Policy and Finance at Stanford University and is one of the leading international experts on the energy use and economics of data centers.

Sign up for Jonathan Koomey’s online course, Modernizing Enterprise Data Centers for Fun and Profit.

Originally published on Data Center Knowledge 

