CONVERGED SF: IS MODELING THE ANSWER TO STRANDED CAPACITY?
Until about two decades ago, engineers built crude physical airflow models to figure out the best way to design IT equipment. Then the approach evolved, and they started using computer simulation for this kind of modeling, which led to breakneck innovation in the design of electronic equipment.
That is according to Jonathan Koomey, a researcher and entrepreneur whose work largely revolves around the intersection of IT, energy efficiency, and the environment. Koomey is a research fellow at Stanford University and has spent more than 20 years as a researcher at the US Department of Energy's Lawrence Berkeley National Laboratory.
View the video and hear Dr. Koomey make his case firsthand for using predictive modeling to manage data centers in his presentation from DCD: https://dcimnews.wordpress.com/2013/07/22/dr-jonathan-koomey-speaks-about-predictive-data-center-analysis/
Today, the data center industry has reached the point where we know enough about airflow, and computers are powerful enough, to bring the same revolution to data center design. Koomey is speaking at the DatacenterDynamics Converged conference in San Francisco this Friday on the power of using predictive modeling to optimize data center efficiency to unprecedented levels.
A typical modern data center houses servers, network switches, and storage devices, all made by a variety of vendors and with different specifications. "All those things move around in unpredictable ways," Koomey says.
If a data center is designed and built with a certain "idealized" setup in mind, then the moment the operator starts making equipment-placement decisions that diverge from that setup, the facility's power capacity gets fragmented. The result is stranded capacity, which amounts to stranded capital.
If, for example, a data center manager spaces racks slightly farther apart than planned, the facility's usable space and power capacity fall well below what it was designed for.
“Capacity in the data center is like a game of Tetris,” Koomey says. In the data center, however, the blocks you fill the space with are not all identical, so you end up with what can amount to a lot of empty space.
“It can be like a third of your data center CapEx that’s stranded.”
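The Tetris analogy can be made concrete with a toy packing model: heterogeneous rack power draws placed first-fit into fixed-capacity power zones. The loads and zone sizes below are invented for illustration, and first-fit is just one simple placement heuristic:

```python
# A toy "Tetris" packing: mixed rack power draws placed first-fit into
# power zones of fixed capacity. Numbers are invented for illustration.

def pack_first_fit(loads_kw, zone_capacity_kw, num_zones):
    """Place each load into the first zone with enough headroom."""
    used = [0.0] * num_zones
    placed = []
    for load in loads_kw:
        for i in range(num_zones):
            if used[i] + load <= zone_capacity_kw:
                used[i] += load
                placed.append(load)
                break
    return used, placed

loads = [7.0, 5.0, 6.0, 4.0, 7.0, 3.0, 6.0]  # mixed rack draws, kW
used, placed = pack_first_fit(loads, zone_capacity_kw=10.0, num_zones=4)
stranded_kw = 10.0 * 4 - sum(used)  # 8 kW free in aggregate...
# ...but no single zone has 6 kW of headroom, so the last rack cannot
# be placed at all: the free capacity is fragmented, hence "stranded".
```

Because the blocks are not identical, 8 kW of paid-for capacity sits idle in slivers too small for the remaining 6 kW rack to use.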
Predictive modeling, Koomey says, is really the only way to maximize the utilization of existing capacity. Before moving a piece of equipment or introducing a new one to the environment, a computer-generated model can tell the operator what effect that change will have on the environment from a holistic perspective.
Using modeling solutions that represent each piece of IT equipment in the data center, together with power and thermal monitoring and asset-management tools, is the winning combination, he says. Integrating these tools is what enables the "what if?" scenarios that lead to smart decisions and minimize stranded capacity.
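The simplest form of such a "what if?" check might look like the sketch below: before a placement is approved, the proposed equipment's power draw and heat output are compared against the modeled headroom of the target zone. The function, field names, and figures are all hypothetical, standing in for what an integrated DCIM/modeling toolchain would compute:

```python
# Hypothetical "what if?" placement check. The zone model and numbers
# are invented; a real tool would derive them from monitoring data
# and airflow simulation.

def can_place(zone: dict, power_kw: float, heat_kw: float) -> bool:
    """Approve a placement only if both power and cooling headroom remain."""
    return (zone["power_used_kw"] + power_kw <= zone["power_cap_kw"]
            and zone["heat_kw"] + heat_kw <= zone["cooling_cap_kw"])

zone = {"power_used_kw": 42.0, "power_cap_kw": 50.0,
        "heat_kw": 40.0, "cooling_cap_kw": 45.0}

ok_small = can_place(zone, power_kw=4.0, heat_kw=4.0)  # fits both budgets
ok_big   = can_place(zone, power_kw=7.0, heat_kw=7.0)  # power fits, cooling does not
```

The second check is the holistic point: a zone can have electrical headroom yet still be the wrong place for a new server once its thermal effect is modeled.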