Uncovering the Value of Calibrating #DataCenter #CFD Models

Why should you care about engineering simulation and calibration in your data center?

 

Future Facilities has teamed up with the Center for Energy-Smart Electronics Systems (ES2) to uncover the true value of calibrating a virtual facility model from thermal and airflow perspectives.



Your #datacenter can’t change fast enough

Originally posted on DatacenterDynamics

How do you create data centers that can respond to business needs with speed, control and visibility?

Jon Leppard

The speed and resilience of a data center is something we obsess over – making efficiencies and improving performance is what makes working in IT so satisfying. We live for the next tech breakthrough that can process more, store more, or just work harder on less power. Luckily, with Moore’s law (well, roughly), we’ve enjoyed a steady supply of these breakthroughs over the past decade.

But, when the pressure is on to make a change, most data centers are slow. There’s a lot to consider, especially when we are talking about larger facilities with mission critical workloads, multiple systems, failsafes, cooling setups etc. to take into account.

Do we want fast change?


The speed of business is only increasing – the successful launch of in-memory analytics such as SAP HANA proves that businesses are taking competitive advantage not just from big data, but fast data. Time is, more than ever before, money. The trend over the near term appears to be hardware struggling to keep up with the demands placed upon it by new software workloads.

When the pressure therefore inevitably comes down to begin migration to faster, more efficient data centers, the DC team’s ability to control the risk of the process is far too limited. When varied workloads are pushed into the data center by the business, or something needs to change within the facility itself, it poses questions about capacity and resilience which need very precise answers in order to avoid lengthy and costly mistakes.

Can we change fast?

The reality for most data center managers is that answering questions like “do we have the capacity?” or “are we exposing ourselves to too much risk?” relies on a combination of historic trends, experience and a bit of educated guesswork. Therefore, quite rightly, the IT team will appeal to the business for time to assess and get things as close to “right” as possible – who can blame them? The data center is dealing with mission-critical applications, SLAs and customer data; failure is unacceptable.

Right now, the facilities and IT teams in this situation simply can’t change fast. But businesses don’t have the time to wait on the data center. The collective data center industry is doing what it’s done for the past decade – it’s waiting for a technological breakthrough to save the day.

Is there a solution on the horizon?

Yes, there certainly is! New designs for software-defined, homogenized facilities look like they will go a long way towards meeting this need. The only problem is that for most organizations (those without ludicrously large IT budgets), software-defined data centers are between five and 10 years away.

Sadly, that is a little beyond the timelines most businesses are giving their data center teams to implement change.

We therefore need to create data centers that can respond to these business needs with absolute control and visibility, to remove risk from the equation. We call this concept the ‘Fluid Data Center’.

What’s a Fluid Data Center?

Rather than an amazing new cooling system, the Fluid Data Center is a concept in which capacity and risk can be accurately and quickly snapshotted. A Fluid Data Center can “pour” its resources towards either end of this spectrum in the safe knowledge of the impact this will have on either the capacity or the resilience of the entire facility.

It can do this on a case by case basis, and can do this quickly.

It’s achieved by knowing exactly what is happening within a data center right now, and then using advanced engineering simulation tools to map out what the impact of any given change would be – not just in terms of power draw, but on the airflow of a room, the additional strain on a given AC unit and so on, down to the fine detail.

What this tends to result in, aside from happier business teams, is incredibly efficient data centers. At the moment, the only answer to not knowing precisely where a data center’s limits lie is to factor in a healthy safety margin – an extra AC unit or two, in simple cases. A Fluid Data Center has these turned off – bringing down PUE – and is able to communicate smartly with the business that they will be turned back on if X occurs.
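
To make that concrete, here is a rough, hypothetical back-of-the-envelope sketch in Python of how switching off a couple of “safety margin” CRAC units moves PUE. All figures are invented for the arithmetic and are not taken from any real facility:

# Hypothetical illustration: how switching off redundant CRAC units
# affects PUE (PUE = total facility power / IT power). Numbers invented.

IT_LOAD_KW = 500.0          # total IT (server) power draw
CRAC_POWER_KW = 30.0        # electrical draw of one CRAC unit
OTHER_OVERHEAD_KW = 120.0   # UPS losses, lighting, pumps, etc.

def pue(it_kw, crac_units):
    """PUE = total facility power divided by IT power."""
    total_kw = it_kw + OTHER_OVERHEAD_KW + crac_units * CRAC_POWER_KW
    return total_kw / it_kw

print(f"PUE with safety-margin CRACs running (6 units): {pue(IT_LOAD_KW, 6):.2f}")
print(f"PUE with redundant CRACs switched off (4 units): {pue(IT_LOAD_KW, 4):.2f}")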

A Fluid Data Center knows exactly how much juice it has, and the size of the container – and it uses this information to act faster and safer than human predictions could ever achieve.

But the best bit about it is that it’s a solution to a growing problem that is available right now, instead of on the horizon.

Jon Leppard is a director at Future Facilities, a company that specializes in engineering simulation tools.


Future Data Centers are Dynamic…but what about today’s Facilities?


Future models for the data center are highly dynamic: workloads are easily transferred and managed across a highly homogenized facility that is effortlessly orchestrated by sophisticated software. Or at least so the prevailing concepts within the industry would suggest. But what does this mean for today’s data centers – facilities that may be in operation for ten years or more? This question is answered by Matt Warner, Development Manager at Future Facilities.

The Future of Data Centers, Delivered Today

I have a goal for every data center manager: make your facility more responsive, risk-free and predictable. It sounds like a lot to ask, but it’s an achievable goal. I know because I’ve seen a lot of businesses do it.

This vision for better data centers in the here and now is encompassed within what we at Future Facilities call ‘The Fluid Data Center.’ However our industry is not inherently fluid, and there are a great number of data centers out there yet to make this change. Instead, most data centers exist in a different state.

When we speak to IT and facilities managers across our industry, there is a recurring pattern – current facilities are being managed as what we call ‘Static Data Centers.’ Before I explain what this means, it’s important to note that this is not a criticism. The reality is that there are a huge number of pressures in the management of the data center that traditionally have all but demanded that facilities are ‘static.’

So what is a static data center? Typically it’s characterized as a facility in which there is a reluctance to make changes, and where new deployments or reconfigurations of hardware or software are slow and risky. A static facility is usually managed either looking backwards to past trends, or forwards with the educated guesswork of experienced data center professionals. It is all but impossible to make this sort of facility highly flexible while remaining absolutely resilient.

Leaving Static Facilities Behind

These static data centers are in so many ways a legacy. Historically, the incremental purchase and installation of hardware, initiated in response to erratic demands from the business, was the norm. The last ten years or so have seen this situation become far more complex for most businesses. Their data centers are diverse, with a wide variety of installed technologies, frequently leading to fragmentation.

This conspiracy of factors has resulted in two overriding and highly detrimental trends:

  1. Most IT teams have over-provisioned and under-utilized within their data center to safeguard the delivery of compute power for which they’re ultimately responsible. This strategy has been deployed to ensure that they don’t allow their applications to fall over
  2. Facilities teams, similarly, have made their own sacrifices to safeguard the resilience of their mission critical facility. Typically they have over-engineered and over-cooled to minimize downtime

So how do we make a data center more ‘Fluid’? We start by reminding ourselves why – outside of the operational parameters and SLA to which we’ve agreed – we care about making our data center more flexible. Ultimately it comes down to business facilitation.

A Critical Change

The data center is often referred to as a ‘critical facility’ within the organization. Indeed today its role is more important than ever. The business itself is subject to more rapid changes as a result of rapidly advancing technology, demands from consumers for immediate responses and hugely competitive industry landscapes. This is an era of unparalleled interaction between lines of business and technology, and the demands on the data center are only increasing. Hence the idea of a ‘safe but static’ data center (which in honesty was never a reality anyway) becomes anachronistic. There’s just no place for it today.

What’s more, in the face of this more dynamic demand, budget pressures on IT and Facilities remain high. A static data center configuration is wasteful of resources, and the resulting high cost of compute ($/W) translates into a correspondingly high cost of delivering business outcomes. These sorts of difficulties may not arise in the cutting edge of Marketing discussions or at the latest Sales conference, but rest assured that the CIO and the Board upon which he sits are looking at those numbers with watchful eyes.

Becoming Fluid

Gone is any remnant of “keeping the lights on at any cost.” Today the mission is “optimizing how to keep the lights on.” And optimizing means combining both performance and risk management.

On an operational level, an efficient data center is one where power and cooling supplied by the Facility balances the IT demand. In more commercial terms, the data center must also have the ability to maintain this balance while being completely flexible to the needs of the business. That means changing things – often, and with no risk of downtime.

In the current model of the data center, this doesn’t work, primarily because of a decision-making gap between IT and Facilities. The issue lies in the fact that both are currently operating independently. Many organizations have deployed DCIM technology with the goal of crossing the data and process gaps that are found within any data center facility. This is a positive step, but it doesn’t cover all bases. In fact, in the majority of facilities today the operator makes decisions without any clear insight into the engineering impact they may have on the other side of the gap. In other words, IT doesn’t know how it will affect Facilities, and vice versa.

Data centers today therefore suffer from an ‘engineering gap.’

We Must Close the Engineering Gap

This engineering gap isn’t just a nice buzzword. It’s a real thing. And a real problem too. Within almost any facility you choose to inspect, you’ll find this gap, and it’s exposing these organizations to risk of:

  • A loss in business performance – expressed as a loss of hardware Availability
  • Wasted CapEx – the direct result of stranded Capacity
  • An unnecessary increase in OpEx – occurring due to loss of Cooling Efficiency

The end point for the business is that these issues result in Increased Cost.

You can take solace, however, from the fact that many organizations have overcome these issues. There does NOT have to be an engineering gap. While we cannot map all of the complex processes at play in the data center on a scrap of paper, in our heads or even in a traditional DCIM tool, we can predict them using engineering simulation. Through engineering simulation it’s possible to create a 3D model of the data center, simulate the power systems and run computational fluid dynamics modelling to predict cooling.
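
The full treatment of “predict cooling” is CFD, but the underlying physics can be hinted at with a much simpler lumped heat balance. The sketch below is a deliberate simplification with invented rack data: it estimates each rack’s exhaust temperature from its power draw and airflow using ΔT = P / (ṁ · cp), which is the kind of relationship a CFD model resolves in full 3D detail rather than per rack:

# Simplified lumped heat balance per rack: deltaT = P / (m_dot * cp).
# A real CFD model resolves the room airflow; this sketch only
# illustrates the physics. All rack data below are hypothetical.

AIR_DENSITY = 1.2       # kg/m^3, approximate for data center conditions
AIR_CP = 1005.0         # J/(kg*K), specific heat of air
SUPPLY_TEMP_C = 20.0    # CRAC supply air temperature

racks = {
    # name: (IT power in watts, airflow in m^3/s through the rack)
    "rack-A1": (5_000, 0.55),
    "rack-A2": (12_000, 0.80),   # a high-density deployment
    "rack-B1": (3_500, 0.40),
}

for name, (power_w, airflow_m3s) in racks.items():
    mass_flow = AIR_DENSITY * airflow_m3s        # kg/s of air
    delta_t = power_w / (mass_flow * AIR_CP)     # temperature rise in K
    exhaust_c = SUPPLY_TEMP_C + delta_t
    print(f"{name}: exhaust approx {exhaust_c:.1f} C (rise of {delta_t:.1f} K)")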

Example: Deploying High Density IT & Engineering impact

Change – Made Safe

This process of engineering simulation gives the data center manager what seems like a magical tool: a safe, off-line environment in which to explore and test the changes required within the data center. So when the demands of the business come flooding in, resulting in huge variation in the workloads being handled, the data center can perform with absolute resilience. Change is no longer the enemy.

And so we have introduced one of the great challenges of our industry today, along with a goal that any of us can achieve. It’s also important to remember that this isn’t just a vision of the future; it’s a vision for today, and we call it The Fluid Data Center.

First posted on Data Centre Network 


Applying the Scientific Method in Data Center Management


Data center management isn’t easy. Computing deployments change daily, airflows are complicated, misplaced incentives cause behaviors that are at odds with growing company profits, and most enterprise data centers lag far behind their cloud-based peers in utilization and total cost of ownership.

One reason why big inefficiencies persist in enterprise data centers is inattention to what I call the three pillars of modern data center management: tracking (measurement and inventory control), developing good procedures, and understanding physical principles and engineering constraints.

Another is that senior management is often unaware of the scope of these problems. For example, a recent study I conducted in collaboration with Anthesis and TSO Logic showed that 30 percent of servers included in our data set were comatose: using electricity but delivering no useful information services. The result is tens of billions of dollars of wasted capital in enterprise data centers around the world, a result that should alarm any C-level executive. But little progress has been made on comatose servers since the problem first surfaced years ago as the target of the Uptime Institute’s server roundup.

Read more: $30B Worth of Idle Servers Sit in Data Centers

One antidote to these problems is to bring the scientific method to data center management. That means creating hypotheses, experimenting to test them, and changing operational strategies accordingly, in an endless cycle of continuous improvement. Doing so isn’t always easy in the data center, because deploying equipment is expensive, and experimentation can be risky.

Is there a way to experiment at low risk and modest cost in data centers? Why yes, there is. As I’ve discussed elsewhere, calibrated models of the data center can be used to test the effects of different software deployments on airflow, temperatures, reliability, electricity use, and data center capacity. In fact, using such models is the only accurate way to assess the effects of potential changes in data center configuration on the things operators care about, because the systems are so complex.

Recently, scientists at the State University of New York at Binghamton created a calibrated model of a 41-rack data center to test how accurately one type of software (6SigmaDC) could predict temperatures in that facility and to create a test bed for future experiments. The scientists can configure the data center easily, without fear of disrupting mission critical operations, because the setup is solely for testing. They can also run different workloads to see how those might affect energy use or reliability in the facility.

Read more: Three Ways to Get a Better Data Center Model

Most enterprise data centers don’t have such flexibility, but they can cordon off sections of their facility as a test bed, as long as they have sufficient scale. For most enterprises, such direct experimentation is impractical. What almost all of them can do is create a calibrated model of their facility and run the experiments in software.

What the Binghamton work shows is that experimenting in code is cheaper, easier, and less risky than deploying physical hardware, and just about as accurate (as long as the model is properly calibrated). In their initial test setup, they reliably predicted temperatures with just a couple of outliers for each rack, and those results could no doubt be improved with further calibration. They were able to identify the physical reasons for the differences between modeling results and measurements, and once identified, the path to a better and more accurate model is clear.
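
At its core, a calibration exercise like this comes down to comparing measured and predicted temperatures and deciding whether the residuals are acceptable. The following is a minimal sketch of that comparison; the sensor readings and the 2 °C tolerance are made up for illustration and are not taken from the Binghamton study:

# Compare measured rack-inlet temperatures against model predictions and
# flag outliers. Sensor readings and the 2 C tolerance are hypothetical.

measured = {"rack-01": 22.1, "rack-02": 24.8, "rack-03": 21.5, "rack-04": 27.9}
predicted = {"rack-01": 22.6, "rack-02": 24.1, "rack-03": 21.7, "rack-04": 24.5}
TOLERANCE_C = 2.0

residuals = {r: predicted[r] - measured[r] for r in measured}
outliers = {r: d for r, d in residuals.items() if abs(d) > TOLERANCE_C}

mean_abs_error = sum(abs(d) for d in residuals.values()) / len(residuals)
print(f"Mean absolute error: {mean_abs_error:.2f} C")
for rack, diff in sorted(outliers.items()):
    print(f"Outlier {rack}: model is off by {diff:+.1f} C -> investigate "
          "(missing blanking panels, mis-specified flow rates, etc.)")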

We need more testing labs of this kind, applied to all modeling software used in data center management, to assess accuracy and improve best practices. But the high-level lesson is clear: enterprise data centers should use software to improve their operational performance, and the Binghamton work shows the way forward. IT is transforming the rest of the economy; why not use it to transform IT itself?

About the author: Jonathan Koomey is a Research Fellow at the Steyer-Taylor Center for Energy Policy and Finance at Stanford University and is one of the leading international experts on the energy use and economics of data centers.

Sign up for Jonathan Koomey’s online course, Modernizing Enterprise Data Centers for Fun and Profit. More details below.

Originally published on Data Center Knowledge 

 


Belden to Offer Future Facilities CFD Modeling for Data Centers


Software assesses data center impacts to mitigate risk and increase efficiency

St. Louis, Missouri – March 8, 2016 – Belden Inc., a global leader in signal transmission solutions for mission-critical applications, announces new simulation software capability to assist data center managers with current and future data center needs.

Through engineering simulation, Belden uses CFD (computational fluid dynamics) modeling to explain to data center managers the impacts of airflow and thermal management on data centers and energy efficiency. With this engineering simulation, Belden helps customers bridge IT and facilities; facility engineers can visualize and provision power and cooling, and IT planners can deploy IT systems where power and cooling distribution are available.

Using 3D and 2D renderings, this software conducts CFD analysis for predictive modeling, what-if and capacity planning. It also provides information on the current state of a data center, as well as predictions about the potential future state of a data center. The best way to design and operate the space can be determined with this information before equipment is actually put in place.

By making CFD modeling part of the data center workflow, users can:

  • Reduce capital expenditures and improve data center design
  • Predict thermal and electrical resilience of each IT device
  • Run failure scenarios to confirm that resiliency will be maintained (see the sketch after this list)
  • Compare different vendor choices
  • Predict, plan, identify and prevent hotspots
  • Increase usable capacity without sacrificing uptime
  • Validate thermal solutions
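
As a rough illustration of the failure-scenario point above, the check is essentially an N+1 calculation run once per cooling unit. The sketch below uses invented unit capacities and IT load; a CFD-based tool goes further by predicting where the air actually goes, which a simple capacity sum cannot:

# Simulate the failure of each cooling unit in turn and check whether the
# remaining units can still absorb the IT heat load. Figures are hypothetical.

crac_capacity_kw = {"CRAC-1": 90, "CRAC-2": 90, "CRAC-3": 90, "CRAC-4": 60}
it_heat_load_kw = 250

for failed in crac_capacity_kw:
    remaining = sum(cap for name, cap in crac_capacity_kw.items() if name != failed)
    status = "OK" if remaining >= it_heat_load_kw else "AT RISK"
    print(f"{failed} fails -> {remaining} kW of cooling left for "
          f"{it_heat_load_kw} kW of IT load: {status}")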

“We’re excited to offer new technology that empowers data center managers to make decisions based on engineering simulation,” says Michael Peterson, technology and applications manager, data centers, Belden. “When we can work with clients to tell them whether a planned change can be made without negatively impacting uptime, energy efficiency or productivity, they’ll be able to avoid critical events and make choices more confidently.”

To offer this capability, Belden partners with Future Facilities and its suite of data center products. Belden and Future Facilities work together to offer a deeper level of understanding about data centers. “This offering will bring new levels of productivity to data center design, troubleshooting and operations for Belden customers. We’re excited to partner with Belden to offer data center managers best-in-class CFD and engineering simulation,” says Robert Schmidt, sales manager at Future Facilities.

Only Belden’s innovative enterprise connectivity solutions take a universal approach to customers’ enterprises, resolving signal transmission needs with IP- and legacy-based solutions that enable a smooth migration to convergence. Belden’s extensive portfolio spans LAN, data centers, building automation and security and access control to keep information running smoothly. Outstanding global service and support capabilities and application-specific warranty programs complete Belden’s unique offering.

About Belden

Belden Inc., a global leader in high-quality, end-to-end signal transmission solutions, delivers a comprehensive product portfolio designed to meet the mission-critical network infrastructure needs of industrial, enterprise and broadcast markets. With innovative solutions targeted at reliable and secure transmission of rapidly growing amounts of data, audio and video needed for today’s applications, Belden is at the center of the global transformation to a connected world. Founded in 1902, the company is headquartered in St. Louis and has manufacturing capabilities in North and South America, Europe and Asia. For more information, visit us at http://www.belden.com; follow us on Twitter @BeldenInc

About Future Facilities

For more than a decade, Future Facilities has provided engineering simulation software and consultancy services to the world’s largest data center owner-operators and the industry’s leading consultancies. With global offices, its software and services are relied on to deliver unique insight into the current and future performance of mission-critical data centers. Additional information can be found at http://www.futurefacilities.com; follow us on Twitter @6SigmaDC

http://www.connectionsplus.ca/belden-launches-cfd-modeling-offering-data-centres/1002878689/


The Fluid #DataCenter : Removing fear from the data center


Jon Leppard, Director of Future Facilities, believes that there is a real risk that our industry will be overcome by fear in the coming five years.

It’s not often we use the word ‘fear’ when talking about the mechanical and engineering-oriented world of the data center. But the reality is that the data center industry has become a market riddled with the fear of things going beyond our control. The solution, I believe, is to reconceptualize how we manage our data centers – to achieve something altogether more responsive, risk-free and predictable. I call this the ‘Fluid Data Center’. However achieving the Fluid Data Center requires us to address a few challenges which have become the norm.

Fighting the fear factor

The data center is usually called a ‘critical facility,’ and we should remember why this is – its goal is to answer demands for compute capability (applications, databases etc) as and when the business needs it. Organizations today are subject to more rapid processes of change, be that technological or commercial, than ever before. This means that the demands they put on their support infrastructure have become equally exposed to the challenge of rapid change.

For the businesses with which I’ve worked, this has been a major driver of ‘fear’ – and it continues to be so for most of today’s data centre managers. The envisioned facilities of the future – software-defined data centers (SDDCs), more homogenized facilities and highly adaptable containerized pods – will all be more adept at, and attuned to, responding to this intense type of workload change. But in the here and now, we have a situation where most organizations are years away from realizing these future data center designs, irrespective of how much planning or discussion may be underway.

The reality on the ground is that current facilities are managed as what we call ‘Static Data Centers.’ These are a legacy of a period where incremental purchase and installation of units of compute was the norm.

In the last decade we’ve seen this picture become far more complex in terms of the diversity of installed technologies, and fragmentation is all too common. Managing risk has therefore become a process of tension between IT and Facilities:

• IT teams have over-provisioned and under-utilised to ensure that they don’t allow their applications to fall over.
• Facilities teams have over-engineered and over-cooled to minimise downtime.

An inflection point in the data center
The question is – why does this really matter? What does this mean to the business? I believe we have reached an inflection point, beyond which there is certainly no going back, and see two reasons for data center (and business) management to pay attention:

  1. The variability and frequency of change in the workloads imposed by the business on critical data center facilities is now often quite extreme. We’re in an unparalleled era of interaction between lines of business (LoBs) and technology, and their demands are only increasing.

  2. The high cost of compute ($/W) means that there is a correspondingly high cost of delivering business outcomes. LoBs probably don’t think about the implications of the decision to launch a new app or create a new database in technology terms. But the CIO and his companions on the Board will be acutely aware of the vast sinkhole of expense that the IT infrastructure has become.

This visibility and pressure – from above and from the LoBs – has introduced new ‘fear lines’ for Facilities and IT teams. These are forcing data center managers to work within tighter margins, pushing them from a mode of “keeping the lights on at any cost” to one of “optimizing how to keep the lights on.” So a new challenge exists: “How can we re-balance the see-saw without introducing risk?”


Rebalancing the see-saw
An efficient data center is one where power and cooling supplied by the Facility balances the IT demand, but inherent within this is the need to manage change. Unless you can remove the fear of change, and make it a risk-free, easy-to-deliver process, you will remain in a Static Data Center.

At the root of this problem is the decision-making gap between IT and the Facility. Although seen by the business as a single data center that delivers the desired outcomes, under the bonnet we all know that we have two silos trying to keep the scales in balance. The issue lies in the fact that both are currently operating independently. Bringing IT and the Facility together is a necessity if the operator is to balance supply and demand.

Many organizations have started to invest in DCIM, and the right match of tools can significantly help to cross the data and process gaps that are found within any data center facility. But the use of DCIM alone creates a knowledge gap which is central to the fear that is growing within our industry. The majority of facilities today are managed with the operator making decisions without knowledge of the engineering impact they may have on the other side of the gap.

Bridging the engineering gap
The implications of failing to understand this engineering gap are significant – not just at a bits-and-bytes level, but on an operational and commercial basis for the business. Put simply, if you do not understand the engineering implications of changes being made within the data center – and remember, we are all having to deal with increasing rates of change – you risk:

• Loss of hardware availability – which can be realized as a loss of business
• Stranded capacity – which in turn leads to wasted CAPEX
• Loss of cooling efficiency – which results in increased OPEX

Ultimately, all of this leads to increased cost.
The vital point to note here is that there is no reason for the engineering gap to remain open. In many ways we can boil the data center down to the physics that operates within it: a series of many, many calculations that represent the mechanical, electrical and engineering processes in operation. Obviously we cannot calculate all of these by hand, and our mental conceptualizations of the facility will therefore inevitably fail. But the physics that operates within the data center can be predicted – by using engineering simulation.

Engineering simulation comprises:

• 3D modelling to represent the data center
• Power system simulation (PSS)
• Computational fluid dynamics (CFD) to predict cooling

When brought together under the collective function of engineering simulation, these provide a safe, off-line environment in which to test the changes required within the data center. You can therefore respond to the demands of the business, having tested all the permutations of those changes in an exact replica of your data center. This is what I call the Fluid Data Center.
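
To hint at what the power-system side of such a simulation checks, here is a hedged sketch with a single PDU and invented ratings. Real power system simulation models the whole chain from utility feed to rack, including failure states; this only shows the basic headroom test applied before a deployment:

# Check whether a proposed rack can be added to a PDU without exceeding a
# derated planning limit. Ratings and loads below are hypothetical.

PDU_RATED_KW = 40.0
DERATING = 0.8                      # only plan to 80% of the rated capacity
existing_rack_loads_kw = [4.2, 6.8, 5.5, 7.1]
proposed_rack_kw = 9.0

usable_kw = PDU_RATED_KW * DERATING
projected_kw = sum(existing_rack_loads_kw) + proposed_rack_kw

if projected_kw <= usable_kw:
    print(f"OK: {projected_kw:.1f} kW of {usable_kw:.1f} kW usable "
          f"({usable_kw - projected_kw:.1f} kW of headroom left)")
else:
    print(f"Do not deploy here: {projected_kw:.1f} kW exceeds the "
          f"{usable_kw:.1f} kW planning limit")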

In summary, the Fluid Data Center concept means operating a data center that is as fluid as the business it serves. Best of all, this approach doesn’t rely on delivering future visions of the data center environment. It’s entirely applicable to today’s heterogeneous facilities (though it’s also relevant for the design of your future data centers).

By creating a Fluid Data Center, IT and Facilities teams can work together to respond to the constantly changing needs of the business, without fear.

Read more at the original source: http://www.publishing.ninja/V2/page/1909/119/19/1


#DCIM Correct use of CFD & Engineering Simulation for #DataCenter Operation


CFD and engineering simulation pave the way for a predictive deployment process that exploits all of the good data from DCIM, but puts the operator on the front foot. It doesn’t even require a significant departure from current working practices:

  1. Predict – Predict the impact of the proposed change using the verified computer model.
  2. Decide – Use the results to choose the best deployment location based on operational considerations.
  3. Deploy – Install the equipment and power it up.
  4. Monitor – Watch the live data from the facility for any problems.

DCIM and Monitoring are still an integral part of the process, but they now work together with CFD and Engineering Simulation to give data center operators complete visibility into the state of their data center, now and in the future.
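
Sketched as pseudologic, the four steps above might look like the following. The function names and the stub model are hypothetical placeholders standing in for a simulation tool and a monitoring feed; this is not any product’s actual API:

# Hedged sketch of the Predict -> Decide -> Deploy -> Monitor loop.
# All functions and the stub model are hypothetical placeholders.

def predict(model, change):
    """Run the calibrated model with the proposed change applied."""
    return model.simulate(change)   # predicted temperatures, loads, etc.

def decide(results, limits):
    """Accept the deployment only if every prediction is within limits."""
    return all(results[k] <= limits[k] for k in limits)

def deploy(change):
    print(f"Deploying: {change['description']}")

def monitor(live_feed, limits):
    """Compare live sensor data against the same limits used in the prediction."""
    return {k: v for k, v in live_feed.items() if v > limits.get(k, float("inf"))}

class _StubModel:
    """Stand-in for a calibrated simulation model."""
    def simulate(self, change):
        return {"max_inlet_temp_c": 25.0, "pdu_load_kw": 30.5}

limits = {"max_inlet_temp_c": 27.0, "pdu_load_kw": 32.0}
change = {"description": "add 8 kW blade chassis to rack B4"}

results = predict(_StubModel(), change)
if decide(results, limits):
    deploy(change)
    print("Alarms:", monitor({"max_inlet_temp_c": 25.6, "pdu_load_kw": 30.9}, limits))
else:
    print("Choose a different location and re-run the prediction.")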

 

Take a look at this recent series by Ehsaan Farsimadan, CEng PhD

DCIM Part 3: Using CFD Alongside DCIM

Read the full article for Part 3 at http://www.datacenterjournal.com/24629-2/

This article is part 3 of the three-part series examining the main challenges of acquiring, implementing and using data center infrastructure management (DCIM). Part 1 presented a broad review on the different functions of DCIM in light of the operational challenges in the data center. Part 2 presented a possible method to expand DCIM from a data center management tool to manage IT, capacity, energy and cost. This final part addresses the applications of computational fluid dynamics (CFD) alongside DCIM.

 
