The Fluid #DataCenter: Removing fear from the data center


Jon Leppard, Director of Future Facilities, believes there is a real risk that our industry will be overcome by fear in the coming five years.

It’s not often we use the word ‘fear’ when talking about the mechanical and engineering-oriented world of the data center. But the reality is that the data center industry has become a market riddled with the fear of things moving beyond our control. The solution, I believe, is to reconceptualize how we manage our data centers – to achieve something altogether more responsive, risk-free and predictable. I call this the ‘Fluid Data Center’. However, achieving the Fluid Data Center requires us to address a few challenges that have become the norm.

Fighting the fear factor

The data center is usually called a ‘critical facility,’ and we should remember why this is – its goal is to answer demands for compute capability (applications, databases, etc.) as and when the business needs it. Organizations today are subject to more rapid processes of change, be they technological or commercial, than ever before. This means that the demands they place on their support infrastructure have become equally exposed to the challenge of rapid change.

For the businesses with which I’ve worked, this has been a major driver of ‘fear’ – and it continues to be so for most of today’s data center managers. Visions of software-defined data centers (SDDCs), of more homogenized facilities and highly adaptable containerized pods, all promise infrastructure better able to respond to this intense pace of workload change. But in the here and now, most organizations are years away from realizing these future data center designs, irrespective of how much planning or discussion may be underway.

The reality on the ground is that current facilities are managed as what we call ‘Static Data Centers.’ These are a legacy of a period where incremental purchase and installation of units of compute was the norm.

In the last decade we’ve seen this picture become far more complex in terms of the diversity of installed technologies, and fragmentation is all too common. Managing risk has therefore become a process of tension between IT and Facilities:

IT teams have over-provisioned and under-utilized to ensure that they don’t allow their applications to fall over
Facilities teams have over-engineered and over-cooled to minimize downtime


An inflection point in the data center
The question is – why does this really matter? What does this mean to the business? I believe we have reached an inflection point, beyond which there is certainly no going back, and see two reasons for data center (and business) management to pay attention:

The variability and frequency of change in the workloads imposed by the business on critical data center facilities is now often quite extreme. We’re in an unparalleled era of interaction between lines of business (LoBs) and technology, and their demands are only increasing.

The high cost of compute ($/W) means that there is a correspondingly high cost of delivering business outcomes. LoBs probably don’t think in technology terms about the implications of a decision to launch a new app or create a new database. But the CIO and the rest of the Board will be acutely aware of the vast sinkhole of expense that the IT infrastructure has become.
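To make that cost pressure concrete, here is a rough back-of-the-envelope sketch; the PUE and electricity tariff are assumed figures chosen purely for illustration, not numbers from any particular facility.

```python
# Illustrative only: PUE and tariff are assumptions, not figures from the article.
IT_LOAD_KW = 1.0            # one kilowatt of IT load
PUE = 1.8                   # assumed power usage effectiveness of the facility
TARIFF_USD_PER_KWH = 0.10   # assumed electricity price
HOURS_PER_YEAR = 8760

annual_kwh = IT_LOAD_KW * PUE * HOURS_PER_YEAR
annual_cost_usd = annual_kwh * TARIFF_USD_PER_KWH
print(f"~{annual_kwh:,.0f} kWh/year, ~${annual_cost_usd:,.0f}/year per kW of IT load")
# -> ~15,768 kWh/year, ~$1,577/year per kW of IT load
```

Multiply that by hundreds of kilowatts of over-provisioned, under-utilized load and the Board’s attention is easy to understand.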

This visibility, and the pressure from above and from LoBs, has introduced new ‘fear lines’ for Facilities and IT teams. These are forcing data center managers to work within tighter margins, pushing them from a mode of “keeping the lights on at any cost” to “optimizing how to keep the lights on.” So, a new challenge exists: “How can we rebalance the seesaw without introducing risk?”


Rebalancing the seesaw
An efficient data center is one where the power and cooling supplied by the Facility balance the IT demand, but inherent within this is the need to manage change. Unless you can remove the fear of change, and make it a risk-free, easy-to-deliver process, you will remain in a Static Data Center.

At the root of this problem is the decision-making gap between IT and the Facility. Although seen by the business as a single data center that delivers the desired outcomes, under the bonnet we all know that we have two silos trying to keep the scales in balance. The issue lies in the fact that both currently operate independently. Bringing IT and the Facility together is a necessity if the operator is to balance supply and demand.

Many organizations have started to invest in DCIM, and the right match of tools can significantly help to cross the data and process gaps found within any data center facility. But DCIM alone leaves a knowledge gap, and that gap is central to the fear that is growing within our industry. The majority of facilities today are managed with operators making decisions without knowledge of the engineering impact those decisions may have on the other side of the gap.

Bridging the engineering gap
The implications of failing to understand this engineering gap are significant, not just at the bits-and-bytes level but on an operational and commercial basis for the business. Put simply, if you do not understand the engineering implications of changes being made within the data center – and remember we are all having to deal with increasing rates of change – you risk:
Loss of hardware availability – which can be realized as a loss of business
Stranded capacity – which in turn leads to wasted CAPEX
Loss of cooling efficiency – which results in increased OPEX
And ultimately all of this leads to increased cost

The vital point to note here is that there is no reason for the engineering gap to remain open. In many ways we can boil the data center down to the physics that operates within it. It’s a series of many, many calculations that represent the mechanical, electrical and engineering processes in operation. Obviously we cannot calculate all of these by hand, and therefore our mental conceptualizations of the facility will inevitably fail. But the physics that operates within the data center can be predicted – by using engineering simulation.
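As a minimal illustration (and only that), one of the simplest of those calculations is the sensible-heat balance that links rack power, airflow and air temperature rise; the rack figures below are hypothetical, and a real facility model couples thousands of such relationships, which is exactly why it needs to be simulated rather than worked out by hand.

```python
# Minimal sketch of one physical relation a simulation resolves:
# the sensible heat balance dT = P / (rho * cp * Q) for air passing through a rack.
# The rack power and airflow values are hypothetical illustration figures.
RHO_AIR = 1.2     # air density, kg/m^3 (approx. at room temperature)
CP_AIR = 1005.0   # specific heat capacity of air, J/(kg*K)

def exhaust_temp_rise(power_w: float, airflow_m3_s: float) -> float:
    """Temperature rise (K) of the cooling air as it passes through a rack."""
    return power_w / (RHO_AIR * CP_AIR * airflow_m3_s)

# A 10 kW rack fed with 1.0 m^3/s of cold air heats that air by roughly 8.3 K.
print(f"{exhaust_temp_rise(10_000, 1.0):.1f} K")
```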

Engineering simulation comprises:

3D Modelling to represent the data center
Power system simulation (PSS)
Computational fluid dynamics (CFD) to predict cooling
When brought together under the collective function of engineering simulation, these provide a safe, off-line environment in which to test the changes required within the data center. You can therefore respond to the demands of the business, having tested all the permutations of those changes in an exact replica of your data center. This is what I call the Fluid Data Center.
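To make that test-before-you-touch-the-live-facility workflow tangible, here is a deliberately toy sketch of a pre-change check; the zone name, capacities and loads are invented for the example, and a real engineering simulation would model the actual airflow and power paths rather than compare headline numbers.

```python
# Toy pre-change check, nowhere near the fidelity of real CFD/PSS modelling:
# would adding a rack push a zone past its assumed power or cooling limits?
# All figures are invented for illustration.
ZONES = {
    "row_a": {"power_kw": 120.0, "cooling_kw": 110.0,
              "used_power_kw": 96.0, "used_cooling_kw": 88.0},
}

def change_is_safe(zone: str, added_it_kw: float) -> bool:
    z = ZONES[zone]
    power_ok = z["used_power_kw"] + added_it_kw <= z["power_kw"]
    cooling_ok = z["used_cooling_kw"] + added_it_kw <= z["cooling_kw"]
    return power_ok and cooling_ok

print(change_is_safe("row_a", 10.0))  # True: 106 kW power and 98 kW cooling stay within limits
print(change_is_safe("row_a", 25.0))  # False: 121 kW power and 113 kW cooling both exceed limits
```

The value lies in the workflow rather than the arithmetic: every proposed change is rehearsed against a model of the facility before anyone touches the live environment.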

In summary, the Fluid Data Center concept means operating a data center that is as fluid as the business it serves. Best of all, this approach doesn’t rely on delivering future visions of the data center environment. It’s entirely applicable to today’s heterogeneous facilities (though it’s also relevant for the design of your future data centers).

By creating a Fluid Data Center, IT and Facilities teams can work together to respond to the constantly changing needs of the business, without fear.

(C) – Read more at the original source:
http://www.publishing.ninja/V2/page/1909/119/19/1
