Future models for the data center are highly dynamic: workloads are easily transferred and balanced across a highly homogenized facility that is effortlessly managed by sophisticated software. Or at least, so the prevailing concepts within the industry would suggest. But what does this mean for today’s data centers – facilities that may be in operation for ten years or more? This question is answered by Matt Warner, Development Manager at Future Facilities.
The Future of Data Centers, Delivered Today
I have a goal for every data center manager: make your facility more responsive, risk-free and predictable. It sounds like a lot to ask, but it’s an achievable goal. I know because I’ve seen a lot of businesses do it.
This vision for better data centers in the here and now is encompassed within what we at Future Facilities call ‘The Fluid Data Center.’ However, our industry is not inherently fluid, and there are a great number of data centers out there yet to make this change. Instead, most data centers exist in a different state.
When we speak to IT and facilities managers across our industry, there is a recurring pattern – current facilities are being managed as what we call ‘Static Data Centers.’ Before I explain what this means, it’s important to note that this is not a criticism. The reality is that there are a huge number of pressures in the management of the data center that traditionally have all but demanded that facilities are ‘static.’
So what is a static data center? Typically it’s characterized as a facility in which there is a reluctance to make changes, and where new deployments or reconfigurations of hardware or software are slow and risky. A static facility is usually managed either looking backwards to past trends, or forwards with the educated guesswork of experienced data center professionals. It is all but impossible to make this sort of facility highly flexible while remaining absolutely resilient.
Leaving Static Facilities Behind
These static data centers are in so many ways a legacy. Historically, the incremental purchase and installation of hardware, initiated in response to erratic demands from the business, was the norm. The last ten years or so have seen this situation become far more complex for most businesses. Their data centers are diverse, with a wide variety of installed technologies, frequently leading to fragmentation.
This conspiracy of factors has resulted in two overriding, and highly detrimental, trends:
- Most IT teams have over-provisioned and under-utilized within their data center to safeguard the delivery of the compute power for which they’re ultimately responsible – a strategy deployed to ensure that their applications never fall over
- Facilities teams, similarly, have made their own sacrifices to safeguard the resilience of their mission critical facility. Typically they have over-engineered and over-cooled to minimize downtime
So how do we make a data center more ‘Fluid’? We start by reminding ourselves why – outside of the operational parameters and SLA to which we’ve agreed – we care about making our data center more flexible. Ultimately it comes down to business facilitation.
A Critical Change
The data center is often referred to as a ‘critical facility’ within the organization. Indeed today its role is more important than ever. The business itself is subject to more rapid changes as a result of rapidly advancing technology, demands from consumers for immediate responses and hugely competitive industry landscapes. This is an era of unparalleled interaction between lines of business and technology, and the demands on the data center are only increasing. Hence the idea of a ‘safe but static’ data center (which in honesty was never a reality anyway) becomes anachronistic. There’s just no place for it today.
What’s more, in the face of this more dynamic demand, budget pressures on IT and Facilities remain high. A static data center configuration is wasteful of resources, so the cost of compute ($/W) is high, and that translates into a correspondingly high cost of delivering business outcomes. These sorts of difficulties may not arise in the cutting edge of Marketing discussions or at the latest Sales conference, but rest assured that the CIO and the Board are looking at those numbers with watchful eyes.
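To make the $/W point concrete, here is a minimal sketch (using entirely hypothetical figures, not data from any real facility) of how under-utilization inflates the effective cost of delivered compute:

```python
# Illustrative only: low utilization inflates the effective cost of
# delivered compute. Figures are hypothetical.
def effective_cost_per_watt(capex_per_watt: float, utilization: float) -> float:
    """Cost per *useful* watt when only a fraction of provisioned capacity does work."""
    if not 0 < utilization <= 1:
        raise ValueError("utilization must be in (0, 1]")
    return capex_per_watt / utilization

# A facility built at $10/W but run at 40% utilization effectively
# pays $25 for every watt of useful compute it delivers.
print(effective_cost_per_watt(10.0, 0.40))  # -> 25.0
```

The arithmetic is trivial, but that is the point: every stranded watt of over-provisioned capacity is paid for whether or not it ever delivers a business outcome.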
Gone is any remnant of “keeping the lights on at any cost.” Today the mission is “optimizing how to keep the lights on.” And optimizing means combining both performance and risk management.
On an operational level, an efficient data center is one where power and cooling supplied by the Facility balances the IT demand. In more commercial terms, the data center must also have the ability to maintain this balance while being completely flexible to the needs of the business. That means changing things – often, and with no risk of downtime.
In the current model of the data center, this doesn’t work, primarily because of a decision-making gap between IT and Facilities. The issue lies in the fact that both are currently operating independently. Many organizations have deployed DCIM technology with the goal of crossing the data and process gaps that are found within any data center facility. This is a positive step, but it doesn’t cover all bases. In fact, most facilities today are managed such that the operator makes decisions without any clear insight into the engineering impact those decisions may have on the other side of the gap. In other words, IT doesn’t know how it will affect Facilities, and vice versa.
Data centers today therefore suffer from an ‘engineering gap.’
We Must Close the Engineering Gap
This engineering gap isn’t just a nice buzzword. It’s a real thing. And a real problem too. Within almost any facility you choose to inspect, you’ll find this gap, and it’s exposing these organizations to risk of:
- A loss in business performance – expressed as a loss of hardware Availability
- Wasted CapEx – the direct result of stranded Capacity
- An unnecessary increase in OpEx – occurring due to loss of Cooling Efficiency
The end point for the business is that all of these issues result in increased cost.
You can take solace, however, from the fact that many organizations have overcome these issues. There does NOT have to be an engineering gap. While we cannot map all of the complex processes at play in the data center on a scrap of paper, in our heads or even in a traditional DCIM tool, we can predict them using engineering simulation. Through engineering simulation it’s possible to create a 3D model of the data center, simulate the power systems and run computational fluid dynamics modelling to predict cooling.
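To give a flavour of the physics such simulation rests on, here is a minimal sketch of a single-rack heat balance. Real CFD tools solve full 3D airflow fields; this hypothetical one-equation model only estimates a rack’s exhaust temperature from the standard air heat-capacity relation ΔT = P / (ρ · c_p · Q):

```python
# Minimal sketch of the heat balance underlying CFD-based cooling
# prediction. Constants are standard approximate values for air.
RHO_AIR = 1.2     # air density, kg/m^3 (approx., near room temperature)
CP_AIR = 1005.0   # specific heat of air, J/(kg*K)

def exhaust_temp(inlet_c: float, power_w: float, airflow_m3s: float) -> float:
    """Exhaust temperature: inlet plus the rise dT = P / (rho * cp * Q)."""
    delta_t = power_w / (RHO_AIR * CP_AIR * airflow_m3s)
    return inlet_c + delta_t

# A 10 kW rack fed 0.8 m^3/s of 22 C air exhausts at roughly 32 C.
print(round(exhaust_temp(22.0, 10_000, 0.8), 1))  # -> 32.4
```

A real simulation also has to predict where that hot exhaust air goes next – recirculation back into inlets is exactly the kind of engineering impact that is invisible without modelling.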
Example: Deploying High Density IT & Engineering impact
Change – Made Safe
This process of engineering simulation gifts to the data center manager an apparently magical tool. It gives them a safe, off-line environment in which they can explore and test the changes required within the data center. So when the demands of the business come flooding in, resulting in huge variation in the workloads being handled, the data center can perform with absolute resilience. Change is no longer the enemy.
And so we have introduced one of the great challenges of our industry today, along with a goal that any of us can achieve. It’s also important to remember that this isn’t just a vision of the future; it’s a vision for today, and we call it The Fluid Data Center.
First posted on Data Centre Network