The End of Power, Cooling and Cabling Centers; Beginning of “Data” Centers


The application ecosystem

The data center world has spent, and wasted, enough resources on a misplaced emphasis on the facility aspects of the data center, while paying nowhere near enough attention to the data center’s merit, purpose, and true output. So often we are overwhelmed by the extensive focus on “power” (UPS, ATS, generators, busbars, etc.), “cooling” (free cooling, hybrid cooling, CRACs, CRAHs, chillers, liquid cooling, in-rack cooling, in-row cooling, etc.), structured “cabling,” physical “security,” site and “building,” and so forth. Yet despite their importance, these parameters do not deal with “data” directly. What has been ignored for decades, and unfortunately continues to be ignored in most data center principles and guidelines, is the Information Technology (IT) side of the equation: the infrastructure that actually deals with the data and the application. Moreover, virtualization and cloud are critical parameters that remain practically beyond the knowledge and vision of facility-minded “data” center practitioners.

Information Technology (IT) is the actual infrastructure that deals directly with the data and application.

There is a story behind this backwardness. It all started when people would call IBM and ask it to build a data center for them. Whatever IBM said or did was fine, because the bulk of their data center, and the core of their investment, was vested in mainframes. Then, as technology progressed, other IT vendors such as HP, Cisco, Microsoft, and Sun Microsystems (Oracle) came into the picture and said, “wait a minute, we can build data centers for you too.” In time, the industry realized these IT vendors didn’t have a clue how to power and cool data centers, and it swung to the other extreme. The extreme we are living in today is the power and cooling syndrome, which is wasting many talents and resources in the data center arena.

People build a warehouse, fill it up with a bunch of generators, chillers, and other facilities, and then wonder how they are going to deploy their application on top of the infrastructure they just invested millions to build. The truth of the matter is that power, cooling, cabling, safety, security, civil, architecture, network, storage, and so on are all important, if not equally important. They all need to work in harmony and in a proper atmosphere, an ecosystem, that justifies their existence: the application ecosystem.

When we came up with the idea of the application ecosystem at IDCA, people were astonished: “what does application have to do with data centers?”, as if they had literally forgotten the whole point of building data centers. But the wake-up call was overdue. And thus the notion of “application-centric” data centers was born.

According to IDCA, the purpose of a data center is to deliver the “application,” and the definition of a data center is “the infrastructure that supports the application ecosystem.” What is the application ecosystem? It is an environment, both physical and virtual, that allows the application to live and operate as intended.

The application ecosystem is an environment, both physical and virtual, that allows the application to live and operate as intended. An application can live, exist, “breathe,” and interact only in an ecosystem that supports its purpose. Such an ecosystem is vital for application delivery. The data center caters to that need by being the infrastructure that supports the application ecosystem (AE) and its wellbeing.

If you have a “data” center mindset, you first analyze the nature and requirements of your application, then build the data center around the application; as opposed to building the data center and then trying to somehow fit your application around it. Note: we are not advocating that facilities such as power and cooling are unimportant. They are critical. But they have to be designed around, and suited for, the applications they will be accommodating. Furthermore, as we move into a world of diversification across physical sites and implement true clouds at the topology layer of the application ecosystem, the availability role of site-specific power and cooling infrastructure will diminish, while the efficiency criteria for such infrastructure will carry ever more weight.
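To make that application-first sequence concrete, here is a minimal back-of-the-envelope sketch in Python. All figures (server count, per-server draw, redundancy factor, PUE target) are hypothetical illustrations, not IDCA-published numbers; the point is only the direction of the derivation, from application to facility:

```python
# Application-first capacity sizing: start from the application,
# derive the facility. All figures are hypothetical illustrations.

# Step 1: characterize the application workload first.
app_requirements = {
    "servers_needed": 120,      # sized from application load testing
    "watts_per_server": 450,    # measured draw under peak load
    "redundancy_factor": 1.2,   # headroom the application actually needs
}

# Step 2: derive the IT load from the application, not from the building.
it_load_kw = (app_requirements["servers_needed"]
              * app_requirements["watts_per_server"]
              * app_requirements["redundancy_factor"]) / 1000.0

# Step 3: size facility power and cooling around that IT load.
assumed_pue = 1.4               # assumed facility efficiency target
facility_kw = it_load_kw * assumed_pue

print(f"IT load:       {it_load_kw:.1f} kW")
print(f"Facility load: {facility_kw:.1f} kW at PUE {assumed_pue}")
```

Run as written, this yields an IT load of 64.8 kW and a facility load of roughly 90.7 kW. Building the other way around, you would have picked the 90 kW first and hoped the application fit.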

How many data centers have we visited where, at most, 25% to 50% of a 42U rack is filled, simply because they ran out of power and cooling capacity? How much real estate and human capital is being wasted in the United States and the rest of the world due to this seemingly simple issue? This is not a cooling and power issue! This is the issue of not building application-centric data centers, and instead building data centers that are unnecessarily huge, inefficient, and unfit for the applications they are hosting. On the other extreme, there are those who pile up kW after kW into every single rack under the tagline of “high-density,” without regard to basic safety parameters or the operational conduciveness of such implementations (I will write on high-density computing, power, and cooling later). As a result, with the wrong perspective, we end up with either centers that are built for anything but “data” and application, or centers that host applications in an “ecosystem” that is neither optimally safe nor sustainable.
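The arithmetic behind those quarter-full racks is simple. Here is a minimal sketch, again in Python and again with hypothetical figures (the per-rack power budget and per-server draw are assumptions, not data from any specific facility):

```python
# Why racks strand space when the facility, not the application, sets the limit.
# All figures are hypothetical illustrations.

rack_units_total = 42           # a standard 42U rack
rack_power_budget_w = 4000      # per-rack power the facility can deliver
server_size_u = 1               # 1U servers
server_draw_w = 600             # measured per-server draw under load

# Space says 42 servers fit; power says far fewer do.
fit_by_space = rack_units_total // server_size_u
fit_by_power = rack_power_budget_w // server_draw_w
servers_deployed = min(fit_by_space, fit_by_power)

fill_pct = 100 * servers_deployed * server_size_u / rack_units_total
print(f"Servers by space: {fit_by_space}, by power: {fit_by_power}")
print(f"Rack fill: {fill_pct:.0f}%")
```

With these numbers the rack tops out at 6 servers, roughly 14% of its physical capacity; the rest is stranded real estate, paid for but unusable.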

For as long as we build power and cooling centers, we cannot expect “data” center results from our investment.

For as long as we build power and cooling centers, we cannot expect “data” center results from our investment. This mindset is obsolete for the needs of today and surely inept for the vital necessities of tomorrow.
