
Tuesday, February 04, 2020

A New Status Quo for Data Centers--Seamless Communication From Core to Cloud to Edge


A discussion with two leading IT and critical infrastructure executives on how the state of data centers in 2020 demands better speed, agility, and efficiency from IT resources wherever they reside.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Vertiv.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest insights into data center strategies.

As 2020 ushers in a new decade, the forces shaping data center decisions are extending compute resources to new places. With the challenging goals of speed, agility, and efficiency, enterprises and service providers alike will be seeking new balance between the need for low latency and optimal utilization of workload placement.

Hybrid models will therefore include more distributed, confined, and modular data centers at or near the edge.

These are but some of a few top-line predictions on the future state of the modern data center design. Stay with us as we examine, with two leading IT and critical infrastructure executives, how these data center variations nonetheless must also interoperate seamlessly from core to cloud to edge.


Here to help us learn more about the state of data centers in 2020 is Peter Panfil, Vice President of Global Power at Vertiv™. Welcome, Peter.

Peter Panfil: How are you, Dana?

Gardner: I’m doing great. We’re also here with Steve Madara, Vice President of Global Thermal at Vertiv. Welcome, Steve.

Steve Madara: Thank you, Dana.

Gardner: The world is rapidly changing in 2020. Organizations are moving past the debate around hybrid deployments, from on-premises to public clouds. Why do we need to also think about IT architectures and hybrid computing differently, Peter?

Moving to the edge, with momentum 

Panfil: We noticed a trend at Vertiv in our customer base. That trend is toward a new generation of data centers. We have been living with distributed IT, client-server data centers moving to cloud, either a public cloud or a private cloud.

But what we are seeing is the evolution of an edge-to-core, near-real-time data center generation. And it’s being driven by devices everywhere, the “connected-all-the-time” model that all of us seem to be going to.

And so, when you are in a near-real-time world, you have to have infrastructure that supports your near-real-time applications. And that is what the technology folks are facing. I refer to it as a pack of dogs chasing them -- the amount of data that’s being generated, the applications running remotely, and the demand for availability, low latency, and driving cost down as much as you possibly can. This is what’s changing how they approach their critical infrastructure space.

Gardner: And so, a new equilibrium is emerging. How is this different from the past?

Madara: If we go back 20 years, everything was centralized at enterprise data centers. Then we decided to move to decentralized, and then back to centralized. We saw a move to colocation as people decided that’s where they could get lower cost to run their apps. And then things went to the cloud, as Peter said earlier.

And now, we have a huge number of devices connected locally. Cisco says that by late 2020 there will be 23 billion connected devices, and over half of those are going to be machine-to-machine communications, where, as Peter mentioned earlier, latency is going to be very, very critical.

An interesting read is Michael Lewis’s book Flash Boys, about the arbitrage taking place with the low latency you have in stock market trading. I think we are going to see more of that moving to the edge. The edge is more like a smart rack or smart row deployment in an existing facility. It’s going to be multi-tenant, because these sites will be spread throughout large cities. There could be 20 or 30 of these edge data center sites hosting different applications for customers.

This move to the edge is also going to provide IT resources in a lot of underserved markets that don’t yet have pervasive compute, especially in emerging countries.

Gardner: Why is speed so important? We have been talking about this now for years, but it seems like the need for speed to market and speed to value continues to ramp up. What’s driving that?

Panfil: There is more than one kind of speed. There is speed of response of the application -- something all of us demand. I have to have low latency in the transactions I am performing with my data or with my applications. So there is the speed of the actual data being transmitted.

There is also speed of deployment. When Steve talked earlier about centralized cloud deployments in these core data centers, your data might be going over a significant distance, hopping along the way. Well, if you can’t live with that latency that gets inserted, then you have to take the IT application and put it closer to the source and consumer of the data. So there is a speed of deployment, from core to edge, that happens.

And the third type of speed is you have to have low-first-cost, high-asset-utilization, and rapid-scalability. So that’s a speed of infrastructure adaptation to what the demands for the IT applications are.

So when we mean speed, I often say it’s speed, speed, and speed. First, it’s the data speed. How did I achieve that? I did it by deploying fast, at the scale needed for the applications, and lastly at a cost and reliability that makes it tolerable for the business.

Gardner: So I guess it’s speed-cubed, right?

Panfil: At least, speed-cubed. Steve, if we had a nickel for every time one of our customers said “speed,” we wouldn’t have to work anymore. They are consumed with the different speeds that they have to deal with -- and it’s really the demands of their customers.

Gardner: Vertiv for years has been looking at the data center of the future and making some predictions around what to expect. You have been rather prescient. To continue, you have now identified several areas for 2020, too. Let’s go through those trends.

Steve, Vertiv predicts that “hybrid architectures will go mainstream.” Why did you identify that, and what do you mean?

The future goes hybrid 

Madara: If we look at the history of going from centralized to decentralized, and going to colocation and cloud applications, it shows the ongoing evolution of Internet of Things (IoT) sensors, 5G networks, smart cities, autonomous cars, and how more and more of that data is generated and will need to be processed locally. A lot of that is from machine-to-machine applications.

So when we now talk about hybrid, we have to get very, very close to the source, as far as the processing is concerned. That’s going to be a large-scale evolution that’s going to drive the need for hybrid applications. There is going to be processing at the edge as well as centralized applications -- whether it’s in a cloud or hosted in colocation-based applications.

Panfil: Steve, you and I both came up through the ranks. I remember when the data closet down the hall was basically a communications matrix. Its intent was to get communications from wherever we were to wherever our core data center was.

Well, the cloud is not going away. Number two, enterprise IT is not going away. What the enterprise is saying is, “Okay, I am going to take my secret sauce and I am going to put it in an edge data center. I am going to put the compute power as close to my consumer of that data and that application as I possibly can. And then I am going to figure out where the rest of it’s going to go.”
If I can live with the latency I get out of a core data center, I am going to stay in the cloud. If I can't, I might even break up my enterprise data center into small or micro data centers that give me even better responses.


Dana, it’s interesting, there was a recent wholesale market summary published that said the difference between the smaller and the larger wholesale deals widened. So what that says is the large wholesale deals are getting bigger, the small wholesale deals are getting smaller, and that the enterprise-based demand, in deployments under 600 kilowatts, is focused on low-latency and multi-cloud access.

That tells us that our customers, the users of that critical space, are trying to place their IT appliances as close as they can to their customers, eliminating the latency, responding with speed, and then figuring out how to mesh that edge deployment with their core strategy.

Gardner: Our second trend gets back to the speed-cubed notion. I have heard people describe this as a new arms race, because while it might be difficult to differentiate yourself when everyone is using the same public cloud services, you can really differentiate yourself on how well you can conduct yourself at speed.

What kinds of capabilities across your technologies will make differentiation around speed work to an advantage as a company?

The need for speed 

Panfil: Well, I was with an analyst recently, and I said the new reality is not that the big will eat the small -- it’s that the fast will eat the slow. And any advantage that you can get in speed of applications, speed of deployment, deploying those IT assets -- or morphing the data center or critical space infrastructure -- helps improve capital efficiency. What many customers tell us is that they have to shorten the time between deciding to spend money on IT assets and the time those assets start creating revenue.

They want help being creative in lowering their first-cost, in increasing asset utilization, and in maintaining reliability. If, holy cow, my application goes down, I am out of business. And then they want to figure out how to manage things like supply chains and forecasting, which is difficult to do in this market, and to help them be as responsive as they can to their customers.

Madara: Forecasting and understanding the new applications -- whether it’s artificial intelligence (AI) or 5G -- means the CIOs need to decide where to put those applications, whether in the cloud or at the edge. Technology is changing so fast that nobody can predict far into the future where I will need that capacity and what type of capacity I will need.

So, it comes down to being able to put that capacity in the place where I need it, right when I need it, and not too far in advance. Again, I don’t want to spend the capital, because I may put it in the wrong place. So it’s got to be about tying the demand with the supply, and that’s what’s key as far as the infrastructure.

https://www.vertiv.com/en-us/about/news-and-insights/corporate-news/proliferation-of-hybrid-computing-models-among-2020-data-center-trends-identified-by-vertiv-experts/

And the other element I see is technology is changing fast, even on the infrastructure side. For our equipment, we are constantly making improvements every day, making it more efficient, lower cost, and with more capability. And if you put capacity in today that you don’t need for a year or two down the road, you are not taking advantage of the latest, greatest technology. So really it’s coupling the demand to the actual supply of the infrastructure -- and that’s what’s key.

Another consideration is that many of these large companies, especially in the colocation market, have their financial structure as a real estate investment trust (REIT). As a result, they need to tie revenue with expenses tighter and tighter, along with capital spending.

Panfil: That’s a good point, Steve. We redesigned our entire large power portfolio at Vertiv specifically to be able to address this demand.

In previous generations, for example, the uninterruptible power supply (UPS) was built as a complete UPS. The new generation is built as a power converter, plus an I/O section, plus an interface section that can be rapidly configured to the customer, or, in some cases, put into a vendor-managed inventory program. This approach allows us to respond to the market and customers quicker.

We were forced to change our business model in such a way that we can respond in real time to these kinds of capacity-demand changes.

Madara: And to add to that, we have to put together more and more modules and solutions where we are bundling the equipment to deliver it faster, so that you don’t have to do testing on site or assembly on site. Again, we are putting together solutions that help the end-user address the speed of the construction of the infrastructure.


I also think that this ties into the relationship that the person who owns the infrastructure has with their supplier base. Those relationships have to build in, as Peter mentioned earlier, the ability to do stocking of inventory, of having parts available on-site to go fast.

Gardner: In summary so far, we have this need for speed across multiple dimensions. We are looking at more hybrid architectures, up and down the scale -- from edge to core, on-premises to the cloud. And we are also looking at crunching more data and making real-time analytics part of that speed advantage. That means being able to have intelligence brought to bear on our business decisions and making that as fast as possible.

So what’s going on now with the analytics efficiency trend? Even if average rack density remains static due to a lack of space, how will such IT developments as high performance computing (HPC) help make this analysis equation work to the business outcome’s advantage?

High-performance, high-density pods 

Madara: The development of AI applications, machine learning (ML), and what could be called deep learning are evolving. Many applications are requiring these HPC systems. We see this in the areas of defense, gaming, the banking industry, and people doing advanced analytics and tying it to a lot of the sensor data we talked about for manufacturing.

It’s not yet widespread, it’s not across the whole enterprise or the entire data center, and these are often unique applications. What I hear in large data centers, especially from the banks, is that they will need to put these AI applications up on 30-, 40-, 50- or 60-kW racks -- but they only have three or four of these racks in the whole data center.

The end-user will need to decide how to tune or adjust facilities to accommodate these small but growing pods of high-density compute. And if they are in their own facility, if it’s an enterprise that has its own data center, they will need to decide how they are going to facilitize for that type of equipment.

A lot of the colocation hosting facilities have customers saying, “Hey, I am going to be bringing in a couple of racks that are very high density in the future.” And a lot of these multi-tenant data centers are saying, “Oh, how do I provision for these, because my data center was laid out for an average of maybe 8 kW per rack? How do I manage that, especially in data centers that didn’t previously have chilled water to provide liquid to the rack?”

We are now seeing a need to provide chilled water cooling that would go to a rear door heat exchanger on the back of the rack. It could be chilled water that would go to a rack for chip cooling applications. And again, it’s not the whole data center; it’s a small segment of the data center. But it raises questions of how I do that without overkill on the infrastructure needed.


Gardner: Steve, do you expect those small pods of HPC in the data center to make their way out to the edge when people do more data crunching for the low-latency requirements, where you can’t move the data to a data center? Do you expect to have this trend grow more distributed?

Madara: Yes, I expect this will be for more than the enterprise data center and cloud data centers. I think you are going to see analytics applications developed that are going to be out at the edge because of the requirements for latency.

When you think about the autonomous car, none of us knows what’s going to be required there for that high-performance processing, but I would expect there is going to be a need for it down at the edge.

Gardner: Peter, looking at the power side of things when we look at the batteries that help UPS and systems remain mission-critical regardless of external factors, what’s going on with battery technology? How will we be using batteries differently in the modern data center?

Battery-powered savings 

Panfil: That’s a great question. Battery technology has been evolving at an incredibly fast rate. It’s being driven by the electric vehicles. That growth is bringing to the market batteries that have a size and weight advantage. You can’t put a big, heavy pack of batteries in a car and hope to have it perform well.

It also gives a long life expectation. So data centers used to have to decide between long-life, high-maintenance wet cells and shorter-life, lower-maintenance valve-regulated lead-acid (VRLA) batteries. With lithium-ion batteries (LIBs) and thin plate pure lead (TPPL) batteries, the total cost of ownership (TCO) has started to become very advantageous for these newer batteries.

Our sales leadership sent me the most recent TCO comparison of TPPL and LIBs versus traditional VRLA batteries, and the TCO winner is the LIBs and the TPPL batteries. In some cases, over a 10-year period, the TCO is a factor of two lower for LIB and TPPL.
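As a rough illustration of how a factor-of-two gap over 10 years can arise, here is a sketch of a replacement-cycle TCO comparison. Every figure below is a hypothetical assumption for demonstration, not Vertiv pricing data; the general pattern is that VRLA is cheaper up front but typically needs replacement and more service within a decade, while LIB/TPPL strings can last the full horizon:

```python
# Illustrative 10-year battery TCO sketch. All numbers are hypothetical
# assumptions for demonstration -- not vendor pricing data.

def ten_year_tco(purchase_cost, life_years, annual_maintenance, horizon=10):
    """Total cost over `horizon` years: purchases (with replacements)
    plus maintenance."""
    replacements = -(-horizon // life_years)  # ceiling division
    return purchase_cost * replacements + annual_maintenance * horizon

# Assumed: VRLA is cheaper up front but is replaced every 4 years and
# needs more service; LIB costs more but lasts the full 10 years.
vrla = ten_year_tco(purchase_cost=100_000, life_years=4, annual_maintenance=8_000)
lib  = ten_year_tco(purchase_cost=180_000, life_years=10, annual_maintenance=2_000)

print(f"VRLA 10-yr TCO: ${vrla:,}")   # 3 purchases + 10 years of service
print(f"LIB  10-yr TCO: ${lib:,}")
print(f"ratio: {vrla / lib:.2f}x")    # roughly the factor of two cited
```

Under these assumed inputs the VRLA plan costs about 1.9x the LIB plan, which is the shape of the result described above; real comparisons would also fold in cooling, footprint, and cycle-count effects.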


Whereas the cloud generation of data centers was all about lowest first cost, this edge-to-core generation of data centers is about TCO. There are other levers that they can start to play with, too.

So, for example, they have life cycle and operating temperature variables. That used to be a real limitation. Nobody in the data center wanted their systems to go on batteries. They tried everything they could to not have their systems go on the battery because of the potential for shortening the life of their batteries or causing an outage.

Today we are developing IT systems infrastructure that takes advantage of not only LIBs, but also pure lead batteries that can increase the number of [discharge/recharge] cycles. Once you increase the number of cycles, you can think about deploying smart power configurations. That means using batteries not only in the critical infrastructure for a very short period of time when the power grid utility fails, but to use that in critical infrastructure to help offset cost.

If I can reduce utility use at peak demand periods, for example, or I can reduce stress on the grid at specified times, then batteries are not only a reliability play – they are also a revenue-offset play. And so, we’re seeing more folks talking to us about how they can apply these new energy storage technologies to change the way they think about using their critical space.

Also, folks used to think that the longer the battery time, the better off they were because it gave more time to react to issues. Now, folks know what they are doing, they are going with runtimes that are tuned to their operations team’s capabilities. So, if my operations team can do a hot swap over an IT application -- either to a backup critical space application or to a redundant data center -- then all of a sudden, I don’t need 5 to 12 minutes of runtime, I just need the bridge time. I might only need 60 to 120 seconds.
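To see why trimming runtime matters, a back-of-envelope energy sizing (load and runtime values here are illustrative assumptions, not figures from the discussion) shows how much less stored energy a 90-second bridge needs than a traditional ride-through:

```python
# Rough UPS battery-energy sizing: energy = load x runtime.
# The 500 kW load and the runtime choices are illustrative assumptions.

def battery_energy_kwh(load_kw, runtime_seconds):
    """Nominal stored energy needed to carry `load_kw` for `runtime_seconds`."""
    return load_kw * runtime_seconds / 3600.0

load_kw = 500  # assumed critical load

traditional = battery_energy_kwh(load_kw, 10 * 60)  # 10-minute ride-through
bridge      = battery_energy_kwh(load_kw, 90)       # 90-second hot-swap bridge

print(f"10-minute runtime: {traditional:.1f} kWh")  # 83.3 kWh
print(f"90-second bridge:  {bridge:.1f} kWh")       # 12.5 kWh
```

A bridge-time design needs well under a fifth of the nominal stored energy in this sketch, which is the headroom that makes the smaller, cycle-tolerant battery strings described above practical.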

Now, if I can have these battery times tuned to the operations’ capabilities -- and I can use the batteries more often or in higher temperature applications -- then I can really start to impact my TCO and make it very, very cost-effective.

Gardner: It’s interesting; there is almost a power analog to hybrid computing. We can either go to the cloud or the grid, or we can go to on-premises or the battery. Then we can start to mix and match intelligently. That’s really exciting. How does lessening dependence on the grid impact issues such as sustainability and conserving energy?

Sustainability surges forward 

Panfil: We are having such conversations with our key accounts virtually every day. What they are saying is, “I am eventually not going to make smoke and steam. I want to limit the number of times my system goes on a generator. So, I might put in more batteries, more LIBs or TPPL batteries, in certain applications because if my TCO is half the amount of the old way, I could potentially put in twice as much, and have the same cost basis and get that economic benefit.”

And so from a sustainability perspective, they are saying, “Okay, I might need at some point in the useful life of that critical space to not draw what I think I need to draw from my utility. I can limit the amount of power I draw from that utility.”

This is not a criticism, I love all of you out there in data center design, but most of them are designed for peak usage. So what these changes allow them to do is to design more for the norm of the requirements. That means they can put in less infrastructure, the potential to put in less battery. They have the potential to right-size their generators; same thing on the cooling side, to right-size the cooling to what they need and not for the extremes of what that data center is going to see.

From a sustainability perspective, we used to talk about the glass as half-full or half-empty. Now, we say there is too much of a glass. Let’s right-size the glass itself, and then all of the other things you have to do in support of that infrastructure are reduced.

Madara: As we look at the edge applications, many will not have backup generators. We will have alternate energy sources, and we will probably be taking more hits to the batteries. Is the LIB the better solution for that?

Panfil: Yes, Steve, it sure is. We will see customers with an expectation of sustainability, a path to an energy source that is not fossil fuel-based. That could be a renewable energy source. We might not be able to deploy that today, but they can now deploy what I call foundational technologies that allow them to take advantage of it. If I can have a LIB, for example, that stores excess energy and allows me to absorb energy when I’m creating more than I need -- then I can consume that energy on the other side. It’s better for everybody.

Gardner: We are entering an era where we have the agility to optimize utilization and reduce our total costs. The thing is that it varies from region to region. There are some areas where compliance is a top requirement. There are others where energy issues are a top requirement because of cost.

What’s going on in terms of global cross-pollination? Are we seeing different markets react to their power and thermal needs in different ways? How can we learn from that?

Global differences, normalized 

Madara: If we look at the size of data centers around the world, the data centers in the U.S. are generally much larger than in Europe. And what’s in Europe is much larger than what we have in other regions. So there are a couple of factors, as you mentioned: energy availability, cost of energy, and the size of the market and the users it serves. We may be looking at more edge data centers in very underserved markets in developing countries.

So, you are going to see the size of the data center and the technology used potentially different to better fit needs of the specific markets and applications. Across the globe, certain regions will have different requirements with regard to security and sustainability.

Even though we have these potential differences, we can meet the end-user needs to right-size the IT resources in that region. We are all more common than we are different in many respects. We all have needs for security, we all have needs for efficiency, it may just be to different degrees.

Panfil: There are different regional agency requirements, different governmental regulations that companies have to comply with. And so what we find, Dana, is that our customers are trying to normalize their designs. I won’t say they are standardizing their designs, because standardization says I am going to deploy exactly the same way everywhere in the world. I am a fan of Kit Kats, and Kit Kats are not the same globally; they vary by region. The same is true for data centers.


So, when you look at how the customers are trying to deal with the regional and agency differences that they have to live with, what they find themselves doing is trying to normalize their designs as much as they possibly can globally, realizing that they might not be able to use exactly the same power configuration or exactly the same thermal configuration. But we also see pockets where different technologies are moving to the forefront. For example, China has data centers running at high-voltage DC, 240 volts DC, while we have always had 48-volt DC IT applications in the Americas and in Europe. Customers are looking at three things -- speed, speed, and speed.

And so when we look at the application, for example, of DC, there used to be a debate, is it AC or DC? Well, it’s not an “or” it’s an “and.” Most of the customers we talk to, for example, in Asia are deploying high-voltage DC and have some form of hybrid AC plus DC deployment. They are doing it so that they can speed their applications deployments.

In the Americas, the Open Compute Project (OCP) deploys either 12 or 48 volts to the rack. I look at it very simply. We have been seeing a move from 2N architecture to N+1 architecture in the power world for a decade; this is nothing more than adopting the N+1 architecture at the rack level versus the 2N architecture at the rack level.

And so what we see is when folks are trying to, number one, increase the speed; number two, increase their utilization; number three, lower their total cost, they are going to deploy infrastructures that are most advantageous for either the IT appliances that they are deploying or for the IT applications that they are running, and it’s not the same for everybody, right Steve?

You and I have been around the planet way too many times, you are a million miler, so am I. It’s amazing how a city might be completely different in a different time zone, but once you walk into that data center, you see how very consistent they have gotten, even though they have done it completely independently from anybody else.

Madara: Correct!

Consistency lowers costs and risks 

Gardner: A lot of what we have talked about boils down to a need to preserve speed-to-value while managing total cost of utilization. What is there about these multiple trends that people can consider when it comes to getting the right balance, the right equilibrium, between TCO and that all important speed-to-value?

Madara: Everybody strives to drive cost down. The more you can drive the cost down of the infrastructure, the more you can do to develop more edge applications.

I think we are seeing a very large rate of change of driving cost down. Yet we still have a lot of stranded capacity out there in the marketplace. And people are making decisions to take that down without impacting risk, but I think they can do it faster.

Peter mentioned standardization. Standardization helps drive speed, whether it’s normalization or similarity. What allows people to move fast is to repeat what they are doing instead of snowflake data centers, where every new one is different.

Repeating allows you to build a supply base ecosystem where everybody has the same goal, knows what to do, and can be partners in driving out cost and in driving speed. Those are some of the key elements as we go forward.

Gardner: Peter when we look to that standardization, you also allow for more seamless communication from core to cloud to edge. Why is that important, and how can we better add intelligence and seamless communication among and between all these different distributed data centers?

Panfil: When we normalize designs globally, we take a look at the regional differences, sort out what the regional differences have to be, and then put a proof of concept deployment. And out of that comes a consistent method of procedure.

When we talk about managing the data center effectively and efficiently, first of all, you have to know what you have. And second, you have to know what it’s doing. And so, we are seeing more folks normalizing their designs and getting consistency. They can then start looking at how much of their available capacity from a design perspective they are actually using both on a normal basis and on a peak basis and then they can determine how much of that they are willing to use.

We have some customers who are very risk-averse. They stay in the 2N world, which is a 50 percent maximum utilization. We applaud them for it because they are not going to miss a transaction.

There are others who will say, “I can live with the availability that an N+1 architecture gives me. I know I am going to have to be prepared for more failures. I am going to have to figure out how to mitigate those failures.”
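The utilization math behind that trade-off is simple to sketch (real designs also derate for redundancy within modules and for growth headroom): a 2N design can never safely load past 50 percent of installed capacity, while an N+1 design approaches full utilization as N grows:

```python
# Maximum safe utilization under common redundancy schemes (sketch).

def max_utilization(total_modules, redundant_modules):
    """Fraction of installed capacity usable while still surviving
    the loss of `redundant_modules` modules."""
    usable = total_modules - redundant_modules
    return usable / total_modules

print(f"2N  (2 modules, 1 redundant): {max_utilization(2, 1):.0%}")  # 50%
print(f"N+1 (N=3, 4 modules total):   {max_utilization(4, 1):.0%}")  # 75%
print(f"N+1 (N=5, 6 modules total):   {max_utilization(6, 1):.0%}")  # 83%
```

This is why the move from 2N toward N+1 shows up directly as higher asset utilization, at the price of less margin when a module fails.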

So they are working constantly at figuring out how to monitor what they have and figure out what the equipment is doing, and how they can best optimize the performance. We talked earlier about battery runtimes, for example. Sometimes they might get short or sometimes they might be long.

As these companies get into this step and repeat function, they are going to get consistency of their methods of procedure. They’re going to get consistency of how their operations teams run their physical infrastructure. They are going to think about running their equipment in ways that is nontraditional today but will become the norm in the next generation of data centers. And then they are going to look at us and say, “Okay, now that I have normalized my design, can I use rapid deployment configuration? Can I put it on a skid, in a container? Can I drop it in place as the complete data center?”

https://www.vertiv.com/en-us/about/news-and-insights/corporate-news/proliferation-of-hybrid-computing-models-among-2020-data-center-trends-identified-by-vertiv-experts/
Well, we build it one piece of equipment at a time and stitch it all together. Your question about monitoring is interesting, because we talked to a major company just last month. Steve and I were visiting them at their site. And they said, “You know what? We spend an awful lot of time figuring out how our building management system and our data exchange happens at the site. Could Vertiv do some of that in the factory? Could you configure our data acquisition systems? Could you test them there in the factory? Could we know that when the stuff shows up on site that it’s doing the things that it’s supposed to be doing, instead of us playing hunt and peck to figure out what the issues are?”

We said, “Of course.” So we are adding that capability now into our factory testing environment. What we see is a move up the evolutionary scale. Instead of buying separate boxes, we are seeing them buying solutions -- and those solutions include both monitoring and controls.

Steve didn’t even get a chance to mention the industry-leading Vertiv Liebert® iCOM™ control for thermal. These controls and monitoring systems allow them to increase their utilization rates because they know what they have and what it’s doing.

Gardner: It certainly seems to me, with all that we have said today, that the data center status quo just can’t stand. Change and improvement is inevitable. Let’s close out with your thoughts on why people shouldn’t be standing still; why it’s just not acceptable.

Innovation is inevitable 

Madara: At the end of the day, the IT world is changing rapidly. Whether in the cloud or down at the edge, we need to adjust to those needs. We need to be able to cut enough out of the cost structure -- there is always a demand to drive cost down.

If we don’t change with the world around us, if we don’t meet the requirements of our customers, things aren’t going to work out – and somebody else is going to take it and go for it.

Panfil: Remember, it’s not the big that eats the small, it’s the fast that eats the slow.

Madara: Yes, right.

Panfil: And so, what I have been telling folks is, you got to go. The technology is there. The technology is there for you to cut your cost, improve your speed, and increase utilization. Let’s do it. Otherwise, somebody else is going to do it for you.

Gardner: I’m afraid we’ll have to leave it there. We have been exploring the forces shaping data center decisions and how that’s extending compute resources to new places with the challenging goals of speed, agility, and efficiency.

And we have learned how enterprises and service providers alike are seeking new balance between the need for low latency and optimal utilization of workload placement. So please join me in thanking our guests, Peter Panfil, Vice President of Global Power at Vertiv. Thank you so much, Peter.

Panfil: Thanks for having me. I appreciate it.

Gardner: And we have also been joined by Steve Madara, Vice President of Global Thermal at Vertiv. Thanks so much, Steve.

Madara: You’re welcome, Dana.


Gardner: And a big thank you as well to our audience for joining us for this sponsored BriefingsDirect data centers strategies interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Vertiv-sponsored discussions.

Thanks again for listening. Please pass this along to your community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Vertiv.

A discussion with two leading IT and critical infrastructure executives on how the state of data centers in 2020 demands better speed, agility, and efficiency from IT resources wherever they reside. Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.

You may also be interested in:

Friday, November 08, 2019

The Evolution of Data Center Infrastructure Has Now Ushered in The Era of Data Center-as-a-Service

https://www.vertiv.com/en-us/services-catalog/maintenance-services/remote-services/life-services/

A discussion on how intelligent data center designs and components are delivering what amounts to data centers-as-a-service to SMBs, enterprises, and public sector agencies.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Vertiv.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest insights into data center strategies.

There has never been a better time to build an efficient, protected, powerful, contained, and modular data center -- yet many enterprises and public sector agencies cling to aging, vulnerable, and chaotic legacy IT infrastructure.

Stay with us now as we examine how automation, self-healing, and increasingly intelligent data center designs and components are delivering what amounts to data centers-as-a-service.

Here to help us learn more about a modern data center strategy that extends to the computing edge -- and beyond -- is Steve Lalla, Executive Vice President of Global Services at Vertiv. Welcome, Steve.


Steve Lalla: Thank you, Dana.

Gardner: Steve, when we look at the evolution of data center infrastructure, monitoring, and management software and services, they have come a long way. What’s driving the need for change now? What’s making new technology more pressing and needed than ever?

Lalla: There are a number of trends taking place. The first is the products we are building and the capabilities of those products. They are getting smarter. They are getting more enabled. Moore’s Law continues. What we are able to do with our individual products is improving as we progress as an industry.

The other piece that’s very interesting is it’s not only how the individual products are improving, but how we connect those products together. The connective tissue of the ecosystem and how those products increasingly operate as a subsystem is helping us deliver differentiated capabilities and differentiated performance.

So, data center infrastructure products are becoming smarter and they are becoming more interconnected.

Interconnectivity across ecosystems 

The second piece that’s incredibly important is broader network connectivity -- whether it’s wide area connectivity or local area connectivity. Over time, all of these products need to be more connected, both inside and outside of the ecosystem. That connectivity is going to enable new services and new capabilities that don’t exist today. Connectivity is a second important element.

Third, data is exploding. As these products get smarter, work more holistically together, and are more connected, they provide manufacturers and customers more access to data. That data allows us to move from a break/fix type of environment into a predictive environment. It’s going to allow us to offer more just-in-time and proactive service versus reactive and timed-based services.
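As a toy illustration of that shift from time-based to predictive service, trended telemetry can flag a component before it fails. The numbers, threshold, and linear-trend assumption below are invented for illustration:

```python
# Toy illustration (invented numbers, not real telemetry): a UPS
# battery's measured runtime is declining roughly linearly, so we
# fit the trend and estimate when it crosses the service threshold,
# allowing just-in-time service instead of waiting for a failure.

runtimes_min = [12.0, 11.6, 11.1, 10.7, 10.2]  # monthly runtime readings
threshold_min = 8.0                             # service the battery below this

# Crude linear trend: average decline per month across the window.
decline_per_month = (runtimes_min[0] - runtimes_min[-1]) / (len(runtimes_min) - 1)

# Months until the latest reading drifts down to the threshold.
months_left = (runtimes_min[-1] - threshold_min) / decline_per_month

print(f"declining ~{decline_per_month:.2f} min/month; "
      f"schedule battery service in ~{months_left:.1f} months")
```

A production system would fit many signals against a fleet-wide data lake rather than a five-point line, but the idea is the same: act on the trend, not the alarm.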

And when we look at the ecosystems themselves, we know that over time these centralized data centers -- whether they be enterprise data centers, colocation data centers, or cloud data centers -- are going to be more edge-based and module-based data centers.

And as that occurs, all the things we talked about -- smarter products, more connectivity, data and data enablement -- are going to be more important as those modular data centers become increasingly populated in a distributed way. To manage them, to service them, is going to be increasingly difficult and more important.

And one final cultural piece is happening. A lot of the folks who interact with these products and services will face what I call knowledge thinning. The highly trained professionals -- especially on the power side of our ecosystem -- that talent is reaching retirement age and there is a high demand for their skills. As data center growth continues to be robust, that knowledge thinning needs to be offset with what I talked about earlier.

So there are a lot of really interesting trends under way right now that impact the industry and are things that we at Vertiv are looking to respond to.

Gardner: Steve, these things when they come together form, in my thinking, a whole greater than the sum of the parts. When you put this together -- the intelligence, efficiency, more automation, the culture of skills -- how does that lead to the notion of data center-as-a-service?

Lalla: As with all things, Dana, one size does not fit all. I’m always cautious about generalizing because our customer base is so diverse. But there is no question that in areas where customers would like us to be operating their products and their equipment instead of doing it themselves, data center-as-a-service reduces the challenges of knowledge thinning and eases the burden of optimizing products. We have our eyes on all those products on their behalf.

And so, through the connectivity of the product data and the data lakes we are building, we are better at predicting what should be done. Increasingly, our customers can partner with us to deliver a better performing data center.

Gardner: It seems quite compelling. Modernizing data centers means a lot of return on investment (ROI), of doing more with less, and becoming more predictive about understanding requirements and then fulfilling them.

Why are people still stuck? What holds organizations back? I know it will vary from site to site, but why the inertia? Why don’t people run to improve their data centers seeing as they are so integral to every business? 

Adoption takes time

Lalla: Well, these are big, complex pieces of equipment. They are not the kind of equipment that you decide to change every year. The rate at which connectivity, technology, processing capability, and data liberation get adopted is governed by the speed at which customers are able to change out the equipment they currently have in their data centers.

Now, I think that we, as a manufacturer, have a responsibility to do what we can to improve those products over time and make new technology solutions backward compatible. That can be through updating communication cards, building adjunct solutions like we do with Liebert® iCOM™-S and gateways, and figuring out how to take equipment that is going to be there for 15 or 20 years and make it as productive and as modern as you can, given that it’s going to be there for so long.

So number one, the duration of product in the environment is certainly one of the headwinds, if you will.

https://www.vertiv.com/en-us/products-catalog/thermal-management/thermal-control-and-monitoring/liebert-icom-s-thermal-system-supervisory-control2/

Another is the concept of connectivity. And again, different customers have different comfort levels with connectivity inside and outside of the firewall. Clearly the more connected we can be with the equipment, the more we can update the equipment and assess its performance. Importantly, we can assess that performance against a big data lake of other products operating in an ecosystem. So, I think connectivity, and having the right solutions to provide for great connectivity, is important.

And there are cultural elements to our business in that, “Hey, if it works, why change it, right?” If it’s performing the way you need it to perform and it’s delivering on the power and cooling needs of the business, why make a change? Again, it’s our responsibility to work with our customers to help them best understand that when new technology gets added -- when new cards get added and when new assistants, I call them digital assistants, get added -- that technology will have a differential effect on the business.

So I think there is a bit of reality that gets in the way of that sometimes.

Gardner: I suppose it’s imperative for organizations like Vertiv to help organizations move over that hump to get to the higher-level solutions and overcome the obstacles because there are significant payoffs. It also sets them up to be much more able to adapt to the future when it comes to edge computing, which you mentioned, and also being a data-driven organization.

How is Vertiv differentiating itself in the industry? How does combining services and products amount to a solution approach that helps organizations modernize?

Three steps that make a difference

Lalla: I think we have a differentiated perspective on this. When we think about service, and we think about technology and product, we don’t think about them as separate. We think about them altogether. My responsibility is to combine those software and service ecosystems into something more efficient that helps our customers have more uptime, and it becomes more predictive versus break/fix to just-in-time-types of services.

And the way we do that is through three steps. Number one, we have to continue to work closely with our product teams to ensure early in the product definition cycle which products need to be interconnected into an as-a-service or a self-service ecosystem.

We spend quite a bit of time impacting the roadmaps and putting requirements into the product teams so that they have a better understanding of what, in fact, we can do once data and information get liberated. A great strategy always starts with great product, and that’s core to our solution.

The next step is a clear understanding that some of our customers want to service equipment themselves. But many of our customers want us to do that for them, whether it’s physically servicing equipment or monitoring and managing the equipment remotely, such as with our LIFE™ management solution.

We are increasingly looking at that as a continuum. Where does self-service end, and where do delivered services begin? In the past, self-service and delivered service have been relatively distinct. But increasingly, you see those being blended together because customers want a seamless handover. When they discover something needs to be done, we at Vertiv can pick up from there and perform that service.

So the connective tissue between self-service and Vertiv-delivered service is something we are bringing into sharper focus.

And then finally, we talked about this earlier, we are being very active at building a data lake that comes from all the ecosystems I just talked about. We have billions of rows of normalized data in our data lake to benefit our customers as we speak.

Gardner: Steve, when you service a data center at that solution-level through an ecosystem of players, it reminds me of when IT organizations started to manage their personal computers (PCs) remotely. They didn’t have to be on-site. You could bring the best minds and the best solutions to bear on a problem regardless of where the problem was -- and regardless of where the expertise was. Is that what we are seeing at the data center level?

Self-awareness remotely and in-person

Lalla: Let’s be super clear, to upgrade the software on an uninterruptible power supply (UPS) is a lot harder than to upgrade software on a PC. But the analogy of understanding what must be done in-person and what can be done remotely is a good one. And you are correct. Over years and years of improvement in the IT ecosystems, we went from a very much in-person type of experience, fixing PCs, to one where very much like mobile phones, they are self-aware and self-healing.

This is why I talked about the connectivity imperative earlier, because if they are not connected then they are not aware. And if they are not aware, they don’t know what they need to do. And so connectivity is a super important trend. It will allow us to do more things remotely versus always having to do things in-person, which will reduce the amount of interference we, as a provider of services, have on our customers. It will allow them to have better uptime, better ongoing performance, and even over time allow tuning of their equipment.

We are at the early stages of that journey. You could argue the mobile phone and the PC are at the very late stages of their journey of automation. We are in the very early stages of it, but the things we talked about earlier -- smarter products, connectivity, and data -- are all important factors influencing that.

Gardner: Another evolution in all of this is that there is more standardization, even at the data center level. We saw standardization as a necessary step at the server and storage level -- when things became too chaotic, too complex. We saw standardization as a result of virtualization as well. Is there a standardization taking place within the ecosystem and at that infrastructure foundation of data centers?

Standards and special sauce

Lalla: There has been a level of standardization in what I call the self-service layer, with protocols like BACnet, Modbus, and SNMP. Those at least allow a monitoring system to ingest information and data from a variety of diverse devices for minimally being able to monitor how that equipment is performing.
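To make concrete why those open protocols matter at the self-service layer, a monitoring system can normalize readings from devices speaking different protocols into one common record before analysis. The sketch below is hypothetical; the device names, payload fields, and register scaling are invented for illustration:

```python
# Hypothetical sketch: folding protocol-specific payloads (BACnet,
# Modbus, SNMP) into one record shape that a monitoring layer can
# ingest uniformly. Device names and field layouts are invented.
from dataclasses import dataclass

@dataclass
class Reading:
    device: str
    protocol: str    # "bacnet" | "modbus" | "snmp"
    metric: str      # e.g. "supply_air_temp_c", "ups_load_pct"
    value: float

def normalize(raw: dict) -> Reading:
    """Map one protocol-specific payload onto the common record."""
    if raw["proto"] == "modbus":
        # Modbus registers are unscaled integers; apply the device's scale factor.
        return Reading(raw["unit"], "modbus", raw["metric"], raw["reg"] * raw["scale"])
    if raw["proto"] == "snmp":
        return Reading(raw["agent"], "snmp", raw["metric"], float(raw["val"]))
    if raw["proto"] == "bacnet":
        return Reading(raw["object"], "bacnet", raw["metric"], float(raw["present_value"]))
    raise ValueError("unsupported protocol")

r = normalize({"proto": "modbus", "unit": "crac-01", "metric": "supply_air_temp_c",
               "reg": 215, "scale": 0.1})
print(r.device, r.value)  # crac-01 and a temperature of ~21.5
```

Once every device lands in one shape, the monitoring layer can compare performance across diverse equipment, which is exactly the minimum capability Lalla describes.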

I don’t disagree that there is an opportunity for even more standardization, because that will make that whole self-service, delivered-as-a-service ecosystem more efficient. But what we see in that control plane is really Vertiv’s unique special sauce. We are able to do things between our products with solutions -- like Liebert® iCOM™-S -- that allow our thermal products to work better together than if they were operating independently.

https://www.vertiv.com/en-us/services-catalog/maintenance-services/remote-services/life-services/

You are going to see an evolution of continued innovation in peer-to-peer networking in the control plane that probably will not be open and standard. But it will provide advances in how our products work together. You will see in that self-service, as-a-service, and delivered-service plane continued support for open standards and protocols so that we can manage more than just our own equipment. Then our customers can manage and monitor more of their own equipment.

And this special sauce, which includes the data lakes and algorithms -- a lot of intellectual property and capital in building those algorithms and those outcomes -- help customers operate better. We will probably stay close to the vest in the short term, and then we’ll see where it goes over time.

Gardner: You earlier mentioned moving data centers to the edge. We are hearing an awful lot architecturally about the rationale for not moving the edge data to the cloud or the data center, but instead moving the computational capabilities right out to the edge where that data is. The edge is where the data streams in, in massive quantities, and needs to be analyzed in real-time. That used to be the domain of the operational technology (OT) people.

As we think about data centers moving out to the edge, it seems like there’s a bit of an encroachment or even a cultural clash between the IT way of doing things and the OT way of doing things. How does Vertiv fit into that, and how does making data center-as-a-service help bring the OT and IT together -- to create a whole greater than the sum of the parts?

OT and IT better together 

Lalla: I think maybe there was a clash. But with modular data centers and things like SmartAisle and SmartRow that we do today, they can be fully contained, standalone systems. Increasingly, we are working with strategic IT partners on understanding how that ecosystem has to work as a complete solution -- not with power and cooling separate from IT performance, but taking the best of the OT world, power and cooling, and the best of the IT world, and combining that with things like alarms and fire suppression. We can build a remote management and monitoring solution that can be outsourced, if you want to consume it as a service, or in-sourced if you want to do it yourself.


And there’s a lot of work to do in that space. As an industry, we are in the early stages, but I don’t think it’s hard to foresee a modular data center that should operate holistically as opposed to just the sum of its parts.

Gardner: I was thinking that the OT-IT thing was just an issue at the edge. But it sounds like you’re also referring to it within the data center itself. So flesh that out a bit. How do OT and IT together -- managing all the IT systems, components, complexity, infrastructure, support elements -- work in the intelligent, data center-as-a-service approach?

Lalla: There is the data center infrastructure management (DCIM) approach, which says, “Let’s bring it all together and manage it.” I think that’s one way of thinking about OT and IT, and certainly Vertiv has solutions in that space with products like Trellis™.

But I actually think about it as: Once the data is liberated, how do we take the best of computing solutions, data analytics solutions, and stuff that was born in other industries and apply that to how we think about managing, monitoring, and servicing all of the equipment in our industrial OT space?

It’s not necessarily that OT and IT are one thing, but how do we apply the best of all of technology solutions? Things like security. There is a lot of great stuff that’s emerged for security. How do we take a security-solutions perspective in the IT space if we are going to get more connected in the OT space? Well, let’s learn from what’s going on in IT and see how we can apply it to OT.

Just because DCIM has been tackled for years doesn’t mean we can’t take more of the best of each world and see how you can put those together to provide a solution that’s differentiated.

I go back to the Liebert® iCOM™-S solution, which uses desktop computing, gateway technology, and application development running on a high-performance piece of IT gear, connected to OT gear, to get products that normally would work separately to actually work more seamlessly together. That provides better performance and efficiency than if those products operated separately.

Liebert® iCOM™-S is a great example of where we have taken the best of the IT world -- compute technology and connectivity -- and the best of the OT world -- power and cooling -- and built a solution that makes the interaction differentiated in the marketplace.

Gardner: I’m glad you raised an example because we have been talking at an abstract level of solutions. Do you have any other use cases or concrete examples where your concept for infrastructure data center-as-a-service brings benefits? When the rubber hits the road, what do you get? Are there some use cases that illustrate that? 

Real LIFE solutions

Lalla: I don’t have to point much further than our Vertiv LIFE Services remote monitoring solution. This solution came out a couple of years ago, partly from our Chloride® Group acquisition many years ago. LIFE Services allows customers to subscribe to have us do the remote monitoring, remote management, and analytics of what’s happening -- and whenever possible do the preventative care of their networks.

And so, LIFE is a great example of a solution with connectivity, with the right data flowing from the products, and with the right IT gear so our personnel take the workload away from the customer and allow us to deliver a solution. That’s one example of where we are delivering as-a-service for our customers.

https://www.vertiv.com/en-us/services-catalog/maintenance-services/remote-services/life-services/
We are also working with customers -- and we can’t expose who they are -- to bring their data into our large data lake so we can help them better predict how various elements of their ecosystem will perform. This helps them better understand when they need just-in-time service and maintenance versus break/fix service and maintenance.

These are two different examples where Vertiv provides services back to our customers. One is running a network operations center (NOC) on their behalf. Another uses the data lake that we’ve assimilated from billions of records to help customers who want to predict things and use the broad knowledge set to do that.

Gardner: We began our conversation with all the great things going on in modern data center infrastructure and solutions to overcome obstacles to get there, but economics plays a big role, too. It’s always important to be able to go to the top echelon of your company and say, “Here is the math, here’s why we think doing data center modernization is worth the investment.”

What is there about creating that data lake, the intellectual property, and the insights that help with data center economics? What’s the total cost of ownership (TCO) impact? How do you know when you’re doing this right, in terms of dollars and cents?

Uptime is money

Lalla: It’s difficult to generalize too much but let me give you some metrics we care about. Stuff is going to break, but if we know when it’s going to break -- or even if it does break -- we can understand exactly what happened. Then we can have a much higher first-time fix rate. What does that mean? That means I don’t have to come out twice, I don’t have to take the system out of commission more than once, and we can have better uptime. So that’s one.

Number two, by getting the data we can understand the network’s time to repair -- how long it takes us from when we get on-site to when we can fix something. Certainly it’s better if you fix it the first time, and it’s also better if you know exactly what you need when you’re there, so you can perform the service exactly the way it needs to be done. Then you can get in and out with minimal disruption.

A third one that’s important -- and one that I think will grow in importance -- is that we’re beginning to measure what we call service avoidance. The way we measure service avoidance is we call up a customer and say, “Hey, you know, based on all this information, based on these predictions, based on what we see from your network or your systems, we think these four things need to be addressed in the next 30 days. If not, our data tells us that we will be coming out there to fix something that has broken, as opposed to fixing it before it breaks.” So service avoidance, or service simplification, is another area that we’re looking at.
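The first two metrics above are straightforward to compute once dispatch data is captured. A minimal sketch, with invented field names and sample records:

```python
# Illustrative sketch (record layout invented): computing first-time
# fix rate and mean time to repair from a list of service dispatches.
from statistics import mean

dispatches = [
    # (site, visits_needed, hours_on_site_to_repair)
    ("dc-east", 1, 2.0),
    ("dc-east", 2, 5.5),   # needed a repeat visit -> not a first-time fix
    ("dc-west", 1, 1.5),
    ("dc-west", 1, 3.0),
]

# Share of dispatches resolved in a single visit.
first_time_fix_rate = sum(1 for _site, visits, _h in dispatches if visits == 1) / len(dispatches)

# Average hours on-site per repair.
mean_time_to_repair = mean(h for _site, _visits, h in dispatches)

print(f"first-time fix rate: {first_time_fix_rate:.0%}")    # 75%
print(f"mean time to repair: {mean_time_to_repair:.1f} h")  # 3.0 h
```

Service avoidance is the harder metric, since it counts truck rolls that never happened; measuring it requires the predictive data lake discussed earlier rather than dispatch logs alone.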

There are many more -- I mean, meeting service level agreements (SLAs), uptime, and all of those -- but when it comes to the tactical benefits of having smarter products, of being more connected, liberating data, and consuming that data and using it to make better decisions as a service -- those are the things that customers should expect differently.

Gardner: And in order to enjoy those economic benefits through the Vertiv approach and through data center-as-a-service, does this scale down and up? It certainly makes sense for the larger data center installations, but what about a small- to medium-sized business (SMB)? What about a remote office, or a closet and a couple of racks? Does that make sense, too? Do the economic and the productivity benefits scale down as well scale up?

Lalla: Actually, when we look at our data, the customers who don’t have the expertise or skill set to manage and monitor their single-phase, small three-phase, or Liebert® CRV [cooling] units -- those are the customers who really appreciate what we can do to help them. It doesn’t mean that customers farther up the stack don’t appreciate it. They may be more of a self-service-oriented customer, so what they appreciate isn’t the fact that they can do some of the services themselves; what they are increasingly interested in is how we’re using data in our data lake to better predict things that they can’t predict by just looking at their own stuff.

https://www.vertiv.com/
So, the value shifts depending on where you are in the stack of complexity, maturity, and competency. It also varies based on hyperscale, colocation, enterprise, small enterprise, and point-of-sale. There are a number of variables so that’s why it’s difficult to generalize. But this is why the themes of productivity, smarter products, edge ecosystems, and data liberation are common across all those segments. How they apply the value that’s extracted in each segment can be slightly different.

Gardner: Suffice it to say data center-as-a-service is highly customizable to whatever organization you are and wherever you are on that value chain.

Lalla: That’s absolutely right. Not everybody needs everything. Self-service is on one side and as-a-service is on the other. But it’s not a binary conversation.

Customers who want to do most of the stuff themselves with technology, they may need only a little information or help from Vertiv. Customers who want most of their stuff to be managed by us -- whether it’s storage systems or large systems -- we have the capability of providing that as well. This is a continuum, not an either-or.

Gardner: Steve, before we close out, let’s take a look to the future. As you build data lakes and get more data, machine learning (ML) and artificial intelligence (AI) are right around the corner. They allow you to have better prediction capabilities, do things that you just simply couldn’t have ever done in the past.

So what happens as these products get smarter, as we are collecting and analyzing that data with more powerful tools? What do you expect in the next several years when it comes to the smarter data center-as-a-service?

Circle of knowledge gets smart 

Lalla: We are in the early stages, but it’s a great question, Dana. There are two outcomes that will benefit all of us. One, that data, with the right algorithms and analysis, is going to allow us to build products that are increasingly smarter.

There is a circle of knowledge. Products produce information that goes into the data lake; we run the right algorithms, look for the right pieces of information, feed that back into our products, and continually evolve the capability of our products as time goes on. Those products will break less, need less service, and be more reliable. We should just expect that, just as you have seen in other industries. So that’s number one.

Number two, my hope and belief is that we move from a break/fix environment, where we wait for something to show up on a screen as an alarm or an alert, to one that is highly predictive and just-in-time.

As an industry -- and certainly at Vertiv -- first-time fix, service avoidance, and time to repair are all going to get much better, which means one simple thing for our customers. They are going to have more efficient and well-tuned data centers. They are going to be able to operate with higher rates of uptime. All of those things are going to result in goodness for them -- and for us.

Gardner: I’m afraid we’ll have to leave it there. We have been exploring how automation, self-healing, and increasingly intelligent data center designs are delivering what amounts to data centers-as-a-service. And we’ve learned how modern data center strategies will extend to the computing edge and beyond.

So please join me in thanking our guest, Steve Lalla, Executive Vice-President of Global Services at Vertiv. Thank you so much, Steve.


Lalla: Thanks, Dana.

Gardner: And a big thank you as well to our audience for joining us for this sponsored BriefingsDirect data center strategies interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Vertiv-sponsored discussions.

Thanks again for listening. Please pass this along to your community and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Vertiv.

A discussion on how intelligent data center designs and components are delivering what amounts to data centers-as-a-service to SMBs, enterprises, and public sector agencies. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.

You may also be interested in: