Dana Gardner: Hello,
and welcome to the next edition of the BriefingsDirect
Voice of the Customer podcast series. I’m Dana Gardner, Principal
Analyst at Interarbor Solutions,
your host and moderator for this ongoing discussion on digital transformation success
stories.
Our next intelligent storage innovation
discussion explores how Norway-based Intility
sought and found the cutting edge of intelligent
storage. Stay with us now as we learn how this leading managed platform services
provider improved uptime and reduced complexity for its end users.
To hear more about the latest
in intelligent storage strategies that lead to better business outcomes, please
join me in welcoming Knut Erik
Raanæs, Chief Infrastructure Officer at Intility in Oslo, Norway. Welcome,
Knut.
Knut Erik Raanæs: Thank
you, Dana. Thanks for having me.
Gardner: Knut,
what trends and business requirements have been driving your need for Intility
to be an early adopter of intelligent
storage technology?
Raanæs: For
us, it is important to have good storage systems that are easy to operate to
lower our management costs. At the same time, they give great uptime for our customers.
Gardner: You
are dealing not only with quality of service requirements; you also have very
rapid growth. How does intelligent storage help you manage such rapid growth?
Raanæs: By
easily having performance trends shown, we can spot when we are running full.
Then we can react before we run out of capacity.
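Raanæs doesn’t go into the mechanics here, but the idea of spotting “running full” ahead of time can be made concrete with a simple trend projection. Below is a minimal, illustrative Python sketch that fits a line to daily used-capacity samples and extrapolates days until exhaustion; the function name and figures are invented for illustration, not Intility’s actual tooling.

```python
# Minimal sketch: project "days until full" from a storage usage history.
# All names and numbers are illustrative, not Intility's actual tooling.
import numpy as np

def days_until_full(used_tb_history, capacity_tb, sample_interval_days=1.0):
    """Fit a linear trend to used-capacity samples and extrapolate to the
    point where the array would run out of space."""
    days = np.arange(len(used_tb_history)) * sample_interval_days
    slope, _intercept = np.polyfit(days, used_tb_history, 1)  # TB per day
    if slope <= 0:
        return float("inf")  # usage flat or shrinking; no exhaustion in sight
    return (capacity_tb - used_tb_history[-1]) / slope

# Example: a 500 TB array whose usage grew from 380 to 410 TB over 30 days.
history = np.linspace(380, 410, 31)
print(f"~{days_until_full(history, 500):.0f} days of headroom left")  # ~90 days
```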
Gardner: As a managed cloud service
provider, it’s important for you to have strict service level agreements (SLAs)
met. Why are the requirements of cloud services particularly important when it
comes to the quality of storage services?
Raanæs: It’s
very important to have good quality of service separation because we have lots
of different kinds of customers. We don’t want to have the noisy neighbor
problem where one customer affects another customer -- or even the virtual
machine (VM) of one customer affects another VM. The applications should work
independently of each other.
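The transcript doesn’t say how the arrays enforce that separation, but per-tenant I/O caps are the usual mechanism, and a token bucket is the classic way to implement them. The toy sketch below only illustrates the principle that one tenant’s burst cannot consume another tenant’s I/O budget; the class name and limits are hypothetical, and real arrays enforce QoS in firmware.

```python
# Toy illustration of per-tenant QoS separation using a token bucket.
import time

class IopsLimiter:
    def __init__(self, iops_limit):
        self.rate = iops_limit        # tokens (I/Os) replenished per second
        self.tokens = iops_limit      # start with a full one-second burst budget
        self.last = time.monotonic()

    def try_io(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at one second's worth.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True               # I/O admitted within this tenant's budget
        return False                  # over budget; the array would queue or delay it

# Each customer volume gets its own bucket, so a burst on one
# cannot starve the other.
tenants = {"customer_a": IopsLimiter(5000), "customer_b": IopsLimiter(2000)}
print(tenants["customer_a"].try_io())  # True: admitted
```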
Gardner: Tell us
about Intility, your size, scope, how long you have been around, and some of
the major services you provide.
Raanæs:
Intility was founded in 2000. We have always been focused on being a managed
cloud service provider. From the start, there has been a central platform of
shared services, where we onboarded customers and they shared email systems and
Microsoft Active Directory, along with all the application backup systems.
Over the last few years, the public
cloud has made our customers more open to cloud solutions in general, and to not
having servers in the local on-premises room at the office. We have now grown
to more than 35,000 users, spread over 2,000 locations across 43 countries. We
have 11 shared services datacenters, and we also have customers with edge
location deployments due to high latency or unstable Internet connections. They
need to have the data close to them.
Gardner: What is
required when it comes to solving those edge storage needs?
Raanæs: Those
customers often want inexpensive solutions. So we have to look at different
solutions and pick the one that gives the best stability but that also doesn’t
cost too much. We also need easy remote management of the solution, without
being physically present.
Gardner: At Intility,
even though you’re providing infrastructure as a service (IaaS), you are also providing
a digital transformation benefit. You’re helping your customers mature and better
manage their complexity as well as difficulty in finding skills. How does
intelligent IaaS translate into digital transformation?
Raanæs: When
we meet with potential customers, we focus on taking away concerns about infrastructure.
They are just going to leave that part to us. The IT people can then just move
up in [creating value] and focus on digitalizing the business for their customers.
Gardner: Of
course, cloud-based services require overcoming challenges with security, integration,
user access management, and single sign on. How are those higher-level services
impacted by the need for intelligent storage?
Smart storage security
Raanæs: With
intelligent storage, we can focus on having our security operations
center (SOC) monitor and respond the instant they see something on our platforms.
We can keep a keen eye on our storage systems to make sure that nothing unusual happens
on the storage, because that can be an early signal of something larger happening.
Gardner: Please
describe the journey you have been on when it comes to storage. What systems
you have been using? Why have intelligence, insights, and analysis capabilities
been part of your adoption?
Raanæs: We
started back in 2013 with HPE 3PAR arrays.
Before that we used IBM storage. We had multiple single-Redundant Array of
Inexpensive Disks (RAID) sets and had to manage hotspots ourselves; even moving
one VM meant trying to balance it out manually.
In 2013, when we went with the
first 3PAR array, we had huge benefits. That 3PAR array used less space and at the
same time we didn’t have to manage or even out the hotspots. 3PAR and its
active controllers were a great plus for us for many years.
But about one-and-a-half years
ago, we started using HPE Nimble arrays, primarily due to the needs of VMware vCenter and quality of
service requirements. Also, with the Nimble arrays, the InfoSight technology
was quite nice.
It’s
been quite useful. We had some systems that required us to use other third-party
applications to give an expansive view of the performance of the environment. But
those applications were quite expensive and had functionality that we really
didn’t need. So at first we pulled data from the vCenter database and visualized
the data. That was a huge start for us. But when InfoSight came along later it gave
us even more information about the environment.
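As a rough illustration of that “pull the data and visualize it” approach, here is a hypothetical Python sketch. The database file, table, and column names are invented for illustration; the real vCenter schema is version-specific and is not described in the conversation.

```python
# Sketch of the "pull stats from a database and visualize" approach described
# above. Table and column names are invented; the real vCenter schema differs.
import sqlite3
import matplotlib.pyplot as plt

conn = sqlite3.connect("perf_stats.db")  # stand-in for an exported stats DB
rows = conn.execute(
    """SELECT sample_time, avg_latency_ms
       FROM vm_disk_stats
       WHERE vm_name = ?
       ORDER BY sample_time""",
    ("app-server-01",),  # hypothetical VM name
).fetchall()

times, latencies = zip(*rows)
plt.plot(times, latencies)
plt.xlabel("Sample time")
plt.ylabel("Average disk latency (ms)")
plt.title("Storage latency trend for app-server-01")
plt.show()
```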
Gardner: I
understand you are now also a beta customer for HPE Primera storage.
Tell us about your experience with Primera. How does that move the needle
forward for you?
For 100 percent uptime
Raanæs: Yes,
we have been beta
testing Primera, and it has been quite interesting. It was easy to set up.
I think maybe 20 minutes from getting it into the rack and just clicking
through the setup. It was then operational and we could start provisioning
storage to the whole system.
And with Primera, HPE is going
in with 100 percent uptime guarantee. Of course, I still expect to deal with
some rare incidents or outages, but it’s nice to see a company that’s willing
to put their money where their mouth is, and say, “Okay, if there is any
downtime or an outage happens, we are going to give you something back for it.”
Gardner: Do
you expect to put HPE Primera into production soon? How would you use it first?
Raanæs: So we
are currently waiting for our next software upgrade for HPE Primera. Then we
are going to look at putting it into production. The use case is going to
be general storage because we have so much more storage demand and need to try
to keep it consistent, to make it easier to manage.
Gardner: And
do you expect to be able to pass along these benefits of speed of deployment
and 100 percent uptime to your end users? How do you think this will improve
your ability to deliver SLAs and better business outcomes?
Raanæs: Yes, our
end users are going to be quite happy with 100 percent uptime. No one likes
downtime -- not us, not our customers. And HPE Primera’s speed of deployment
means that we have more time to manage other parts of the platform and to get
better service out to the customers.
Gardner: I
know it’s still early and you are still in the proof of concept stage, but how
about the economics? Do you expect that having such high levels of advanced
intelligence across storage will translate into your ability to do more for
less, and perhaps pass some of those savings on?
Raanæs: Yes,
I expect that’s going to be quite beneficial for us. Because we are based in
Norway, one of our largest expenses is for people. So, the more we can automate
by using the systems, the better. I am really looking forward to seeing this improve,
making the systems easier to manage and letting us analyze performance within a few
hours.
Gardner: On
that issue of management, have you been able to use HPE Primera to the degree
where you have been able to evaluate its ease of management? How beneficial is
that?
Work smarter, not harder
Raanæs: Yes,
the ease of management was quite nice. With Primera you can do the service
upgrade more easily. With 3PAR, we had to schedule an upgrade with the
upgrade team at HPE and wait a few weeks. Now we can just do the upgrade
ourselves.
And hardware replacements are
easier, too. We just get a clear PDF showing how to replace the parts,
which is also quite nice.
I also like that the separate service
processor from 3PAR is now integrated with Primera; it’s in with the
array. So, that’s one less thing to worry about managing.
Gardner: Knut,
as we look to the future, other technologies are evolving across the
infrastructure scene. When combined with something like HPE Primera, is there a
whole greater than the sum of the parts? How will you be able to use more
intelligence broadly and leverage more of this opportunity for simplicity and passing
that onto your end users?
Raanæs: I’m
hoping that more will come in the future. We are also looking at non-volatile memory express (NVMe).
That’s a caching solution and it’s ready to be built into HPE Primera, too. So
it will be quite interesting to see what the future brings there.
Gardner: I’m
afraid we will have to leave it there. We have been discussing how Norway-based
Intility sought and found the cutting edge of intelligent storage. And we have
learned how this leading managed platform services provider improved uptime and
reduced complexity for its end users.
So please join me now in
thanking our guest, Knut Erik Raanæs, Chief Infrastructure Officer at Intility
in Oslo. Thank you so much, Knut.
Raanæs: Thanks
for having me.
Gardner: And a
big thank you as well to our audience for joining us for this BriefingsDirect
Voice of the Customer digital transformation success story discussion. I’m Dana
Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing
series of Hewlett Packard Enterprise-sponsored interviews.
Thanks again for listening.
Please pass this on to your community, and do come back next time.
A discussion on how Norway-based Intility sought and found the
cutting edge of intelligent storage. Copyright Interarbor Solutions, LLC,
2005-2020. All rights reserved.
A discussion with two leading IT and critical infrastructure
executives on how the state of data centers in
2020 demands better speed, agility, and efficiency from IT resources wherever
they reside.
Dana Gardner: Hello,
and welcome to the next edition of the BriefingsDirect
podcast series. I’m Dana
Gardner, Principal Analyst at Interarbor
Solutions, your host and moderator for this ongoing discussion on the
latest insights into data center strategies.
As 2020 ushers in a new decade,
the forces shaping data center decisions are extending compute resources to new places. With the challenging goals of speed, agility, and efficiency,
enterprises and service providers alike will be seeking new balance between the
need for low latency and optimal utilization of workload placement.
Hybrid models will therefore
include more distributed, confined, and modular data centers at or near the
edge.
These are but a few of the
top-line predictions on the future state of modern data center design. Stay
with us as we examine, with two leading IT and critical infrastructure
executives, how these data center variations nonetheless must also interoperate
seamlessly from core to cloud to edge.
Here to help us learn more
about the state of data centers in 2020 is Peter Panfil,
Vice President of Global Power at Vertiv.
Welcome, Peter.
Peter Panfil: How are
you, Dana?
Gardner: I’m
doing great. We’re also here with Steve Madara, Vice
President of Global Thermal at Vertiv. Welcome, Steve.
Steve Madara: Thank
you, Dana.
Gardner: The
world is rapidly changing in 2020. Organizations are moving past the debate
around hybrid deployments, from on-premises to public clouds. Why do we need to
also think about IT architectures and hybrid computing differently, Peter?
Panfil: We
noticed a trend at Vertiv in our customer base. That trend is toward a new
generation of data centers. We have been living with distributed IT, client-server
data centers moving to cloud, either a public cloud or a private cloud.
But what we are seeing is the
evolution of an edge-to-core, near-real-time data center generation. And it’s
being driven by devices everywhere, the “connected-all-the-time” model that all
of us seem to be going to.
And so, when you are in a near-real-time
world, you have to have infrastructure that supports your near-real-time
applications. And that is what the technology folks are facing. I refer to it as
a pack of dogs chasing them -- the amount of data that’s being generated, the
applications running remotely, and the demand for availability, low latency, and
driving cost down as much as you possibly can. This is what’s changing how they
approach their critical infrastructure space.
Gardner: And
so, a new equilibrium is emerging. How is this different from the past?
Madara: If we
go back 20 years, everything was centralized at enterprise data centers. Then
we decided to move to decentralized, and then back to centralized. We saw a
move to colocation as people decided that’s where they could get lower cost to
run their apps. And then things went to the cloud, as Peter said earlier.
And now, we have a huge number
of devices connected locally. Cisco says that by late 2020 there will be 23
billion connected devices, and over half of those are going to be machine-to-machine
communications, which, as Peter mentioned earlier, the latency is going to be
very, very critical.
An interesting read is Michael
Lewis’s book Flash Boys
about the arbitrage that’s taking place with the low latency that you have in
stock market trading. I think we are going to see more of that moving to the
edge. The edge is more like a smart rack or smart row deployment in an existing
facility. It’s going to be multi-tenant, because it’s going to be deployed throughout
large cities. There could be 20 or 30 of these edge data center sites hosting different
applications for customers.
This move to the edge is also
going to provide IT resources in a lot of underserved markets that don’t yet have
pervasive compute, especially in emerging countries.
Gardner: Why is
speed so important? We have been talking about this now for years, but it seems
like the need for speed to market and speed to value continues to ramp up. What’s
driving that?
Panfil: There
is more than one kind of speed. There is speed of response of the application --
something that all of us demand. I
have to have low latency in the transactions I am performing with my data or with
my applications. So there is the speed of the actual data being transmitted.
There is also speed of
deployment. When Steve talked earlier about centralized cloud deployments in these
core data centers, your data might be going over a significant distance,
hopping along the way. Well, if you can’t live with that latency that gets inserted,
then you have to take the IT application and put it closer to the source and
consumer of the data. So there is a speed of deployment, from core to edge, that
happens.
And the third type of speed is
you have to have low first cost, high asset utilization, and rapid scalability.
So that’s a speed of infrastructure adaptation to the demands of the IT
applications.
So when we mean speed,
I often say it’s speed, speed, and speed. First, it’s the data speed. Once I have
data speed, how did I achieve that? I did it by deploying fast, at the scale
needed for the applications, and lastly at a cost and reliability that makes it
tolerable for the business.
Gardner: So I
guess it’s speed-cubed, right?
Panfil: At
least, speed-cubed. Steve, if we had a nickel for every time one of our
customers said “speed,” we wouldn’t have to work anymore. They are consumed
with the different speeds that they have to deal with -- and it’s really the demands
of their customers.
Gardner: Steve, Vertiv predicts that “hybrid
architectures will go mainstream.” Why did you identify that, and what do you
mean?
The future goes hybrid
Madara: If we
look at the history of going from centralized to decentralized, and going to
colocation and cloud applications, it shows the ongoing evolution of Internet
of Things (IoT) sensors, 5G networks, smart cities, autonomous cars, and how more
and more of that data is generated and will need to be processed locally. A lot
of that is from machine-to-machine applications.
So when we now talk about hybrid,
we have to get very, very close to the source, as far as the processing is
concerned. That’s going to be a large-scale evolution that’s going to drive the
need for hybrid applications. There is going to be processing at the edge as
well as centralized applications -- whether it’s in a cloud or hosted in colocation-based
applications.
Panfil: Steve,
you and I both came up through the ranks. I remember when the data closet down
the hall was basically a communications matrix. Its intent was to get
communications from wherever we were to wherever our core data center was.
Well, the cloud is not going
away. Number two, enterprise IT is not going away. What the enterprise is saying
is, “Okay, I am going to take my secret sauce and I am going to put it in an
edge data center. I am going to put the compute power as close to my consumer of
that data and that application as I possibly can. And then I am going to figure
out where the rest of it’s going to go.”
“If I can live with the
latency I get out of a core data center, I am going to stay in the cloud. If I
can’t, I might even break up my enterprise data center into small or micro data
centers that give me even better responses.”
Dana, it’s interesting, there
was a recent wholesale market summary published that said the difference
between the smaller and the larger wholesale deals widened. So what that says
is the large wholesale deals are getting bigger, the small wholesale deals are
getting smaller, and that the enterprise-based demand, in deployments under 600
kilowatts, is focused on low latency and multi-cloud access.
That tells us that our
customers, the users of that critical space, are trying to place their IT
appliances as close as they can to their customers, eliminating the latency,
responding with speed, and then figuring out how to mesh that edge deployment
with their core strategy.
Gardner: Our
second trend gets back to the speed-cubed notion. I have heard people describe
this as a new arms race, because while it might be difficult to
differentiate yourself when everyone is using the same public cloud services,
you can really differentiate yourself on how well you can conduct yourself at speed.
What kinds of capabilities
across your technologies will make differentiation around speed work to an advantage
as a company?
The need for speed
Panfil: Well,
I was with an analyst recently, and I said the new reality is not that the big
will eat the small -- it’s that the fast will eat the slow. And any advantage
that you can get in speed of applications, speed of deployment, deploying those
IT assets -- or morphing the data center infrastructure or critical space
infrastructure -- helps improve capital efficiency. What many customers tell us
is that they have to shorten the period of time between deciding to spend money
on IT assets and the time that those assets start creating revenue.
They want help being creative in
lowering their first-cost, in increasing asset utilization, and in maintaining reliability.
If, holy cow, my application goes down, I am out of business. And then they
want to figure out how to manage things like supply chains and forecasting,
which is difficult to do in this market, and to help them be as responsive as they
can to their customers.
Madara: Forecasting
and understanding the new applications -- whether it’s artificial intelligence
(AI) or 5G -- the CIOs need to decide where they need to put those applications,
whether they should be in the cloud or at the edge. Technology is changing so
fast that nobody can predict far out into the future where I will need
that capacity and what type of capacity I will need.
So, it comes down to being
able to put that capacity in the place where I need it, right when I need it, and
not too far in advance. Again, I don’t want to spend the capital, because I may
put it in the wrong place. So it’s got to be about tying the demand with the
supply, and that’s what’s key as far as the infrastructure.
And the other element I see is
technology is changing fast, even on the infrastructure side. For our
equipment, we are constantly making improvements every day, making it more
efficient, lower cost, and with more capability. And if you put capacity in
today that you don’t need for a year or two down the road, you are not taking
advantage of the latest, greatest technology. So really it’s coupling the
demand to the actual supply of the infrastructure -- and that’s what’s key.
Another consideration is that many
of these large companies, especially in the colocation market, have their
financial structure as a real estate investment
trust (REIT). As a result, they need to tie revenue with expenses tighter
and tighter, along with capital spending.
Panfil: That’s
a good point, Steve. We redesigned our entire large power portfolio at Vertiv specifically
to be able to address this demand.
In previous generations, for
example, the uninterruptible power supply (UPS) was built as a complete UPS. The
new generation is built as a power converter, plus an I/O section, plus an
interface section that can be rapidly configured to the customer, or, in some
cases, put into a vendor-managed inventory program. This approach allows us to respond
to the market and customers quicker.
We were forced to change our
business model in such a way that we can respond in real time to these kinds of
capacity-demand changes.
Madara: And
to add to that, we have to put together more and more modules and solutions
where we are bundling the equipment to deliver it faster, so that you don’t have
to do testing on site or assembly on site. Again, we are putting together solutions
that help the end-user address the speed of the construction of the
infrastructure.
I also think that this ties into
the relationship that the person who owns the infrastructure has with their
supplier base. Those relationships have to build in, as Peter mentioned
earlier, the ability to do stocking of inventory, of having parts available on-site
to go fast.
Gardner: In
summary so far, we have this need for speed across multiple dimensions. We are
looking at more hybrid architectures, up and down the scale -- from edge to
core, on-premises to the cloud. And we are also looking at crunching more data
and making real-time analytics part of that speed advantage. That means being
able to have intelligence brought to bear on our business decisions and making that
as fast as possible.
So what’s going on now with the
analytics efficiency trend? Even if average rack density remains static due to
a lack of space, how will such IT developments as high-performance computing (HPC)
help make this analysis equation work to the business outcome’s advantage?
High-performance, high-density pods
Madara: The
development of AI applications, machine learning (ML), and what could be called
deep learning are evolving. Many applications are requiring these HPC systems.
We see this in the areas of defense, gaming, the banking industry, and people
doing advanced analytics and tying it to a lot of the sensor data we talked
about for manufacturing.
It’s not yet widespread, it’s not
across the whole enterprise or the entire data center, and these are often unique
applications. What I hear in large data centers, especially from the banks, is
that they will need to put these AI applications up on 30-, 40-, 50- or 60-kW
racks -- but they only have three or four of these racks in the whole data
center.
The end-user will need to
decide how to tune or adjust facilities to accommodate these small but growing pods
of high-density compute. And if they are in their own facility, if it’s an
enterprise that has its own data center, they will need to decide how they are
going to facilitize for that type of equipment.
A lot of the colocation hosting
facilities have customers saying, “Hey, in the future I am going to be bringing in
a couple of racks that are very high density.” A lot of these multi-tenant data
centers are saying, “Oh, how do I provision for these, because my data center
was laid out for an average of maybe 8 kW per rack? How do I manage that,
especially in a data center that didn’t previously have chilled water to
provide liquid to the rack?”
We are now seeing a need to
provide chilled water cooling that would go to a rear door heat exchanger on
the back of the rack. It could be chilled water that would go to a rack for
chip cooling applications. And again, it’s not the whole data center; it’s a
small segment of the data center. But it raises questions of how I do that
without overkill on the infrastructure needed.
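To see why even a few high-density racks change the cooling question, a back-of-the-envelope calculation helps. The sketch below applies the standard heat-transfer relation Q = m_dot * c_p * dT to estimate the chilled-water flow a rack needs; the 50 kW load and 6 K water temperature rise are illustrative assumptions, not figures from the discussion.

```python
# Back-of-the-envelope sizing for rear-door or direct liquid cooling:
# the chilled-water flow needed to carry away a rack's heat load,
# from Q = m_dot * c_p * dT. Numbers are illustrative.
CP_WATER = 4.186   # kJ/(kg*K), specific heat of water
DENSITY = 1.0      # kg/L, close enough for chilled water

def chilled_water_flow_lpm(heat_load_kw, delta_t_k):
    """Liters per minute of water to absorb heat_load_kw at a delta_t_k rise."""
    kg_per_s = heat_load_kw / (CP_WATER * delta_t_k)
    return kg_per_s / DENSITY * 60.0

print(f"{chilled_water_flow_lpm(50, 6):.0f} L/min for a 50 kW rack at a 6 K rise")
# -> roughly 120 L/min (about 32 US gpm), versus ~19 L/min for an 8 kW rack
```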
Gardner: Steve,
do you expect those small pods of HPC in the data center to make their way out to
the edge when people do more data crunching for the low-latency requirements, where
you can’t move the data to a data center? Do you expect to have this trend grow
more distributed?
Madara: Yes, I
expect this will be for more than the enterprise data center and cloud data
centers. I think you are going to see analytics applications developed that are
going to be out at the edge because of the requirements for latency.
When you think about the
autonomous car, none of us know what's going to be required there for that
high-performance processing, but I would expect there is going to be a need for
that down at the edge.
Gardner: Peter,
looking at the power side of things when we look at the batteries that help UPS
and systems remain mission-critical regardless of external factors, what’s
going on with battery technology? How will we be using batteries differently in
the modern data center?
Battery-powered savings
Panfil: That’s
a great question. Battery technology has been evolving at an incredibly fast
rate. It’s being driven by electric vehicles. That growth is bringing to
the market batteries that have a size and weight advantage. You can’t put a big,
heavy pack of batteries in a car and hope to have it perform well.
Our sales leadership sent
me the most recent total cost of ownership (TCO) comparison of thin plate pure lead
(TPPL) and lithium-ion batteries (LIBs) versus traditional valve-regulated lead-acid
(VRLA) batteries, and the TCO is a winner for the LIBs and the TPPL batteries. In some
cases, over a 10-year period, the TCO is a factor of two lower for LIB and TPPL.
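Panfil doesn’t share the underlying figures, but the shape of such a 10-year comparison is easy to sketch. Every number below is hypothetical; only the structure -- purchase cost plus mid-life replacements plus upkeep over a fixed horizon -- reflects the argument that shorter-lived VRLA strings can cost twice as much over a decade.

```python
# Shape of a 10-year battery TCO comparison. All figures are hypothetical.
def tco_10yr(capex, service_life_yrs, annual_upkeep):
    # Number of full replacements needed to cover a 10-year horizon.
    replacements = -(-10 // service_life_yrs) - 1  # ceil(10 / life) - 1
    return capex * (1 + replacements) + annual_upkeep * 10

vrla = tco_10yr(capex=100_000, service_life_yrs=4, annual_upkeep=8_000)   # 2 replacements
lib  = tco_10yr(capex=180_000, service_life_yrs=10, annual_upkeep=2_000)  # no replacement
print(f"VRLA: ${vrla:,}   LIB: ${lib:,}   ratio: {vrla / lib:.1f}x")
# -> VRLA: $380,000   LIB: $200,000   ratio: 1.9x
```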
Whereas the cloud generation of
data centers was all about lowest first cost, in this edge-to-core mentality of
data centers, it’s about TCO. There are other levers that they can start to
play with, too.
So, for example, they have
life cycle and operating temperature variables. That used to be a real limitation.
Nobody in the data center wanted their systems to go on batteries. They tried
everything they could to not have their systems go on the battery because of
the potential for shortening the life of their batteries or causing an outage.
Today we are developing IT
systems infrastructure that takes advantage of not only LIBs, but also pure
lead batteries that can increase the number of [discharge/recharge] cycles. Once
you increase the number of cycles, you can think about deploying smart power
configurations. That means using batteries not only in the critical
infrastructure for a very short period of time when the power grid utility
fails, but also to use them in the critical infrastructure to help offset cost.
If I can reduce utility use at
peak demand periods, for example, or I can reduce stress on the grid at
specified times, then batteries are not only a reliability play – they are also
a revenue-offset play. And so, we’re seeing more folks talking to us about how
they can apply these new energy storage technologies to change the way they
think about using their critical space.
Also, folks used to think that
the longer the battery time, the better off they were because it gave more time
to react to issues. Now, folks know what they are doing; they are going with
runtimes that are tuned to their operations team’s capabilities. So, if my
operations team can do a hot swap over an IT application -- either to a backup
critical space application or to a redundant data center -- then all of a
sudden, I don’t need 5 to 12 minutes of runtime, I just need the bridge time. I
might only need 60 to 120 seconds.
Now, if I can have these
battery times tuned to the operations’ capabilities -- and I can use the
batteries more often or in higher temperature applications -- then I can really
start to impact my TCO and make it very, very cost-effective.
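The runtime-tuning argument reduces to simple energy arithmetic: the usable energy a battery string must hold is just load power times ride-through time. A small sketch with illustrative numbers shows how much smaller a two-minute bridge is than a traditional 12-minute runtime.

```python
# Bridge-time sizing: usable battery energy = load power x ride-through time.
# The 1 MW load is an illustrative assumption.
def usable_kwh(load_kw, runtime_s):
    return load_kw * runtime_s / 3600.0

load_kw = 1000  # a 1 MW critical load
for runtime_s in (120, 720):  # a 2-minute bridge vs. a traditional 12 minutes
    print(f"{runtime_s:>4} s runtime -> {usable_kwh(load_kw, runtime_s):.0f} kWh usable")
# -> 120 s needs ~33 kWh; 720 s needs 200 kWh -- a 6x difference in battery energy.
```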
Gardner: It’s
interesting; there is almost a power analog to hybrid computing. We can either
go to the cloud or the grid, or we can go to on-premises or the battery. Then
we can start to mix and match intelligently. That’s really exciting. How does lessening
dependence on the grid impact issues such as sustainability and conserving
energy?
Sustainability surges forward
Panfil: We are
having such conversations with our key accounts virtually every day. What they
are saying is, “I am eventually not going to make smoke and steam. I want to
limit the number of times my system goes on a generator. So, I might put in more
batteries, more LIBs or TPPL batteries, in certain applications because if my TCO
is half the amount of the old way, I could potentially put in twice as much, and
have the same cost basis and get that economic benefit.”
And so from a sustainability
perspective, they are saying, “Okay, I might need at some point in the useful
life of that critical space to not draw what I think I need to draw from my
utility. I can limit the amount of power I draw from that utility.”
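Peak shaving of this kind comes down to a simple dispatch rule: discharge the battery whenever site load would exceed the draw you want to present to the utility. Here is a minimal, illustrative sketch; the load profile, grid cap, and battery size are hypothetical, and a real controller would also manage recharging, efficiency, and battery wear.

```python
# Minimal peak-shaving dispatch, one 15-minute interval at a time.
# All values are hypothetical.
def dispatch(load_kw, grid_cap_kw, soc_kwh, step_h=0.25):
    """Return (grid_kw, battery_kw, new_soc_kwh) for one interval."""
    excess = load_kw - grid_cap_kw
    if excess > 0 and soc_kwh > 0:
        battery_kw = min(excess, soc_kwh / step_h)  # limited by stored energy
    else:
        battery_kw = 0.0
    return load_kw - battery_kw, battery_kw, soc_kwh - battery_kw * step_h

soc = 50.0  # kWh of energy reserved for shaving
for load in (800, 950, 1100, 1200, 900):  # 15-minute load samples, kW
    grid, batt, soc = dispatch(load, grid_cap_kw=1000, soc_kwh=soc)
    print(f"load {load} kW -> grid {grid:.0f} kW, battery {batt:.0f} kW, {soc:.1f} kWh left")
```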
This is not a criticism, I
love all of you out there in data center design, but most of them are designed
for peak usage. So what these changes allow them to do is to design more for
the norm of the requirements. That means they can put in less infrastructure,
with the potential to put in less battery. They have the potential to right-size
their generators; same thing on the cooling side, to right-size the cooling to
what they need and not for the extremes of what that data center is going to
see.
From a sustainability
perspective, we used to talk about the glass as half-full or half-empty. Now, we
say there is too much of a glass. Let’s right-size the glass itself, and then
all of the other things you have to do in support of that infrastructure are
reduced.
Madara: As we
look at the edge applications, many will not have backup generators. We will
have alternate energy sources, and we will probably be taking more hits to the
batteries. Is the LIB the better solution for that?
Panfil: Yes,
Steve, it sure is. We will see customers with an expectation of sustainability,
a path to an energy source that is not fossil fuel-based. That could be a renewable
energy source. We might not be able to deploy that today, but they can now deploy
what I call foundational technologies that allow them to take advantage of it.
If I can have a LIB, for example, that stores excess energy and allows me to
absorb energy when I’m creating more than I need -- then I can consume that
energy on the other side. It’s better for everybody.
Gardner: We are
entering an era where we have the agility to optimize utilization and reduce
our total costs. The thing is that it varies from region to region. There are
some areas where compliance is a top requirement. There are others where energy
issues are a top requirement because of cost.
What’s going on in terms of
global cross-pollination? Are we seeing different markets react to their power
and thermal needs in different ways? How can we learn from that?
Global differences, normalized
Madara: If
you look at the size of data centers around the world, the data centers in the
U.S. are generally much larger than in Europe. And what’s in Europe is much
larger than what we have in other developed countries. So, there are a couple
of things, as you mentioned, energy availability, cost of energy, the size of
the market and the users that it serves. We may be looking at more edge data
centers in very underserved markets that have been in underdeveloped countries.
So, you are going to see the
size of the data center and the technology used vary to better
fit the needs of the specific markets and applications. Across the globe, certain
regions will have different requirements with regard to security and
sustainability.
Even though we have these
potential differences, we can meet the end-user needs to right-size the IT
resources in that region. We are all more alike than we are different in many
respects. We all have needs for security and for efficiency; it
may just be to different degrees.
Panfil: There are
different regional agency requirements, different governmental regulations that
companies have to comply with. And so what we find, Dana, is that what our
customers are trying to do is normalize their designs. I won’t say they are
standardizing their design because standardization says I am going to deploy
exactly the same way everywhere in the world. I am a fan of Kit Kats, and Kit
Kats are not the same globally; they vary by region. The same is true for data
centers.
So, when you look at how the
customers are trying to deal with the regional and agency differences that they
have to live with, what they find themselves doing is trying to normalize their
designs as much as they possibly can globally, realizing that they might not
be able to use exactly the same power configuration or exactly the same thermal
configuration. But we also see pockets where different technologies are moving
to the forefront. For example, China has data centers running at high-voltage
DC, 240 volts DC, while we have always had 48-volt DC IT applications in the
Americas and in Europe. Customers are looking at three things -- speed, speed,
and speed.
And so when we look at the
application, for example, of DC, there used to be a debate: is it AC or DC?
Well, it’s not an “or,” it’s an “and.” Most of the customers we talk to, for
example, in Asia are deploying high-voltage DC and have some form of hybrid AC
plus DC deployment. They are doing it so that they can speed their applications
deployments.
In the Americas, the Open Compute Project
(OCP) deploys either 12 or 48 volts to the rack. I look at it very simply.
We have been seeing a move from 2N architecture to N+1 architecture in the
power world for a decade; this is nothing more than adopting the N+1 architecture
at the rack level versus the 2N architecture at the rack level.
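The utilization arithmetic behind that shift is simple: a 2N design can never be loaded past half of its installed capacity, while N+1 over n needed units allows n/(n+1). A quick illustrative check (the unit counts are examples):

```python
# Maximum safe utilization for redundant power architectures:
# needed units / (needed + spare units).
def max_utilization(needed_units, spare_units):
    return needed_units / (needed_units + spare_units)

print(f"2N  (1 needed, 1 spare): {max_utilization(1, 1):.0%}")  # 50%
print(f"N+1 (4 needed, 1 spare): {max_utilization(4, 1):.0%}")  # 80%
print(f"N+1 (9 needed, 1 spare): {max_utilization(9, 1):.0%}")  # 90%
```

This is the same 50 percent ceiling Panfil mentions later for customers who stay in the 2N world.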
And so what we see is when
folks are trying to, number one, increase the speed; number two, increase their
utilization; number three, lower their total cost, they are going to deploy
infrastructures that are most advantageous for either the IT appliances that
they are deploying or for the IT applications that they are running, and it’s
not the same for everybody, right Steve?
You and I have been around the
planet way too many times; you are a million miler, and so am I. It’s amazing how a
city might be completely different in a different time zone, but once you walk
into that data center, you see how very consistent they have gotten, even
though they have done it completely independently from anybody else.
Madara:
Correct!
Consistency lowers costs and risks
Gardner: A lot
of what we have talked about boils down to a need to preserve speed-to-value
while managing total cost of ownership. What is there about these multiple
trends that people can consider when it comes to getting the right balance, the
right equilibrium, between TCO and that all-important speed-to-value?
Madara: Everybody
strives to drive cost down. The more you can drive down the cost of the infrastructure,
the more you can do to develop more edge applications.
I think we are seeing costs being
driven down at a very fast rate. Yet we still have a lot of stranded
capacity out there in the marketplace. And people are making decisions to take
that down without impacting risk, but I think they can do it faster.
Peter mentioned standardization.
Standardization helps drive speed, whether it’s normalization or similarity. What
allows people to move fast is to repeat what they are doing instead of snowflake
data centers, where every new one is different.
Repeating allows you to build
a supply base ecosystem where everybody has the same goal, knows what to do,
and can be partners in driving out cost and in driving speed. Those are some of
the key elements as we go forward.
Gardner: Peter
when we look to that standardization, you also allow for more seamless
communication from core to cloud to edge. Why is that important, and how can we
better add intelligence and seamless communication among and between all these
different distributed data centers?
Panfil: When
we normalize designs globally, we take a look at the regional differences, sort
out what those differences have to be, and then put up a proof-of-concept
deployment. Out of that comes a consistent method of procedure.
When we talk about managing
the data center effectively and efficiently, first of all, you have to know
what you have. And second, you have to know what it’s doing. And so, we are
seeing more folks normalizing their designs and getting consistency. They can
then start looking at how much of their available capacity from a design
perspective they are actually using, both on a normal basis and on a peak basis,
and then they can determine how much of that they are willing to use.
We have some customers who are
very risk-averse. They stay in the 2N world, which is a 50 percent maximum
utilization. We applaud them for it because they are not going to miss a
transaction.
There are others who will say,
“I can live with the availability that an N+1 architecture gives me. I know I
am going to have to be prepared for more failures. I am going to have to figure
out how to mitigate those failures.”
So they are working constantly
at figuring out how to monitor what they have and figure out what the equipment
is doing, and how they can best optimize the performance. We talked earlier
about battery runtimes, for example. Sometimes they might get short or
sometimes they might be long.
As these companies get into
this step and repeat function, they are going to get consistency of their
methods of procedure. They’re going to get consistency of how their operations
teams run their physical infrastructure. They are going to think about running
their equipment in ways that are nontraditional today but will become the norm
in the next generation of data centers. And then they are going to look at us
and say, “Okay, now that I have normalized my design, can I use rapid
deployment configuration? Can I put it on a skid, in a container? Can I drop it
in place as the complete data center?”
Well, we build it one piece of
equipment at a time and stitch it all together. The question you asked
about monitoring is interesting, because we talked to a major company just
last month. Steve and I were visiting them at their site. And they said, “You
know what? We spend an awful lot of time figuring out how our building
management system and our data exchange happens at the site. Could Vertiv do
some of that in the factory? Could you configure our data acquisition systems?
Could you test them there in the factory? Could we know that when the stuff
shows up on site that it’s doing the things that it’s supposed to be doing
instead of us playing hunt and peck to figure out what the issues are?”
We said, “Of course.” So we
are adding that capability now into our factory testing environment. What we
see is a move up the evolutionary scale. Instead of buying separate boxes, we
are seeing them buying solutions -- and those solutions include both monitoring
and controls.
Steve didn’t even get a chance
to mention the industry-leading Vertiv
Liebert® iCOM™ control for thermal. These controls and monitoring systems
allow them to increase their utilization rates because they know what they have
and what it’s doing.
Gardner: It
certainly seems to me, with all that we have said today, that the data center
status quo just can’t stand. Change and improvement are inevitable. Let’s close
out with your thoughts on why people shouldn’t be standing still; why it’s just
not acceptable.
Innovation is inevitable
Madara: At
the end of the day, the IT world is changing rapidly every day. Whether in the
cloud or down at the edge, the IT world needs to adjust to those needs. They
need to be able to cut enough out of the cost structure. There is always a
demand to drive cost down.
If we don’t change with the
world around us, if we don’t meet the requirements of our customers, things
aren’t going to work out -- and somebody else is going to take it and go for it.
Panfil:
Remember, it’s not the big that eats the small, it’s the fast that eats the
slow.
Madara: Yes,
right.
Panfil: And
so, what I have been telling folks is, you’ve got to go. The technology is there.
The technology is there for you to cut your cost, improve your speed, and
increase utilization. Let’s do it. Otherwise, somebody else is going to do it
for you.
Gardner: I’m
afraid we’ll have to leave it there. We have been exploring the forces shaping
data center decisions and how that’s extending compute resources to new places
with the challenging goals of speed, agility, and efficiency.
And we have learned how
enterprises and service providers alike are seeking new balance between the
need for low latency and optimal utilization of workload placement. So please
join me in thanking our guests, Peter Panfil, Vice President of Global Power at
Vertiv. Thank you so much, Peter.
Panfil:
Thanks for having me. I appreciate it.
Gardner: And
we have also been joined by Steve Madara, Vice President of Global Thermal at
Vertiv. Thanks so much, Steve.
Madara: You’re
welcome, Dana.
Gardner: And a
big thank you as well to our audience for joining us for this sponsored
BriefingsDirect data centers strategies interview. I’m Dana Gardner, Principal
Analyst at Interarbor Solutions, your host for this ongoing series of Vertiv-sponsored
discussions.
Thanks again for listening.
Please pass this along to your community, and do come back next time.
A discussion with two leading IT and critical infrastructure
executives on how the state of data centers in
2020 demands better speed, agility, and efficiency from IT resources wherever
they reside. Copyright Interarbor Solutions, LLC, 2005-2020. All rights
reserved.