Friday, November 08, 2019

The Evolution of Data Center Infrastructure Has Now Ushered in The Era of Data Center-as-a-Service

https://www.vertiv.com/en-us/services-catalog/maintenance-services/remote-services/life-services/

A discussion on how intelligent data center designs and components are delivering what amounts to data centers-as-a-service to SMBs, enterprises, and public sector agencies.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Vertiv.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest insights into data center strategies.

There has never been a better time to build an efficient, protected, powerful, contained, and modular data center -- yet many enterprises and public sector agencies cling to aging, vulnerable, and chaotic legacy IT infrastructure.

Stay with us now as we examine how automation, self-healing, and increasingly intelligent data center designs and components are delivering what amounts to data centers-as-a-service.

Here to help us learn more about a modern data center strategy that extends to the computing edge -- and beyond -- is Steve Lalla, Executive Vice President of Global Services at Vertiv. Welcome, Steve.


Steve Lalla: Thank you, Dana.

Gardner: Steve, when we look at the evolution of data center infrastructure, monitoring, and management software and services, they have come a long way. What’s driving the need for change now? What’s making new technology more pressing and needed than ever?

Lalla: There are a number of trends taking place. The first is the products we are building and the capabilities of those products. They are getting smarter. They are getting more enabled. Moore’s Law continues. What we are able to do with our individual products is improving as we progress as an industry.

The other piece that’s very interesting is it’s not only how the individual products are improving, but how we connect those products together. The connective tissue of the ecosystem and how those products increasingly operate as a subsystem is helping us deliver differentiated capabilities and differentiated performance.

So, data center infrastructure products are becoming smarter and they are becoming more interconnected.

Interconnectivity across ecosystems 

The second piece that’s incredibly important is broader network connectivity -- whether it’s wide area connectivity or local area connectivity. Over time, all of these products need to be more connected, both inside and outside of the ecosystem. That connectivity is going to enable new services and new capabilities that don’t exist today. Connectivity is a second important element.

Third, data is exploding. As these products get smarter, work more holistically together, and are more connected, they provide manufacturers and customers more access to data. That data allows us to move from a break/fix type of environment into a predictive environment. It’s going to allow us to offer more just-in-time and proactive service versus reactive and timed-based services.

And when we look at the ecosystems themselves, we know that over time these centralized data centers -- whether they be enterprise data centers, colocation data centers, or cloud data centers -- are going to be more edge-based and module-based data centers.

And as that occurs, all the things we talked about -- smarter products, more connectivity, data and data enablement -- are going to be more important as those modular data centers become increasingly populated in a distributed way. To manage them, to service them, is going to be increasingly difficult and more important.

And one final cultural piece is happening. A lot of the folks who interact with these products and services will face what I call knowledge thinning. The highly trained professionals -- especially on the power side of our ecosystem -- are reaching retirement age, and there is high demand for their skills. As data center growth continues to be robust, that knowledge thinning needs to be offset by what I talked about earlier.

So there are a lot of really interesting trends under way right now that impact the industry and are things that we at Vertiv are looking to respond to.

Gardner: Steve, these things when they come together form, in my thinking, a whole greater than the sum of the parts. When you put this together -- the intelligence, efficiency, more automation, the culture of skills -- how does that lead to the notion of data center-as-a-service?

Lalla: As with all things, Dana, one size does not fit all. I’m always cautious about generalizing because our customer base is so diverse. But there is no question that in areas where customers would like us to be operating their products and their equipment instead of doing it themselves, data center-as-a-service reduces the challenges with knowledge thinning and reduces the issue of optimizing products. We have our eyes on all those products on their behalf.

And so, through the connectivity of the product data and the data lakes we are building, we are better at predicting what should be done. Increasingly, our customers can partner with us to deliver a better performing data center.

Gardner: It seems quite compelling. Modernizing data centers means a lot of return on investment (ROI), of doing more with less, and becoming more predictive about understanding requirements and then fulfilling them.

Why are people still stuck? What holds organizations back? I know it will vary from site to site, but why the inertia? Why don’t people run to improve their data centers seeing as they are so integral to every business? 

Adoption takes time

Lalla: Well, these are big, complex pieces of equipment. They are not the kind of equipment you decide to change every year. One of the key factors affecting the rate at which connectivity, technology, processing capability, and data liberation capability gets adopted is the speed at which customers are able to change out the equipment they currently have in their data centers.

Now, I think that we, as a manufacturer, have a responsibility to do what we can to improve those products over time and make new technology solutions backward compatible. That can be through updating communication cards, building adjunct solutions like we do with Liebert® ICOM™-S and gateways, and figuring out how to take equipment that is going to be there for 15 or 20 years and make it as productive and as modern as you can, given that it’s going to be there for so long.

So number one, the duration of product in the environment is certainly one of the headwinds, if you will.


Another is the concept of connectivity. And again, different customers have different comfort levels with connectivity inside and outside of the firewall. Clearly the more connected we can be with the equipment, the more we can update the equipment and assess its performance. Importantly, we can assess that performance against a big data lake of other products operating in an ecosystem. So, I think connectivity, and having the right solutions to provide for great connectivity, is important.

And there are cultural elements to our business in that, “Hey, if it works, why change it, right?” If it’s performing the way you need it to perform and it’s delivering on the power and cooling needs of the business, why make a change? Again, it’s our responsibility to work with our customers to help them best understand that when new technology gets added -- when new cards get added and when new assistants, I call them digital assistants, get added -- that technology will have a differential effect on the business.

So I think there is a bit of reality that gets in the way of that sometimes.

Gardner: I suppose it’s imperative for organizations like Vertiv to help organizations move over that hump to get to the higher-level solutions and overcome the obstacles because there are significant payoffs. It also sets them up to be much more able to adapt to the future when it comes to edge computing, which you mentioned, and also being a data-driven organization.

How is Vertiv differentiating itself in the industry? How does combining services and products amount to a solution approach that helps organizations modernize?

Three steps that make a difference

Lalla: I think we have a differentiated perspective on this. When we think about service, and we think about technology and product, we don’t think about them as separate. We think about them all together. My responsibility is to combine those software and service ecosystems into something more efficient that helps our customers have more uptime and becomes more predictive -- moving from break/fix to just-in-time types of services.

And the way we do that is through three steps. Number one, we have to continue to work closely with our product teams to determine, early in the product definition cycle, which products need to be interconnected into an as-a-service or a self-service ecosystem.

We spend quite a bit of time impacting the roadmaps and putting requirements into the product teams so that they have a better understanding of what, in fact, we can do once data and information gets liberated. A great strategy always starts with great product, and that’s core to our solution.

The next step is a clear understanding that some of our customers want to service equipment themselves. But many of our customers want us to do that for them, whether it’s physically servicing equipment or monitoring and managing the equipment remotely, such as with our LIFE™ management solution.

We are increasingly looking at that as a continuum. Where does self-service end, and where do delivered services begin? In the past, what we do from a self-service and a delivered-service perspective has been relatively distinct. But increasingly, you see those being blended together because customers want a seamless handover. When they discover something needs to be done, we at Vertiv can pick up from there and perform that service.

So the connective tissue between self-service and Vertiv-delivered service is something that we are increasing clarity on.

And then finally, we talked about this earlier, we are being very active at building a data lake that comes from all the ecosystems I just talked about. We have billions of rows of normalized data in our data lake to benefit our customers as we speak.

Gardner: Steve, when you service a data center at that solution-level through an ecosystem of players, it reminds me of when IT organizations started to manage their personal computers (PCs) remotely. They didn’t have to be on-site. You could bring the best minds and the best solutions to bear on a problem regardless of where the problem was -- and regardless of where the expertise was. Is that what we are seeing at the data center level?

Self-awareness remotely and in-person

Lalla: Let’s be super clear, to upgrade the software on an uninterruptible power supply (UPS) is a lot harder than to upgrade software on a PC. But the analogy of understanding what must be done in-person and what can be done remotely is a good one. And you are correct. Over years and years of improvement in the IT ecosystems, we went from a very much in-person type of experience, fixing PCs, to one where very much like mobile phones, they are self-aware and self-healing.

This is why I talked about the connectivity imperative earlier, because if they are not connected then they are not aware. And if they are not aware, they don’t know what they need to do. And so connectivity is a super important trend. It will allow us to do more things remotely versus always having to do things in-person, which will reduce the amount of interference we, as a provider of services, have on our customers. It will allow them to have better uptime, better ongoing performance, and even over time allow tuning of their equipment.

We are at the early stages of that journey. You could argue the mobile phone and PC guys are at the very late stages of their journey of automation; we are in the very early stages of ours. But the things we talked about earlier -- smarter products, connectivity, and data -- are all important factors influencing that.

Gardner: Another evolution in all of this is that there is more standardization, even at the data center level. We saw standardization as a necessary step at the server and storage level -- when things became too chaotic, too complex. We saw standardization as a result of virtualization as well. Is there a standardization taking place within the ecosystem and at that infrastructure foundation of data centers?

Standards and special sauce

Lalla: There has been a level of standardization in what I call the self-service layer, with protocols like BACnet, Modbus, and SNMP. Those at least allow a monitoring system to ingest information and data from a variety of diverse devices and minimally monitor how that equipment is performing.

I don’t disagree that there is an opportunity for even more standardization, because that will make that whole self-service, delivered-as-a-service ecosystem more efficient. But what we see in that control plane is really Vertiv’s unique special sauce. We are able to do things between our products with solutions – like Liebert ICOM-S -- that allow our thermal products to work better together than if they were operating independently.


You are going to see an evolution of continued innovation in peer-to-peer networking in the control plane that probably will not be open and standard. But it will provide advances in how our products work together. You will see in that self-service, as-a-service, and delivered-service plane continued support for open standards and protocols so that we can manage more than just our own equipment. Then our customers can manage and monitor more of their own equipment.

And this special sauce, which includes the data lakes and algorithms -- there is a lot of intellectual property and capital in building those algorithms and those outcomes -- helps customers operate better. We will probably keep that close to the vest in the short term, and then we’ll see where it goes over time.

Gardner: You earlier mentioned moving data centers to the edge. We are hearing an awful lot architecturally about the rationale for not moving the edge data to the cloud or the data center, but instead moving the computational capabilities right out to the edge where that data is. The edge is where the data streams in, in massive quantities, and needs to be analyzed in real-time. That used to be the domain of the operational technology (OT) people.

As we think about data centers moving out to the edge, it seems like there’s a bit of an encroachment or even a cultural clash between the IT way of doing things and the OT way of doing things. How does Vertiv fit into that, and how does making data center-as-a-service help bring the OT and IT together -- to create a whole greater than the sum of the parts?

OT and IT better together 

Lalla: I think maybe there was a clash. But with modular data centers and things like SmartAisle and SmartRow that we do today, they could be fully contained, standalone systems. Increasingly, we are working with strategic IT partners on understanding how that ecosystem has to work as a complete solution -- not with power and cooling separate from IT performance, but how we can take the best of the OT world, power and cooling, and the best of the IT world, and combine that with things like alarms and fire suppression. We can build a remote management and monitoring solution that can be outsourced, if you want to consume it as a service, or in-sourced if you want to do it yourself.


And there’s a lot of work to do in that space. As an industry, we are in the early stages, but I don’t think it’s hard to foresee a modular data center that should operate holistically as opposed to just the sum of its parts.

Gardner: I was thinking that the OT-IT thing was just an issue at the edge. But it sounds like you’re also referring to it within the data center itself. So flesh that out a bit. How do OT and IT together -- managing all the IT systems, components, complexity, infrastructure, support elements -- work in the intelligent, data center-as-a-service approach?

Lalla: There is the data center infrastructure management (DCIM) approach, which says, “Let’s bring it all together and manage it.” I think that’s one way of thinking about OT and IT, and certainly Vertiv has solutions in that space with products like Trellis™.

But I actually think about it as: Once the data is liberated, how do we take the best of computing solutions, data analytics solutions, and stuff that was born in other industries and apply that to how we think about managing, monitoring, and servicing all of the equipment in our industrial OT space?

It’s not necessarily that OT and IT are one thing, but how do we apply the best of all of technology solutions? Things like security. There is a lot of great stuff that’s emerged for security. How do we take a security-solutions perspective in the IT space if we are going to get more connected in the OT space? Well, let’s learn from what’s going on in IT and see how we can apply it to OT.

Just because DCIM has been tackled for years doesn’t mean we can’t take more of the best of each world and see how you can put those together to provide a solution that’s differentiated.

I go back to the Liebert ICOM-S solution, which uses desktop computing and gateway technology, and application development running on a high-performance IT piece of gear, connected to OT gear to get those products that normally would work separately to actually work more seamlessly together. That provides better performance and efficiency than if those products operated separately.

Liebert ICOM-S is a great example of where we have taken the best of the IT world -- compute technology and connectivity -- and the best of the OT world -- power and cooling -- and built a solution that makes the interaction differentiated in the marketplace.

Gardner: I’m glad you raised an example because we have been talking at an abstract level of solutions. Do you have any other use cases or concrete examples where your concept for infrastructure data center-as-a-service brings benefits? When the rubber hits the road, what do you get? Are there some use cases that illustrate that? 

Real LIFE solutions

Lalla: I don’t have to point much further than our Vertiv LIFE Services remote monitoring solution. This solution came out a couple years ago, partly from our Chloride® Group acquisition many years ago. LIFE Services allows customers to subscribe to have us do the remote monitoring, remote management, and analytics of what’s happening -- and whenever possible do the preventative care of their networks.

And so, LIFE is a great example of a solution with connectivity, with the right data flowing from the products, and with the right IT gear so our personnel take the workload away from the customer and allow us to deliver a solution. That’s one example of where we are delivering as-a-service for our customers.

We are also working with customers -- and we can’t expose who they are -- to bring their data into our large data lake so we can help them better predict how various elements of their ecosystem will perform. This helps them better understand when they need just-in-time service and maintenance versus break/fix service and maintenance.

These are two different examples where Vertiv provides services back to our customers. One is running a network operations center (NOC) on their behalf. Another uses the data lake that we’ve assimilated from billions of records to help customers who want to predict things and use the broad knowledge set to do that.

Gardner: We began our conversation with all the great things going on in modern data center infrastructure and solutions to overcome obstacles to get there, but economics plays a big role, too. It’s always important to be able to go to the top echelon of your company and say, “Here is the math, here’s why we think doing data center modernization is worth the investment.”

What is there about creating that data lake, the intellectual property, and the insights that help with data center economics? What’s the total cost of ownership (TCO) impact? How do you know when you’re doing this right, in terms of dollars and cents?

Uptime is money

Lalla: It’s difficult to generalize too much but let me give you some metrics we care about. Stuff is going to break, but if we know when it’s going to break -- or even if it does break -- we can understand exactly what happened. Then we can have a much higher first-time fix rate. What does that mean? That means I don’t have to come out twice, I don’t have to take the system out of commission more than once, and we can have better uptime. So that’s one.

Number two, by getting the data we can understand what’s going on with the network time-to-repair and how long it takes us from when we get on-site to when we can fix something. Certainly it’s better if you do it the first time, and it’s also better if you know exactly what you need when you’re there to perform the service exactly the way it needs to be done. Then you can get in and out with minimal disruption.

A third one that’s important -- and one that I think will grow in importance -- is we’re beginning to measure what we call service avoidance. The way we measure service avoidance is we call up a customer and say, “Hey, you know, based on all this information, based on these predictions, based on what we see from your network or your systems, we think these four things need to be addressed in the next 30 days. If not, our data tells us that we will be coming out there to fix something that has broken as opposed to fixing it before it breaks.” So service avoidance or service simplification is another area that we’re looking at.

There are many more -- I mean, meeting service level agreements (SLAs), uptime, and all of those -- but when it comes to the tactical benefits of having smarter products, of being more connected, liberating data, and consuming that data and using it to make better decisions as a service -- those are the things that customers should expect differently.

Gardner: And in order to enjoy those economic benefits through the Vertiv approach and through data center-as-a-service, does this scale down and up? It certainly makes sense for the larger data center installations, but what about a small- to medium-sized business (SMB)? What about a remote office, or a closet and a couple of racks? Does that make sense, too? Do the economic and productivity benefits scale down as well as up?

Lalla: Actually, when we look at our data, the customers who don’t have all the expertise to manage and monitor their single-phase, small three-phase, or Liebert CRV [cooling] units -- the ones without the skill set in-house -- are the customers that really appreciate what we can do to help them. It doesn’t mean that customers further up the stack don’t appreciate it, but what they value is different. They may be more self-service-oriented, yet what they are increasingly interested in is how we’re using data in our data lake to better predict things that they can’t predict by looking only at their own equipment.

So, the value shifts depending on where you are in the stack of complexity, maturity, and competency. It also varies based on hyperscale, colocation, enterprise, small enterprise, and point-of-sale. There are a number of variables, which is why it’s difficult to generalize. But the themes of productivity, smarter products, edge ecosystems, and data liberation are common across all those segments. How the value that’s extracted gets applied in each segment can be slightly different.

Gardner: Suffice it to say data center-as-a-service is highly customizable to whatever organization you are and wherever you are on that value chain.

Lalla: That’s absolutely right. Not everybody needs everything. Self-service is on one side and as-a-service is on the other. But it’s not a binary conversation.

Customers who want to do most of the stuff themselves with technology, they may need only a little information or help from Vertiv. Customers who want most of their stuff to be managed by us -- whether it’s storage systems or large systems -- we have the capability of providing that as well. This is a continuum, not an either-or.

Gardner: Steve, before we close out, let’s take a look to the future. As you build data lakes and get more data, machine learning (ML) and artificial intelligence (AI) are right around the corner. They allow you to have better prediction capabilities, do things that you just simply couldn’t have ever done in the past.

So what happens as these products get smarter, as we are collecting and analyzing that data with more powerful tools? What do you expect in the next several years when it comes to the smarter data center-as-a-service?

Circle of knowledge gets smart 

Lalla: We are in the early stages, but it’s a great question, Dana. There are two outcomes that will benefit all of us. One, that information, combined with the right algorithms and analysis, is going to allow us to build products that are increasingly smarter.

There is a circle of knowledge. Products produce information that goes into the data lake; we run the right algorithms, look for the right pieces of information, feed that back into our products, and continually evolve the capability of our products as time goes on. Those products will break less, need less service, and be more reliable. We should just expect that, just as you have seen in other industries. So that’s number one.

Number two, my hope and belief is that we move from a break/fix mentality -- an environment where we wait for something to show up on a screen as an alarm or an alert -- to being highly predictive and just-in-time.

As an industry -- and certainly at Vertiv -- first-time fix, service avoidance, and time for repair are all going to get much better, which means one simple thing for our customers. They are going to have more efficient and well-tuned data centers. They are going to be able to operate with higher rates of uptime. All of those things are going to result in goodness for them -- and for us.

Gardner: I’m afraid we’ll have to leave it there. We have been exploring how automation, self-healing, and increasingly intelligent data center designs are delivering what amounts to data centers-as-a-service. And we’ve learned how modern data center strategies will extend to the computing edge and beyond.

So please join me in thanking our guest, Steve Lalla, Executive Vice-President of Global Services at Vertiv. Thank you so much, Steve.


Lalla: Thanks, Dana.

Gardner: And a big thank you as well to our audience for joining us for this sponsored BriefingsDirect data center strategies interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Vertiv-sponsored discussions.

Thanks again for listening. Please pass this along to your community and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Vertiv.

A discussion on how intelligent data center designs and components are delivering what amounts to data centers-as-a-service to SMBs, enterprises, and public sector agencies. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.


Tuesday, October 29, 2019

How Unisys and Dell EMC Together Head Off Backup Storage Cyber Security Vulnerabilities

https://www.unisys.com/offerings/security-solutions

A discussion on how backup storage needs to be made safe and secure, too, especially if companies need to quickly right themselves after an attack.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Unisys.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you are listening to BriefingsDirect. New threats to data security are emerging all the time. Bad players constantly seek new ways to get at and exploit sensitive data sources.

This next BriefingsDirect data security insights discussion explores how data, from one end of its life cycle to the other, needs new protection and a means for rapid recovery.

Stay with us as we examine how backup storage especially needs to be made safe and secure if companies want to quickly right themselves from an attack. To learn more, please welcome Andrew Peters, Stealth Industry Director at Unisys. Welcome, Andrew.

Andrew Peters: Thank you.

Gardner: We’re also here with George Pradel, Senior Systems Engineer at Dell EMC. Welcome, George.


George Pradel: Hi, Dana. Thanks for having us.

Gardner: Andrew, what’s changed in how data is being targeted? How are things different from three years ago?

Peters: Well, one major thing that’s changed in the recent past has been the fact that the bad guys have found out how to monetize and extort money from organizations to meet their own ends. This has been something that has caught a lot of companies flatfooted -- the sophistication of the attacks and the ability to extort money out of organizations.

Gardner: George, why does all data -- from one end of its life cycle to the other -- now need to be reexamined for protection?

Pradel: Well, Andrew brings up some really good points. One of the things we have seen out in the industry is ransomware-as-a-service. Folks can just dial that in. There are service level agreements (SLAs) on it. So everyone’s data now is at risk.

Another of the things that we have seen with some of these attacks is that these people are getting a lot smarter. As soon as they go in to try and attack a customer, where do they go first? They go for the backups. They want to get rid of those, because that’s kind of like the 3D chess where you are playing one step ahead. So things have changed quite a bit, Dana.

Peters: Yes, it’s really difficult to put the squeeze on an organization knowing that they can recover themselves with their backup data. So, the heat is on the bad guys to go after the backup systems and pollute that with their malware, just to keep companies from having the capability to recover themselves.

Gardner: And that wasn’t the case a few years ago?

Pradel: The attacks were so much different a few years ago. They were what we call script kiddie attacks, where you basically get some malware or maybe you do a denial-of-service attack. But now these are programmatized, and the big thing about that is if you are a target once, chances are really good that the thieves are just going to keep coming back to you, because it’s easy money, as Andrew pointed out.

Gardner: How has the data storage topology changed? Are organizations backing up differently than they did a few years ago as well? We have more cloud use, we have hybrid, and different strategies for managing de-dupe and other redundancies. How has the storage topology and landscape changed in a way that affects this equation of being secure end to end?

The evolution of backup plans 

Pradel: Looking at how things have changed over the years, we started out with legacy systems, the physical systems that many of us grew up with. Then virtualization came into play, and so we had to change our backups. And virtualization offered up some great ways to do image-level backups and such.

Now, the big deal is cloud. Whether it’s one of the public cloud vendors, or a private cloud, how do we protect that data? Where is our data residing? Privacy and security are now part of the discussion when creating a hybrid cloud. This creates a lot of extra confusion -- and confusion is what thieves zone in on.

We want to make sure that no matter where that data resides, it’s protected. We want to provide a pathway for bringing back the data -- whether it’s air-gapped or protected via one of our other technologies -- that keeps the data in a place that allows for recoverability. Recoverability is the number one thing here, but it definitely has changed in these last few years.

Gardner: Andrew, what do you recommend to customers who may have thought that they had this problem solved? They had their storage, their backups, they protected themselves from the previous generations of security risk. When do you need to reevaluate whether you are secure enough?

Stay prepared 

Peters: There are a few things to take into consideration. One, they should have an operation that can recover their data and bring their business back up and running. You could get hit with an attack that turns into a smoking hole in the middle of your data center. So how do you bring your organization back from that without having policies, guidance, a process and actual people in place in the systems to get back to work?

Another thing to consider is the efficacy of the data. Is it clean? If you are backing up data that is already polluted with malware, guess what happens when you bring it back out and you recover your systems? It rehydrates itself within your systems and you still have the same problem you had before. That’s where the bad guys are paying attention. That’s what they want to have happen in an organization. It’s a hand they can play.

If the malware can still come out of the backup systems and rehydrate itself and re-pollute the systems when an organization is going through its recovery, it’s not only going to hamper the business and the time to recovery, and cost them, it’s also going to force them to pay the ransoms that the bad guys are extorting.

Gardner: And to be clear, this is the case across both the public and the private sector. We are hearing about ransomware attacks in lots of cities and towns. This is an equal opportunity risk, isn’t it?

Peters: Malware and bad guys don’t discriminate.

Pradel: You are exactly right about that. One of the customers that I have worked with recently in a large city got hit with a ransomware attack. Now, one of the things about ransomware attacks is that they typically want you to pay in bitcoin. Well, who has $100,000 worth of bitcoin sitting around?

But let’s take a look at why it’s so important to eliminate these types of attacks. If you have a government attacked, one of the problems is that chaos ensues. In one particular situation, police officers in their cars were not able to pull up license plates on the computer to check on cars they were pulling over, to see if they had a couple of bad tickets or perhaps the person was wanted for some reason. And so it is a very dangerous situation you may put into play for all of these officers.

That’s one tiny example of how these things can proliferate. And like you said, whether it’s public sector or private sector, if you are a soft target, chances are at some point you are going to get hit with ransomware.

Secure the perimeter and beyond 

Gardner: What are we doing differently in terms of the solutions to head this off, especially to get people back and up and running and to make sure that they have clean and useable data when they do so?

Peters: A lot of security had been predicated on the concept of a perimeter, something where we can put up guards, gates, and guns, and a moat. There is an inside and an outside -- and it’s generally recognized today that that doesn’t really exist.

And so, one of the new moves in security is to defend the endpoint, the application, and to do that using a technology called micro-segmentation. It’s becoming more popular because it allows us to have a security perimeter and a policy around each endpoint. And if it’s done correctly, you can scale to hundreds to thousands to hundreds of thousands, and potentially millions of endpoint devices, applications, servers and virtually anything you have in an environment.


And so that’s one big change: Let’s secure the endpoint, the application, the storage, and each one comes with its own distinct security policy.

Gardner: George, how do you see the solutions changing, perhaps more toward the holistic infrastructure side and not just the endpoint issues?

Pradel: One of the tenets that Andrew related to is called security by obscurity. The basic tenet is, if you can’t see it, it’s much safer. Think about a safe in your house. If the safe is back behind the bookcase and you are the only person that knows it’s there, that’s an extra level of security. Well, we can do that with technology.

So you are seeing a lot of technologies being employed. Many of them are not new types of security technologies. We are going back to what’s worked in the past and building some of these new technologies on that. For example, we add on automation, and with that automation we can do a lot of these things without as much user intervention, and so that’s a big part of this.

Incidentally, if any type of security that you are using has too much user intervention, then it’s very hard for the company to cost-justify those types of resources.

Gardner: Something that isn’t different from the past is having that Swiss Army knife approach of multiple layers of security. You use different tools, looking at this as a team sport where you want to bring as many solutions as possible to bear on the problem.

How have Unisys and Dell EMC brought different strengths together to create a whole greater than the sum of the parts?

Hide the data, so hackers can’t seek

Peters: One thing that’s fantastic that Dell has done is that they have put together a Cyber Recovery solution so when there is a meltdown you have gold copies of critical data required to reestablish the business and bring it back up and get into operation. They developed this to be automated, to contain immutable copies of data, and to assure the efficacy of the data in there.

Now, they have set this stuff up with air gapping, so it is virtually isolated from any other network operations. The bad guys hovering around in the network have a terrible time of trying to even touch this thing.
Unisys put what we call a cryptographic wrapper around that using our micro-segmentation technology called Stealth. This creates a cryptographic air gap that virtually disappears that vault and its recovery operations from anything else in the network, if they don’t have a cryptographic key. If they have a cryptographic key that was authorized, they could talk to it. If they don’t, they can’t. So any bad guys and malware can’t see it. If they can’t see, they can’t touch, and they can’t hack. This then turns into an extraordinarily secure means to recover an organization’s operations.

Gardner: The economics of this is critical. How does your technology combination take the economic incentive away from these nefarious players?

Pradel: Number one, you have a way to be able to recover from this. All of a sudden the bad guys are saying, “Oh, shoot, we are not going to get any money out of these guys.”


You are not going to be a constant target. They are going to go after your backups. Unisys Stealth can hide the targets that these people go after. Once you have this type of a Cyber Recovery solution in place, you can rest a lot easier at night.

As part of the Cyber Recovery solution, we actually expect malware to get into the Cyber Recovery vault. And people shake their head and they go, “Wait, George, what do you mean by that?”

Yes, we want to get malware into the Cyber Recovery vault. Then we have ways to do analytics to see whether our point-in times are good. That way, when we are doing that restore, as Andrew talked about earlier, we are restoring a nice, clean environment back to the production environment.

Recovery requires commitment, investment 

So, these types of solutions are an extra expense, but you have to weigh the risks for your organization and factor in what it really costs if you have a cyber recovery incident.

Additionally, some people may not be totally versed on the difference between a disaster recovery situation and a cyber recovery situation. A disaster recovery may be from some sort of a physical problem, maybe a tornado hits and wipes out a facility or whatever. With cyber recovery, we are talking about files that have been encrypted. The only way to get that data back -- and get back up and running -- is by employing some sort of a cyber recovery solution, such as the Unisys and Dell EMC solution.

Gardner: Is this tag team solution between Unisys and Dell EMC appropriate and applicable to all kinds of business, including cloud providers or managed service providers?

Peters: It’s really difficult to measure the return on investment (ROI) in security, and it always has been. We have a tool that we can use to measure risk, probability, and financial exposure for an organization. You can actually use the same methodologies that insurance companies use to underwrite for things like cybersecurity and virtually anything else. It’s based on the reality that there is a strong likelihood that there is going to be a security breach. There is going to be perhaps a disastrous security breach, and it’s going to really hurt the organization.

Plan on the fact that it’s probably going to happen. You need to invest in your systems and your recovery. If you think that you can sustain a complete meltdown on your company and go out of operation for weeks to months, then you probably don’t need to put money into it.

You also need to understand how exposed you potentially are, and the fact that the bad guys are staring at the low-hanging fruit -- which may be state governments, cities, or other organizations that are less protected.

The fact is, the bad guys are extraordinarily patient. If your payoff is in the tens of millions of dollars, you might spend, as the bad guys did with Sony, years mapping systems, learning how an operation works, and understanding their complete operations before you actually take action, and in potentially the most disastrous way possible.

Ergo, it’s hard to put a number on that. An organization will have to decide how much they have to lose, how much they have at risk, and what the probability is that they are actually going to get hit with an attack.

Gardner: George, also important to where this is the right fit are automation and skills. What sorts of organizations typically take this on, and what skills are required?

Automate and simplify 

Pradel: That’s been the basis for our Cyber Recovery solution. We have written a number of APIs to be able to automate different pieces of a recovery situation. If you have a cyber recovery incident, it’s not a matter of just, “Okay, I have the data, now I can restore it.” We have a lot of experts in the field. What they do is figure out exactly where the attack came from, how it came in, what was affected, and those types of things.

We make it as simple as possible for the administration. We have done a lot of work creating APIs that automate items such as recovering backup servers. We take point-in-time copies of the data. I don’t want to go into it too deeply, but our Data Domain technology is the basis for this. And the reason why it’s important to note is because the replication we do is based upon our variable-length deduplication.

Now, that may sound like gobbledygook, but what it means is that we have the smallest replication times you could have for a given amount of data. So when we are taking data into the Cyber Recovery vault, we are reducing what’s called our dwell time -- the window in which someone could see that you had a connection open.
But a big part of this is that, on a day-to-day basis, I don’t have to be concerned. I don’t need a whole team of people maintaining this Cyber Recovery vault. Typically, our customers already understand how our base technology works, so that part is very straightforward. And what we have is automation: policies set up in the Cyber Recovery vault that, on a regular basis -- typically once a day -- pull in whatever data has changed from the production environment.

And as a rule of thumb for people who might be thinking, “This sounds really interesting, but how much data would I put in this?” -- typically, 10 to 15 percent of a customer’s production environment might go into the Cyber Recovery vault. So we want to make this as simple as possible, and we want to automate as much as possible.

And on the other side, when there is an incident, we want to be able to also automate that part because that is when all heck is going on. If you’ve ever been involved in one of those situations, it’s not always your clearest thinking moment. So automation is your best friend and can help you get back up and running as quickly as possible.

Gardner: George, run us through an example, if you would, of how this works in the real-world.

One step at a time for complete recovery 

Pradel: What will happen is that at some point somebody clicks on that doggone attachment that was on that e-mail that had a free trip to Hawaii or something and it had a link to some ransomware.

Once the security folks have determined that there has been an attack, sometimes it’s very obvious. There is one attack where there is a giant security skeleton that comes up on your screen and basically says, “Got you.” It then gives instructions on how you would go about sending them the money so that you can get your data back.


However, sometimes it’s not quite so obvious. Let’s say your security folks have determined there has been an attack. The first thing you would want to do is go to the Cyber Recovery vault -- protected by Stealth -- and lock down the vault, and that’s simple and straightforward. As we discussed a little earlier about the way we do the automation, you click on the lock, that locks everything down, and it stops any future replications from coming in.

And while the security team is working to find out how bad it is and what was affected, one of the things the cyber recovery team does is go in and run some analysis, if that hasn’t been done already. You can automate this type of analysis, but let’s say you haven’t. Say you have 30 point-in-time copies, one for each day of the last month. You might want to run an analysis against the last five of those to see whether they come up as suspicious or as okay.

The way that’s done is to look at the entropy of the different point-in-time backups. One thing to note is that you do not have to rehydrate the backup in order to analyze it. So let’s say you backed it up with Avamar and then you want to analyze that backup -- you don’t have to rehydrate it in the vault in order to run that analysis.

Once that’s done, there are a lot of different ways you can decide what to do. If you have physical machines that are not in great shape, they are suspect. But if the physical hardware is okay, you could decide that at some point you’re going to reload those machines with the gold copies -- which it’s very typical to keep in the vault -- and then put the data and such back on them.

If you have image-level backups in the vault, those are very easy to get back up and running on a VMware ESX host or a Microsoft Hyper-V host that you have in your production environment. So, there are a lot of different ways you can do that.

The whole idea, though, is that our typical Cyber Recovery solution is air-gapped and we recommend customers have a whole separate set of physical controls as well as the software controls.

Now, some of those steps may not be practical in all situations. That’s why we looked at Unisys Stealth -- to provide a virtual air gap by installing the Stealth components.

Remove human error 

Peters: One of the things I learned in working with the United States Air Force’s Information Warfare Center was the fact that you can build the most incredibly secure operation in the world and humans will do things to change it.

With Stealth, we allow organizations to be able to get access into the vault from a management perspective to do analytics, and also from a recovery perspective, because anytime there’s a change to the way that vault operates, that’s an opportunity for bad guys to find a way in. Because, once again, they’re targeting these systems. They know they’re there; they could be watching them and they can be spending years doing this and watching the operations.

Unisys Stealth removes the opportunity for human error. We remove the visibility that any bad guys, or any malware, would have inside a network to observe a vault. They may see data flowing but they don’t know what it’s going to, they don’t know what it’s for, they can’t read it because it’s going to be encrypted. They are not going to be able to even see the endpoints because they will never be able to get an address on them. We are cryptographically disappearing or hiding or cloaking, whatever word you’d like to use -- we are actively removing those from visibility from anything else on the network unless it’s specifically authorized.

Gardner: Let’s look to the future. As we pointed out earlier in our discussion, there is a sort of spy-versus-spy, dog-chasing-the-cat dynamic -- whatever metaphor you want to use -- where one side of the battle is adjusting constantly and the other is reacting. So, as we move to the future, are there machine learning (ML)-enabled analytics on these attacks to help prevent them? How will we be able to always stay one step ahead of the threat?

Peters: With our technology we already embody ML. We can do responses called dynamic isolation. A device could be misbehaving and we could change its policy and be able to either restrict what it’s able to communicate with or cut it off altogether until it’s been examined and determined to be safe for the environment.

We can provide a lot of automation, a lot of visibility, and machine-speed reaction in response to threats as they are happening. Malware doesn’t have to get that 20-second head start. We might be able to cut off in 10 seconds and be able to make it a dynamic change to the threat surface.

Gardner: George, what’s in the future that it’s going to allow you to stay always one step ahead of the bad guys? Also, is there is an advantage for organizations doing a lot of desktops-as-a-service (DaaS) or virtual desktops? Do they have an advantage in having that datacenter image of all of the clients?

Think like a bad guy 

Pradel: Oh, yes, definitely. How do we stay in front of the bad guys? You have to think like the bad guys. And so, one of the things that you want to do is reduce your attack surface. That’s a big part of it, and that’s why the technology that we use to analyze the backups, looking for malware, uses 100 different types of objects of entropy.

As we’re doing ML on that data -- what’s normal and what’s not normal -- we can figure out exactly where the issues are and stay ahead of them.

Now, an air gap on its own is extremely secure because it keeps that data in an environment where no one can get at it. We have had situations where Unisys Stealth helped close the gap -- where a particular general might have three different networks they need to connect to -- and Stealth is a fantastic solution for that.

If you’re doing DaaS, there are ways that it can help. We’re always looking at where the data resides, and most of the time in those situations the data is going to reside back at the corporate infrastructure. That’s a very easy place to protect data. When the data is out on laptops and things like that, it becomes a little more difficult -- not impossible, but you have a lot of different endpoints you’re pulling from. Bringing the system back up when you’re using virtual desktops is actually pretty straightforward, because chances are the attackers are not going to bring down the virtual desktop environment itself; they’re going to encrypt the data.
Now, that said, when we’re having these conversations, it’s not as straightforward a conversation as it once was. We talk about how long you might be out of business depending on what you’ve implemented. We have to engineer for all the different types of malware attacks. And what’s the common denominator? It’s the data -- keeping that data safe, and keeping it so it can’t be deleted.

We have a retention lock capability so you can lock that up for as many as 70 years and it takes two administrators to unlock it. That’s the kind of thing that makes it robust.

In the old days, we would do a WORM drive and copy stuff off to a CD to make something immutable. This is a great way to do it. And that’s one way to stay ahead of the bad guys as best as we can.

Gardner: I’m afraid we’ll have to leave it there. You have been listening to a sponsored BriefingsDirect discussion on how data from one end of its lifecycle to the other needs protection and a means for rapid recovery.

And we’ve learned how a solution from Dell EMC and Unisys helps protect storage including backup data and further assists companies in making themselves whole again after an attack -- when they’ve taken the proper precautions.

Please join me in thanking our guests, Andrew Peters, Stealth Industry Director at Unisys. Thank you, Andrew.

Peters: Thank you.

Gardner: And George Pradel, Senior Systems Engineer at Dell EMC. Thank you so much, George.


Pradel: Thanks, Dana.

Gardner: And a big thank you as well to our audience for joining this BriefingsDirect Data Security Insights Discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of Unisys-sponsored BriefingsDirect discussions.

Thanks again for listening. Please pass this along to your community and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Unisys.

A discussion on how backup storage needs to be made safe and secure, especially if companies need to right themselves after an attack. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.
