
Monday, July 22, 2019

How Total Deployment Intelligence Overcomes the Growing Complexity of Multicloud Management


A discussion on how new tools, processes, and methods bring insights and actionable analysis to help regain control over hybrid cloud and multicloud sprawl.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Innovator podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest in IT innovation.

Our next discussion focuses on the growing complexity around multicloud management. We will now explore how greater accountability is needed to improve business impacts from all-too-common haphazard cloud adoption.

Stay with us as we hear how new tools, processes, and methods are bringing insights and actionable analysis to help regain control over hybrid cloud and multicloud sprawl.

Here to help us explore a more pragmatic path to modern IT deployment management is Harsh Singh, Director of Product Management for Hybrid Cloud Products and Solutions at Hewlett Packard Enterprise (HPE). Welcome to BriefingsDirect, Harsh.

Harsh Singh: Thanks a lot, Dana. I’m happy to be here.


Gardner: What is driving the need for multicloud at all? Why are people choosing multiple clouds and deployments?

Singh: That’s a very interesting question, especially today. However, you have to step back and think about why people went to the cloud in the first place – and what were the drivers – to understand how sprawl expanded to a multicloud environment.

Initially, when people began moving to public cloud services, the idea was speed, agility, and quick access to resources. IT was in the way of getting on-premises resources quickly. People said, “Let me get the work going and let me deploy things faster.”

And they were able to quickly launch applications, and this increased their velocity and time-to-market. Cloud helped them get there very fast. However, we now have choices across multicloud environments: various public clouds, plus private cloud environments where people can do similar things on-premises. There came a time when people realized, “Oh, certain applications fit in certain places better than others.”

From cloud sprawl to cloud smart

For example, if I want to run a serverless environment, I might want to run in one cloud provider versus another. But if I want to run more machine learning (ML), artificial intelligence (AI) kinds of functionality, I might want to run that somewhere else. And if I have a big data requirement, with a lot of data to crunch, I might want to run that on-premises.

So you now have more choices to make. People are thinking about where’s the best place to run their applications. And that’s where multicloud comes in. However, this doesn’t come for free, right?
As you add more cloud environments and different tools, it leads to what we call tool sprawl. You now have people tying all of these tools together trying to figure out the cost of these different environments. Are they in compliance with the various norms we have within our organization? Now it becomes very complex very fast. It becomes a management problem in terms of, “How do I manage all of these environments together?”

Gardner: It’s become too much of a good thing. There are very good reasons to do cloud, hybrid cloud, and multicloud. But there hasn’t been a rationalization about how to go about it in an organizational way that’s in the best interest of the overall business. It seems like a rethinking of how we go about deploying IT in general needs to be part of it.

Singh: Absolutely right. I see three pillars that need to be addressed in terms of looking at this complexity and managing it well: people, process, and technology. The technology exists, but unless you have the right skill set in the people -- and the right processes in place -- it’s going to be the Wild West. Everything is just going to be crazy. In the end you falter, not achieving what you really want to achieve.

I look at people, process, and technology as the three pillars for taming this tool sprawl, which is absolutely necessary for any company as it traverses its multicloud journey.

Gardner: This is a long-term, thorny problem. And it’s probably going to get worse before it gets better.

Singh: I do see it getting worse, but I also see a lot of people beginning to address these problems. Vendors, including we at HPE, are looking at this problem. We are trying to get ahead of it before a lot of enterprises crash and burn. We have experience with our customers, and we have engaged with them to help them on this journey.

It is going to get worse and people are going to realize that they need professional help. It requires that we work with these customers very closely and take them along based on what we have experienced together.

Gardner: Are you taking the approach that the solution for hybrid cloud management and multicloud management can be done in the same way? Or are they fundamentally different?

Singh: Fundamentally, it’s the same problem set. You must deploy the applications to the right places that are right for your business -- whether it’s multicloud or hybrid cloud. Sometimes the terminology blurs. But at the end of the day, you have to manage multiple environments.

You may be connecting private or off-premises hybrid clouds, and maybe there are different clouds. The problem will be the same -- you have multiple tools, multiple environments, and the people need training and the processes need to be in place for them to operate properly.

Gardner: What makes me optimistic about the solution is there might be a fourth leg on that stool. People, process, and technology, yes, but I think there is also economics. One of the things that really motivates a business to change is when money is being lost and the business people think there is a way to resolve that.

The economics issue -- about cost overruns and a lack of discipline around procurement -- is both a part of the problem and the solution.

Economics elevates visibility

Singh: I am laughing right now because I have talked to so many customers about this.  A CIO from an entertainment media company, for example, recently told me she had a problem. They had a cloud-first strategy, but they didn’t look at the economics piece of it. She didn’t realize, she told me, where their virtual machines (VMs) and workloads were running.

“At the end of the month, I’m seeing hundreds of thousands of dollars in bills. I am being surprised by all of this stuff,” she said. “I don’t even know whether they are in compliance. The overhead of these costs -- I don’t know how to get a handle on it.”

So this is a real problem that customers are facing. I have heard this again and again: They don’t have visibility into the environment. They don’t know what’s being utilized. Sometimes resources are underutilized, sometimes they are overutilized. And they don’t know what they are going to end up paying at the end of the day.

A common example is, in a public cloud, people will launch a very large number of VMs because that’s what they are used to doing. But they consume maybe 10 to 20 percent of that capacity. What they don’t realize is that they are still paying the whole bill. More visibility is going to become key to getting a handle on the economics of these things.
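
That right-sizing problem is easy to see in data. Below is a minimal sketch, assuming an AWS account with boto3 credentials already configured, that flags running EC2 instances whose average CPU over the trailing two weeks sits below an illustrative 20 percent threshold. The threshold, window, and region are assumptions for the example, not HPE guidance.

```python
import datetime
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def avg_cpu(instance_id, days=14):
    """Average CPU utilization for one instance over the trailing window."""
    end = datetime.datetime.utcnow()
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - datetime.timedelta(days=days),
        EndTime=end,
        Period=86400,          # one datapoint per day
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else None

# Flag running instances averaging below 20 percent CPU (illustrative cutoff).
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]
for r in reservations:
    for inst in r["Instances"]:
        cpu = avg_cpu(inst["InstanceId"])
        if cpu is not None and cpu < 20.0:
            print(f"{inst['InstanceId']} ({inst['InstanceType']}): {cpu:.1f}% avg CPU")
```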

Gardner: We have seen these kinds of problems before in general business procurement. Many times it’s the Wild West, but then they bring it under control. Then they can negotiate better rates as they combine services and look for redundancies. But you can’t do that until you know what you’re using and what it costs.

So, is the first step getting an inventory of where your cloud deployments are, what the true costs are, and then start to rationalize them?

Guardrails reduce risk, increase innovation

Singh: Absolutely right. That’s where you start, and at HPE we have services to do that. The first thing is to understand where you are. Get a baseline of what is on-premises, what is off-premises, and which applications are required to run where. What’s the footprint I require in these different places? What is the overall cost I’m incurring, and where do I want to be? Answering those questions is the first step to getting a mixed environment you can control -- and away from the Wild West.
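
As a concrete starting point, that inventory baseline can be as simple as normalizing each provider’s resource listing into one table. The sketch below, assuming an AWS account reachable through boto3, shows the idea for EC2; the `owner` and `app` tag names are hypothetical conventions, and GCP, Azure, and on-premises adapters would follow the same shape.

```python
import boto3

def aws_inventory(region="us-east-1"):
    """Yield a normalized record for every EC2 instance in one region."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                yield {
                    "provider": "aws",
                    "id": inst["InstanceId"],
                    "type": inst["InstanceType"],
                    "state": inst["State"]["Name"],
                    "owner": tags.get("owner", "UNTAGGED"),  # untagged = unaccountable
                    "app": tags.get("app", "UNTAGGED"),
                }

# A real baseline would union analogous generators for GCP, Azure,
# and the on-premises virtualization layer into one table.
for record in aws_inventory():
    print(record)
```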

Then put in the compliance guardrails so that IT can get ahead of the problems we are seeing today.

Gardner: As a counterpoint, I don’t think that IT wants to be perceived as the big bad killjoy that comes to the data scientists and says, “You can’t get those clusters to support the data environment that you want.” So how do you balance that need for governance, security, and cost control with not stifling innovation and allowing creative freedom?
Singh: That’s a very good question. When we started building out our managed cloud solutions, a key criterion was to provide the guardrails yet not stifle innovation for the line-of-business managers and developers. The way you do that is that you don’t become the man in the middle. The idea is that you allow the lines of business and the developers to access the resources they need. However, you put guardrails around which resources they can access and how much they can access, and you provide visibility into the budgets. You still let them access the direct APIs of the different multicloud environments.

You don’t say, “Hey, you have to put in a request to us to do these things.” You have to be more behind-the-scenes, hidden from view. At the same time, you need to provide those budgets and those controls. Then they can perform their tasks at the speed they want and access the resources they need -- but within the guardrails, compliance, and the business requirements that IT has.
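
A guardrail layer like the one Singh describes can be reduced to a policy check that runs before a request passes through to the provider’s native API. This is a hypothetical sketch -- the team names, budgets, and quota fields are invented for illustration and are not an HPE product API.

```python
from dataclasses import dataclass

@dataclass
class Guardrail:
    monthly_budget_usd: float
    allowed_instance_types: set
    max_instances: int

# Hypothetical per-team policies; real values would come from IT governance.
POLICIES = {
    "data-science": Guardrail(25_000.0, {"m5.xlarge", "p3.2xlarge"}, 40),
    "web":          Guardrail(5_000.0,  {"t3.medium", "t3.large"},   100),
}

def approve(team, instance_type, count, spend_to_date, est_monthly_cost):
    """Return (approved, reason). Requests inside the guardrails pass
    straight through to the provider's native API -- IT never sits in
    the middle of an approved request."""
    policy = POLICIES.get(team)
    if policy is None:
        return False, "no policy registered for team"
    if instance_type not in policy.allowed_instance_types:
        return False, f"{instance_type} not in allowed types"
    if count > policy.max_instances:
        return False, "instance count over quota"
    if spend_to_date + est_monthly_cost > policy.monthly_budget_usd:
        return False, "request would exceed monthly budget"
    return True, "within guardrails"

print(approve("data-science", "p3.2xlarge", 4, 18_000.0, 6_500.0))
```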

Gardner: Now that HPE has been on the vanguard of creating the tools and methods to get the necessary insights, make the measurements, and recognize the need for balance between control and innovation -- have you noticed changes in organizational patterns? Are there now centers of cloud excellence or cloud-management bureaus? Does there need to be a counterpart to the tools -- management structure changes as well?

Automate, yet hold hands, too

Singh: This is the process and the people parts that you want to address. How do you align your organizations, and what are the things that you need to do there? Some of our customers are beginning to make those changes, but organizations are difficult to change to get on this journey. Some of them are early; some of them are at a much later stage. A lot of customers, frankly, are still in the early phases of multicloud and hybrid cloud. We are working with them to make sure they understand the changes they’ll need to make in order to function properly in this new world.


Gardner: Unfortunately, these new requirements come at a time when cloud management skills -- spanning DataOps, ITOps, and CloudOps -- are hard to find and harder to keep. So one of the things I’m seeing is the adoption of automation around guidance, strategy, and analysis. The systems start to do more for you. Tell me how automation is coming to bear on some of these problems, and how it can perhaps mitigate the skills shortage.

Singh: The tools can only do so much. So you automate. You make sure the infrastructure is automated. You make sure your access to public cloud -- or any other cloud environment -- is automated.

That can mitigate some of the problems, but I still see a need for hand-holding from time to time in terms of the process and people. That will still be required. Automation will help tie in storage, network, and compute, and you can put all of that together. This [composability] reduces the need for, and dependency on, some of the process and people. Automation mitigates the physical labor and the need for someone to take days to do it. However, you need that expertise to understand what needs to be done. And this is where HPE is helping.

You might have heard about our HPE GreenLake managed cloud services offerings. We are moving toward an as-a-service model for a lot of our software and tooling. We are using the automation to help customers fill the expertise gap. We can offer more of a managed service by using automation tools underneath it to make our tasks easier. At the end of the day, the customer only sees an outcome or an experience -- versus worrying about the details of how these things work.

Gardner: Let’s get back to the problem of multicloud management. Why can't you just use the tools that the cloud providers themselves provide? You might have deployments across multiple clouds, but why can’t you use the tools from one provider to manage the rest? Why do we need a neutral third-party position for this?

Singh: Take a hypothetical case: I have deployments in Amazon Web Services (AWS) and I have deployments in Google Cloud Platform (GCP). And to make things more complicated, I have some workloads on premises as well. How would I go about tying these things together?

Now, if I go to AWS, they are very, very opinionated on AWS services. They have no interest in looking at bills coming out of GCP or Microsoft Azure. They are focused on their services and what they are delivering. The reality is, however, that customers are using these different environments for different things.

The public cloud providers don’t have an interest in managing other clouds or looking at other environments. So third parties come in to tie everything together, so that no customer is locked into one environment.


If they go to AWS, for example, they can only look at billing, services, and performance metrics of that one service. And they do a very good job. Each one of these cloud guys does a very good job of exposing their own services and providing you visibility into their own services. But they don’t tie it across multiple environments. And especially if you throw the on-premises piece into the mix, it’s very difficult to look at and compare costs across these multiple environments.
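
Tying costs across environments usually starts with normalizing each provider’s billing export into one schema. A minimal sketch follows; the file names and column mappings are assumptions (export schemas vary by provider and version), so treat them as placeholders rather than the actual AWS, GCP, or on-premises formats.

```python
import csv
from collections import defaultdict

# Hypothetical column mappings -- real export schemas differ by provider
# (e.g., AWS Cost and Usage Report vs. GCP billing export) and by version.
SCHEMAS = {
    "aws.csv":    {"service": "product/ProductName", "cost": "lineItem/UnblendedCost"},
    "gcp.csv":    {"service": "service.description", "cost": "cost"},
    "onprem.csv": {"service": "service",             "cost": "allocated_cost"},
}

def unified_spend(files=SCHEMAS):
    """Fold per-provider billing exports into one (provider, service) -> dollars view."""
    totals = defaultdict(float)
    for path, cols in files.items():
        provider = path.split(".")[0]
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                key = (provider, row[cols["service"]])
                totals[key] += float(row[cols["cost"]] or 0)
    return totals

for (provider, service), dollars in sorted(unified_spend().items()):
    print(f"{provider:8s} {service:30s} ${dollars:,.2f}")
```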

Gardner: When we talk about on-premises, we are not just talking about the difference between your data center and a cloud provider’s data center. We are also talking about the difference between a traditional IT environment and the IT management tools that came out of it. How has HPE crossed the chasm between traditional IT management automation and composability benefits and the higher-level multicloud management?

Tying worlds together

Singh: It’s a struggle to tie these worlds together from my experience, and I have been doing this for some time. I have seen customers spend months and sometimes years, putting together a solution from various vendors, tying them together, and deploying something on premises and also trying to tie that to an off-premises environment.

At HPE, we fundamentally changed how on-premises and off-premises environments are managed by introducing our own software-as-a-service (SaaS) management portal, which customers do not have to manage themselves. That portal connects to on-premises environments. Because we have a native, programmable, API-driven infrastructure, we were able to connect that. And being driven from the cloud itself made it easy to hook up to other cloud providers such as AWS, Azure, and GCP. This capability ties the two worlds together. As you build out the tools, the key is automating the infrastructure piece and then connecting and managing everything from a centralized portal that ties all of these things together with a click.

Through this common portal, people can onboard their multicloud environments, get visibility into their costs, get visibility into compliance -- look at whether they are HIPAA compliant or not, PCI compliant or not -- and get access to resources that allow them to begin to manage these environments.
For example, onboarding into any public cloud is very, very complex. Setting up a private cloud is very complex. But today, with the software that we are building, and some of our customers are using, we can set up a private cloud environment for people within hours. All you have to do is connect with our tools like HPE OneView and other things that we have built for the infrastructure and automation pieces. You then tie that together to a public cloud-facing tenant portal and onboard that with a few clicks. We can connect with their public cloud accounts and give them visibility into their complete environment.

And then we can bring in cost analytics. We have consumption analytics as part of our HPE GreenLake offering, which allows us to look at cost for on-premises as well as off-premises resources. You can get a dashboard that shows you what you are consuming and where.

Gardner: That level of management and the capability to be distributed across all these different deployment models strikes me as a gift that could keep on giving. Once you have accomplished this and get control over your costs, you are next able to rationalize what cloud providers to use for which types of workloads. It strikes me that you can then also use that same management and insight to start to actually move things around based on a dynamic or even algorithmic basis. You can get cost optimization on the fly. You can react to market forces and dynamics in terms of demand on your servers or on your virtual machines anywhere.

Are you going to be able to accelerate the capability for people to move their fungible workloads across different clouds, both hybrid and multicloud?

Optimizing for the future

Singh: Yes, absolutely right. There is more complexity in terms of moving workloads here and there, because there are data-proximity requirements and various other requirements. But the optimization piece is absolutely something we can do on the fly, especially if you start throwing AI into the mix.


You will be learning over time what needs to be deployed where, and where your data gravity might be, and where you need applications closer to the data. Sometimes it’s here, sometimes it’s there. You might have edge environments that you might want to manage from this common portal, too. All that can be brought together.

And then with those insights, you can make optimization decisions: “Hey, this application is best deployed in this location for these reasons.” You can even automate that. You can make that policy-driven.

Think about it this way -- you are a person who wants to deploy something. You request a resource, and that gets deployed for you based on the algorithm that has already decided where the optimal place to put it is. All of that works behind the scenes without you having to really think about it. That’s the world we are headed to.
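
That policy-driven placement can be sketched as a constraint filter followed by a cost ranking. Everything below -- the environment names, cost indexes, and compliance flag -- is invented for illustration; a real engine would weigh far more signals.

```python
# Candidate environments and illustrative attributes; all numbers are
# made up for the sketch.
ENVIRONMENTS = {
    "aws-us-east":    {"cost_index": 1.00, "regions_with_data": {"us"}, "hipaa": True},
    "gcp-europe":     {"cost_index": 0.95, "regions_with_data": {"eu"}, "hipaa": True},
    "onprem-houston": {"cost_index": 0.70, "regions_with_data": {"us"}, "hipaa": True},
}

def place(workload):
    """Pick the cheapest environment that satisfies the hard constraints:
    compliance first, then data gravity, then cost."""
    candidates = [
        (attrs["cost_index"], name)
        for name, attrs in ENVIRONMENTS.items()
        if (not workload["needs_hipaa"] or attrs["hipaa"])
        and workload["data_region"] in attrs["regions_with_data"]
    ]
    if not candidates:
        raise ValueError("no environment satisfies the workload's constraints")
    return min(candidates)[1]

print(place({"needs_hipaa": True, "data_region": "us"}))  # -> onprem-houston
```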

Gardner: We have talked about some really interesting subjects at a high level, even some thought leadership involved. But are there any concrete examples that illustrate how companies are already starting to do this? What kinds of benefits do they get?

Singh: I won’t name the company, but there was a business in the UK that was able to deploy VMs within minutes on their on-premises environment, as well as gain cost benefits out of their AWS deployments.

We were able to go in, connect to their VMware environment, in this case, and allow them to deploy VMs. We were up and running in two hours. Then they could optimize for their developers to deploy VMs and request resources in that environment. They saved 40 percent in operational efficiency. So now they were mostly cost optimized, their IT team was less pressured to go and launch VMs for their developers, and they gained direct self-service access through which they could go and deploy VMs and other resources on-premises.

At the same time, IT had the visibility into what was being deployed in the public cloud environments. They could then optimize those environments for the size of the VMs and assets they were running there and gain some cost advantages there as well.
Gardner: For organizations that recognize they have a sprawl problem when it comes to cloud, that their costs are not being optimized, but that they are still needing to go about this at sort of a crawl, walk, run level -- what should they be doing to put themselves in an advantageous position to be able to take advantage of these tools?

Are there any precursor activities that companies should be thinking about to get control over their clouds, and then be able to better leverage these tools when the time comes?

Watch your clouds

Singh: Start with visibility. You need an inventory of what you are doing. And then you need to ask the question, “Why?” What benefit are you getting from these different environments? Ask that question, and then begin to optimize. I am sure there are very good reasons for using multicloud environments, and many customers do. I have seen many customers use it, and for the right reasons.

However, there are other people who have struggled because there was no governance and guardrails around this. There were no processes in place. They truly got into a sprawled environment, and they didn’t know what they didn’t know.

So first and foremost, get an idea of what you want to do and where you are today -- get a baseline. And then, understand the impact and what are the levers to the cost. What are the drivers to the efficiencies? Make sure you understand the people and process -- more than the technology, because the technology does exist, but you need to make sure that your people and process are aligned.

And then lastly, call me. My phone is open. I am happy to have a talk with any customer that wants to have a talk.

Gardner: On that note of the personal approach, people who are passionate in an organization around things like efficiency and cost control are looking for innovation. Where do you see the innovation taking place for cloud management? Is it the IT Ops people, the finance people, maybe procurement? Where is the innovative thinking around cloud sprawl manifesting itself?

Singh: All three are good places for innovation. I see IT Ops at the center of the innovation. They are the ones who will be effecting change.

Finance and procurement could benefit from these changes, and they could be drivers of the requirements. They are going to be saying, “I need to do this differently because it doesn’t work for me.” And the innovation also comes from developers and line-of-business managers who have been doing this for a while and who understand what they really need.

Gardner: I’m afraid we’ll have to leave it there. We have been exploring the growing complexity around multicloud management and how greater accountability is needed around costs and business impacts due to what is all too often haphazard cloud adoption.
And we have learned about new tools, processes, and methods that bring additional insights, visibility, and ultimately actionable analysis to help regain control over multicloud sprawl.

So please join me in thanking our guest, Harsh Singh, Director of Product Management for Hybrid Cloud Products and Solutions at HPE. Thank you, Harsh.

Singh: Thank you very much, Dana. It was a pleasure.


Gardner: And a big thank you as well to our audience for joining us for this BriefingsDirect Voice of the Innovator interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored discussions.

Thanks again for listening! Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

A discussion on how new tools, processes, and methods bring insights and actionable analysis to help regain control over hybrid cloud and multicloud sprawl. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.


Wednesday, December 12, 2018

Inside Story: How HP Inc. Moved from a Rigid Legacy to Data Center Transformation

Transcript of a discussion on how a massive corporate split led to the re-architecting and modernizing of IT to allow for the right data center choices at the right price over time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success stories.

Our next data center architecture modernization journey interview explores how HP Inc. (HPI) has rapidly separated and modernized a set of data centers as part of its splitting off from what has become Hewlett Packard Enterprise (HPE).

We will now learn how HP Inc. has taken four shared data centers and transitioned to two agile ones, with higher performance, lower costs, and an obsolescence-resistant and strategic infrastructure design.

Here to help us define the data center of the future are Sharon Bottome, Vice President and Head of Infrastructure Services at HPI, and Piyush Agarwal, Senior Director of Infrastructure Services, also at HPI. Welcome to you both.

Piyush Agarwal: Thank you.

Sharon Bottome: Thank you.

Gardner: We all know the story of HP Inc. splitting off into a separate company from HPE in 2015. Yet it remains unusual. Most IT modernization efforts combine -- or at least replicate -- data centers. You had to split off and modernize your massive infrastructures at the same time, and you are still in the process of doing that.

Sharon, what have been the guiding principles as you created new IT choices from a combined corporate legacy?

Split-second reaction 

Bottome: When the split happened, leadership had to make a lot of decisions around speed and agility just to get the split done. The underlying IT infrastructure wasn’t necessarily the key factor in how the split went.

We therefore ended up on shared infrastructure in four data centers, which then ended up being shared again as HPE split off assets to Micro Focus and DXC Technology in 2017. We ended up in a situation of having four data centers with shared infrastructure across four newly separated companies.

As you can imagine, we have a different imperative now that we are a new and separate company. HPI is very aggressive and wants to be very fast and agile. So we really need to continue and finish what was an initial separation of all of the infrastructure.

Gardner: Is it fair to say, Piyush, that this has been an unprecedented affair at such scale and complexity?

Agarwal: Yes, that is true. If you look at what some other organizations have done, there have been $5 billion and $10 billion companies that have undertaken such data center transformations. But the old Hewlett-Packard as a joint company was a $100 billion company, and separating the data centers for a $100 billion company is a huge effort.

So, yes, companies have done this in the past, but the amount of time they had -- versus the amount of time in which we are seeking to do the separation -- makes this almost unthinkable. We are still on that journey.


Gardner: What is new in 2018 IT that allows you to more aggressively go at something like this? What has helped you to do this that was not available just a few years ago?

Bottome: First, the driver for us is we really want to be independent. We want to truly transform our services. That means it's much more about the experiences -- and not just the technology.

We have standardized predominantly on HPE gear. We architected the new data centers using the newest technologies, whether it’s HPE 3PAR, HPE Synergy, or some of the other hardware. That allows us to take about 800 applications and 22,000 operating system instances and migrate those. It's just a huge undertaking.
But by using a lot of the new technology and hardware, we have to then transform our own processes and all the services to go along with that.

Gardner: Piyush, what have you learned in terms of the underlying architecture? One of my favorite sayings is, “Architecture is destiny.” If you make the right architecture decisions, many other things then fall into place.

What have you done on an architectural level that's allowed this to go more smoothly?

Simpler separation solutions

Agarwal: It’s more about a philosophy than just an architecture, in my view. It goes to the previous question you asked. Why is it simpler now? Just after the separation, there was a philosophy around going to public cloud. Everybody thought that we would save a lot of money by just going to the public cloud.

But in the last two or three years, we realized that the total cost of ownership (TCO) in a public cloud -- especially if the applications are not architected for public cloud -- means we are not going to save much. So based on that epiphany, we said, “Hey, is it the right time to look at our enterprise data center and architect it in such a way that it provides cloud-like functionality and still offers flexibility in terms of how much we pay?”

Having HPE Synergy as the underlying composable infrastructure really helps with all of that. Obviously, the newer software-defined data center (SDDC) architectures are also playing a major role. So now, where the application is hosted is less of a concern, because -- thanks to the software-defined architecture and best-fit model -- we may be able to move the workloads around over time.

Gardner: Where you are on this journey? How does that extend around the world?

Multicloud, multinational

Bottome: We are going from four data centers in Texas -- two in Austin and two in Houston -- down to two, one each in Houston and Plano. We are deploying those two with full resiliency, redundancy, and disaster recovery.

Gardner: And how does that play into your global reach? How are you using hybrid IT to bring these applications to your global workforce?

Bottome: Anyone who says they are not in a multicloud environment is certainly fooling themselves. We basically are already in a multicloud environment. We have many, many platforms in other people’s clouds in addition to our core data centers. We also have, obviously, our customer relationship management (CRM) as a cloud service, and we are moving our enterprise resource planning (ERP) into another cloud.

So it's a multicloud environment, and managing that -- and changing operations to be able to support it -- is one of the things we are doing with this transformation. How do we support all of these cloud environments? We have partners along with us. We are using managed service providers (MSPs). We are very much outsourced, too. So it's a journey with them, learning how to have everything supported across all of these multiple clouds.

Ticketing transformed

Gardner: You mentioned management as being so important. Piyush, when it comes to some of the newer management capabilities we are hearing about – such as HPE OneSphere -- what have you learned along the journey so far? Do both HPE OneView and HPE OneSphere play a role as a continuum?

Agarwal: It’s difficult to get into the technology of OneView versus OneSphere. But the predictive analytics that every provider uses to support us is remarkably different, even in just the last five years.

When we were going through this request for proposal (RFP) process for MSPs for our new data center transformation and services, every provider was showing us the software and intelligence on how tickets can be closed -- even before the tickets are generated.

So that’s a huge leap from what we saw four or five years ago. Back then, the cost of play was about being in a low-cost location, because employee costs were 80 percent of the total. But new automation and intelligence in the ticketing systems are the way forward now. That’s what will drive the service efficiencies and cost reductions.
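
To make that "close tickets before they are generated" idea concrete, here is a toy sketch: fit a linear trend to a utilization series and raise work pre-emptively when the forecast crosses a threshold. The threshold, horizon, and linear model are assumptions for illustration; real MSP tooling is far more sophisticated.

```python
def forecast_breach(samples, threshold=90.0, horizon=7):
    """samples: daily utilization percentages, oldest first.
    Returns days until the fitted trend crosses threshold, or None."""
    n = len(samples)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(samples) / n
    # Least-squares slope of utilization over time.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None                      # flat or improving: nothing to do
    days = (threshold - samples[-1]) / slope
    return days if days <= horizon else None

usage = [62, 64, 67, 71, 74, 78, 81]     # percent disk used, daily
days = forecast_breach(usage)
if days is not None:
    print(f"auto-ticket: disk projected to hit 90% in {days:.1f} days")
```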


Gardner: Sharon, as you continue on your transformation journey, are you able to do more for less?

Bottome: This is actually a great success story for us. In the new data center transformation and the services transformation RFP that Piyush was mentioning, we actually are getting $50 million a year in savings every year over five years. That’s allowed us, obviously, to reinvest that money in other areas. So, yes, it's been a great success story.

We are transforming a lot of the services -- not just in the data center. It's also about how our user base will experience interacting with IT as we move to more of these management platforms with this transformation.

Gardner: How will this all help your IT operations people to be more efficient?

IT our way, with our employees 

Agarwal: When we talk about IT services, there is always a pendulum. If you go back 15 or 20 years, there used to be articles about how Solectron moved all of their IT to IBM. In 2001, there were so many of those kinds of deals.

But within one to two years, people realized how difficult it was. The success of those businesses depended not just on IT outsourcing, but on keeping the critical talent to manage the business expectations and manage the service providers.

Where we are now with HPI, over the period of the last three years, we have learned how to work in a managed services environment. What that means is how to get the best out of a supplier but still maintain the critical knowledge of the environment within our own IT.
Our own employees could therefore run the IT tomorrow with some other service provider, if we so choose. It maintains a healthy mix of relationships between the suppliers and our employees. So we haven’t gone too far right or too far left in terms of how the IT should be run from a service provider perspective.

With this transformation, that thought process was reinforced. We realized when we began this transformation process that we didn’t yet have critical mass to run our IT services internally. Over the period of the last one-and-a-half years, we have gained that critical mass back.

From the HPI IT operations team’s perspective, it brings confidence back -- versus a victim mentality of, “Oh, it’s a supplier, and the suppliers are going to do it.” Instead, we have the confidence to deliver on that accountability with our own IT employees. They are the ones driving our supplier to do the transformation, and to do the operations afterward.

Gardner: We have also seen an increase in automation, orchestration, and some very powerful tools, many of them data-driven. How have automation techniques helped you in this process of regaining and keeping control?

Automation advantages 

Agarwal: DevOps provides, on the one hand, the overall infrastructure orchestration and the agility to provision. As part of the former Hewlett-Packard Company, we always had the latest and greatest of those tools. We were a testing ground for those tools. We always relied on automated ways of provisioning -- and on quick provisioning.

If I look at that from a transformation perspective, we will continue to use those orchestration and provisioning tools. Our internal cloud is heavily reliant on such cloud service automation (CSA). For other technologies, we rely on server automation for all of the Linux and Unix platforms. We always have that mix of quick provisioning.

At the same time, we will continue to encourage our developers to encompass these infrastructure technologies in their DevOps models. We are not there yet, where the application tier integrates with the infrastructure tier to provide a true DevOps model, but I think we are going to see it in the next one to two years.
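
The property that makes that integration safe is idempotent provisioning: an application pipeline can declare its infrastructure on every run without creating duplicates. The sketch below illustrates the idea in plain Python; the in-memory store and the `ensure_vm` helper are hypothetical stand-ins for an orchestration tool's API, not HPE CSA calls.

```python
import uuid

# In-memory stand-in for the automation layer's state store; a real
# implementation would query the orchestration tool's API instead.
_PROVISIONED = {}

def ensure_vm(name, cpus, memory_gb):
    """Idempotent provisioning: declaring the same VM twice converges to
    one instance instead of creating a duplicate -- the property that
    lets application pipelines call the infrastructure tier safely."""
    existing = _PROVISIONED.get(name)
    if existing and (existing["cpus"], existing["memory_gb"]) == (cpus, memory_gb):
        return existing                       # already in desired state
    vm = {"id": str(uuid.uuid4()), "cpus": cpus, "memory_gb": memory_gb}
    _PROVISIONED[name] = vm                   # create or resize
    return vm

# A deploy pipeline can declare its infrastructure on every run.
print(ensure_vm("build-agent-01", cpus=8, memory_gb=32))
print(ensure_vm("build-agent-01", cpus=8, memory_gb=32))  # no-op, same VM
```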

Gardner: Is there a rationalization process for your data? What’s the underlying data transformation story that’s a subset of the general data center modernization story?

Agarwal: Our CIO was considered one of the most transformative in 2015; there is a Forbes article on it. As part of the 2015 separation, we undertook a couple of transformation journeys. The data center transformation was one; the other was the application transformation. Sharon mentioned that for our CRM application, we moved to Microsoft Dynamics. We are consolidating our ERP.

Application rationalization (AR) remains an ongoing exercise for us. In a true sense, we had 1,200 to 1,300 applications. We are trying to bring that down to 800. Then, there is a further reduction plan over the next two to three years. Certainly the application and data center transformations are going in parallel.

But from a data perspective -- looking at data in general or of having data totally segregated from the applications layer -- I don’t think we are doing that yet.

Given where we are in the overall application transformation journey, and the number of applications we have, segregating the data from the applications is, in my view, a later-stage efficiency. Once we have completed the data center transformation, consolidated the applications, and reduced them by as many as possible, then we will look at segregating the data layer from the applications layer.

Gardner: When you do this all properly, what other paybacks do you get? What have been some of the unexpected benefits?

Getting engaged 

Bottome: We received great financial benefits, as I mentioned. But some of the other gains are in the end-user experience. Whether it’s time-to-fix or the experience of our employees interacting with IT support, we’re seeing efficiencies there through automation. And we are going to bring a lot more efficiency to our own teams.

And one of the measurements that we have internally is an employee satisfaction measure. I found this to be very interesting. For the infrastructure organization, the IT internal personnel, the engagement score went up 40 points from before we started this transformation. You could see that not only are they getting reskilled or retooled -- we make sure we have enough of that expertise in-house -- their engagement scores went up right along with that. It helped us keep our employees very motivated and engaged.

Gardner: People like to work with modern technology more than the old stuff, is that not true?

Agarwal: Yes, for sure. I want to work with the iPhone X not iPhone 7.

Gardner: What have you learned that you could impart to others? Now, not many others are going to be doing this separation, modernization, consolidation, and application rationalization process all at the same time -- while keeping the companies operating.

But what would you tell other people who are going about application and data center modernization?

Prioritize your partners

Bottome: Pick your partner carefully. Picking the right partner is very, very important, not only the technology partner but any of the other partners along the journey with you, be it application migration or your services partners. Our services partner is DXC. And the majority of the data center is built on HPE gear, along with Arista and Brocade.

Also, make sure that you truly understand all of the other transformations that get impacted by the transformation you’re on. In all honesty, I’ve had some bumps along the way because there was so much transformation going on at once. Make sure those dependencies are fully understood.

Gardner: Piyush, what have you learned that you would impart to others?

Agarwal: It goes back to one of the earlier questions. Understand the business drivers in addition to picking your partners. Know your own level of strength at that point in time.
If we had done this a year and a half ago, the confidence level and our own capability to do it would have been different. So, picking your partner and having confidence in your own abilities are both very important.

Bottome: Thank you, Dana. It was exciting to talk about something that has been a lot of work but also a lot of satisfaction and an exciting journey.

Gardner: I’m afraid we’ll have to leave it there. You’ve been exploring with us how HP Inc. has taken four shared data centers and is transitioning to two more agile ones, with higher performance and lower costs. And we’ve learned how this data-center-of-the-future approach provides such benefits as a strategic, obsolescence-resistant design and DevOps gains, and helps people do more with less across their applications.

Please join me in thanking our guests, Sharon Bottome, Vice President and Head of Infrastructure Services at HPI. Thank you so much, Sharon.


Bottome: Thank you.

Gardner: And Piyush Agarwal, Senior Director of Infrastructure Services, also at HPI. Thank you, sir.

Agarwal: Thank you for having us.

Gardner: And a big thank you as well to our audience for joining us for this BriefingsDirect Voice of the Customer digital transformation success story discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews.

Thanks again for listening. Please pass this along to your own IT community and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how a massive corporate split led to the re-architecting and modernizing of IT to allow for the right data center choices at the right price over time. Copyright Interarbor Solutions, LLC, 2005-2018. All rights reserved.
