Wednesday, October 23, 2019

How Unisys and Microsoft Team Up to Ease Complex Cloud Adoption for Governments and Enterprises

https://www.unisys.com/offerings/cloud-and-infrastructure-services

A discussion of how public and private sector IT organizations can ease cloud adoption using cloud-native apps, services modernization, automation, and embedded best practices.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Unisys and Microsoft.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions and you’re listening to BriefingsDirect.

The path to cloud computing adoption appears complex and risky to both government and enterprise IT leaders, recent surveys show. This next BriefingsDirect managed cloud methodologies discussion explores how tackling complexity and security requirements upfront helps ease adoption of cloud architectures.

By combining managed services, security solutions, and hybrid cloud standardization, both public and private sector organizations are now making the cloud journey a steppingstone to larger business transformation success.

Stay with us now as we discover how cloud-native apps and services modernization benefit from prebuilt solutions with embedded best practices and automation.


To learn more, please join me now in welcoming our guests, Raj Raman, Chief Technology Officer (CTO) for Cloud at Unisys. Welcome, Raj.

Raj Raman: Thank you very much, Dana.

Gardner: We’re also here with Jerry Rhoads, Cloud Solutions Architect at Microsoft. Welcome, Jerry.

Jerry Rhoads: Thank you, Dana. My pleasure to be here.

Gardner: Raj, why are we still managing cloud adoption expectations around complexity and security? Why has it taken so long to make the path to cloud smoother -- and even routine?

Raman: Well, Dana, I spend quite a bit of time with our customers. A common theme we see -- be it a government agency or a commercial customer -- is that many of them are driven by organizational mandates, and getting those mandates in place often proves more challenging than one might think.

Cloud adoption challenges 

The other part is that while Amazon Web Services (AWS) or Microsoft Azure may be very easy to get on to, the question then becomes how do you scale up? Customers either have to figure out how to develop in-house capabilities or they look to a partner like Unisys to help them out.

Cloud security adoption continues to be a challenge because enterprises still try to apply traditional security practices to the cloud. Having a sound security and risk posture on AWS or Azure means having a good understanding of the shared security model across the user, application, and infrastructure layers of the cloud.

And last, but not least, a very clear mandate -- such as a digital transformation or a specific initiative with a core sponsor behind it -- often eases the focus on these issues.

These are some of the reasons we see for cloud complexity. Application transformation can also be quite arduous for many of our clients.

Gardner: Jerry, what are you seeing for helping organizations get cloud-ready? What best practices make for a smoother on-ramp?

Rhoads: One of the best practices beforehand is to determine what your endgame is going to look like. What is your overall cloud strategy going to look like?

Instead of just lifting and shifting a workload, what is the life cycle of that workload going to look like? It means a lot of in-depth planning -- whether it's a government agency or private enterprise. Once we get into the mandates, it's about, “Okay, I need this application that’s running in my on-premises data center to run in the cloud. How do I make it happen? Do I lift it and shift it or do I re-architect it? If so, how do I re-architect for the cloud?”

That’s a big common theme I’m seeing: “How do I re-architect my application to take better advantage of the cloud?”

Gardner: One of the things I have seen is that a lot of organizations do well with their proof of concepts (POCs). They might have multiple POCs in different parts of the organization. But then, getting to standardized comprehensive cloud adoption is a different beast.

Raj, how do you make that leap from spotty cloud adoption, if you will, to something more holistic?

One size doesn’t fit all

Raman: We advise customers to avoid taking it on as a one-size-fits-all exercise. For example, we have one client who is trying -- all at once -- to lift and shift thousands of applications.

Now, they did a very detailed POC and they got good results from it. But when it came to the actual migration and transformation, they were convinced and felt confident that they could take it on en masse, with thousands of applications.

The thing is, in trying to do that, not all applications are the same size. One needs a phased approach for doing application discovery and application assessment. Then, based on that, you can determine which applications are well worth the effort [to move to a cloud].

So we recommend to customers that they think of migrations as a phased approach. Be very clear in terms of what you want to accomplish. Start small, gain the confidence, and then have a milestone-based approach of accomplishing it all.
Gardner: These mandates are nonetheless coming down from above. For the US federal government, for example, cloud has become increasingly important. We are expecting somewhere in the vicinity of $3.3 billion to be spent on federal cloud in 2021. Upward of 60 percent of federal IT executives are looking toward modernization. They have both cloud and security issues to face. Private sector companies are also seeing mandates to rapidly become cloud-native and cloud-first.

Jerry, when you have that pressure on an IT organization -- but you have to manage the complexity of many different kinds of apps and platforms -- what do you look for from an alliance partner like Unisys to help make those mandates come to fruition?

Rhoads: In working with partners such as Unisys, they know the customer. They are there on the ground with the customer. They know the applications. They hear the customers. They understand the mandates. We also understand the mandates and we have the cloud technology within Azure. Unisys, however, understands how to take our technology and integrate it in with their end customer’s mission.

Gardner: And security is not something you can just bolt on, or think of, after the fact in such migrations. Raj, are we seeing organizations trying to both tackle cloud adoption and improve their security? How do Unisys and Microsoft come together to accomplish those both as a tag team rather than a sequence, or even worse, a failure?

Secure your security strategy

Raman: We recently conducted a survey of our stakeholders, including some of our customers. And, to no surprise, security -- be it as part of the migrations or in scaling up their current cloud initiatives -- is by far a top area of focus and concern.

We are already partnering with Microsoft and others with our flagship security offering, Unisys Stealth. We are not just collaborating but leapfrogging in terms of innovation. The Azure cloud team has released a specific API to make products like Stealth available. This now gives customers more choice, and it allows Unisys to meet customers where they are.

Also, earlier this year we worked very closely with the Microsoft cloud team to release Unisys CloudForte for Azure. These are foundational elements that help both governments as well as commercial customers leverage Azure as a platform for doing their digital transformation.

The Microsoft team has also stepped up and worked very closely with the Unisys team developers and architects to make these services native on Azure, as well as help customers understand how they can better consume Azure services.

Those are very specific examples in which we see the Unisys collaboration with Microsoft scaling really well.

Gardner: Jerry, it is, of course, about more than just the technology. These are, after all, business services. So whether a public or private organization is making the change to an operations model -- paying as you consume and budgeting differently -- financially you need to measure and manage cloud services differently.

How is that working out? Why is this a team sport when it comes to adopting cloud services as well as changing the culture of how cloud-based business services are consumed?

Keep pay-as-you-go under control 

Rhoads: One of the biggest challenges I hear from our customers is around going from a CAPEX model to an OPEX model. They don’t really understand how it works.

CAPEX is a longtime standard -- here is the price and here is how long it is good for until you have to re-up and buy a new piece of hardware, or re-up the license, or whatnot. Using cloud, it's pay-as-you-go.

If I launch 400 servers for an hour, I’m paying for 400 virtual machines running for one hour. So if we don’t have a governance strategy in place to stop something like that, we can wind up going through one year's worth of budget in 30 days -- if it's not governed, if it's not watched.

And that's why, for instance, working with Unisys CloudForte there are built-in controls whereby you can go through and ping the Azure cloud backend -- such as Azure Cost Management or our Cloudyn product -- to see what your current charges are, as well as forecast what those charges are going to look like. Then you can get ahead of the eight ball, if you will, to make sure that you are burning through your budget correctly -- versus getting a surprise at the end of the month.
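To make that kind of backend cost check concrete, here is a minimal sketch that pulls month-to-date charges for a subscription through the Azure Cost Management query REST endpoint. It assumes a bearer token and subscription ID are already available in environment variables, and the API version and response handling should be verified against current Azure documentation -- this illustrates the pattern, not CloudForte code.

```python
# Minimal sketch: query month-to-date actual cost for a subscription via the
# Azure Cost Management REST API. AZURE_SUBSCRIPTION_ID and AZURE_BEARER_TOKEN
# are assumed to be supplied by the caller; this is illustrative only.
import os
import requests

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
token = os.environ["AZURE_BEARER_TOKEN"]  # e.g. obtained via azure-identity or the az CLI

scope = f"subscriptions/{subscription_id}"
url = (f"https://management.azure.com/{scope}"
       "/providers/Microsoft.CostManagement/query?api-version=2019-11-01")

payload = {
    "type": "ActualCost",
    "timeframe": "MonthToDate",
    "dataset": {
        "granularity": "Daily",
        "aggregation": {"totalCost": {"name": "PreTaxCost", "function": "Sum"}},
    },
}

resp = requests.post(url, json=payload, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

rows = resp.json()["properties"]["rows"]
month_to_date = sum(row[0] for row in rows)  # first column holds the cost aggregate in this query shape
print(f"Month-to-date spend: {month_to_date:.2f}")
```

Run daily against each subscription, a check like this is what lets a team forecast against budget instead of discovering the overrun at month end.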

Gardner: Raj, how should organizations better manage that cultural shift around cloud consumption governance?

Raman: Adding to Jerry’s point, we see three dimensions to help customers. One is what Unisys calls setting up a clear cloud architecture, the foundations. We actually have an offering geared around this. And, again, we are collaborating with Microsoft on how to codify those best practices.

In going to the cloud, we see five pillars that customers have to contend with: cost, security, performance, availability, and operations. Each of these can be quite complex and very deep.


Rather than have customers figure these out themselves, we have combined a product and a framework. We have codified it, saying, “Here are the top 10 best practices you need to be aware of in terms of cost, security, performance, availability, and operations.”

It makes it very easy for the Unisys consultants, architects, and customers to understand at any given point -- be it pre-migration or post-migration -- that they have clear visibility on where they stand for their review on cost in the cloud.
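As a purely hypothetical illustration of what codifying such pillar checks can look like -- none of these fields or functions correspond to CloudForte or Azure APIs -- a review harness can be as simple as a table of named checks evaluated against a workload description:

```python
# Hypothetical sketch of codified best-practice checks across the five pillars
# named above (cost, security, performance, availability, operations).
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Workload:
    monthly_budget_usd: float
    forecast_usd: float
    encrypts_data_at_rest: bool
    autoscaling_enabled: bool
    zone_redundant: bool
    has_runbook: bool

CHECKS: Dict[str, List[Tuple[str, Callable[[Workload], bool]]]] = {
    "cost":         [("forecast within budget", lambda w: w.forecast_usd <= w.monthly_budget_usd)],
    "security":     [("data encrypted at rest", lambda w: w.encrypts_data_at_rest)],
    "performance":  [("autoscaling enabled",    lambda w: w.autoscaling_enabled)],
    "availability": [("zone redundant",         lambda w: w.zone_redundant)],
    "operations":   [("runbook documented",     lambda w: w.has_runbook)],
}

def review(workload: Workload) -> None:
    """Print a pre- or post-migration review across all five pillars."""
    for pillar, checks in CHECKS.items():
        for name, check in checks:
            print(f"[{pillar:>12}] {name}: {'PASS' if check(workload) else 'FAIL'}")

review(Workload(monthly_budget_usd=5000, forecast_usd=4600, encrypts_data_at_rest=True,
                autoscaling_enabled=True, zone_redundant=False, has_runbook=True))
```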

We are also thinking about security and compliance upfront -- not as an afterthought. Oftentimes customers go deep into the journey and they realize they may not have the controls and the security postures, and the next thing you know they start to lose confidence.

So rather than wait for that, the thinking is we arm them early. We give them the governance and the policies on all things security and compliance. And Azure has very good capabilities toward this.


The third bit, and Jerry touched on this, is overall financial governance. That is the ability to think about cost -- not just as a matter of spinning a few Azure resources up and down -- but in a holistic way, within a governance model. That way you can break it down in terms of analyzed and utilized resources. You can do chargebacks and governance and gain the ability to optimize cost on an ongoing basis.

These are distinctive foundational elements that we are trying to arm customers with, to make them a lot more comfortable and to build trust, as well as process, around cloud adoption.

Gardner: The good news about cloud offerings like Azure and hybrid cloud offerings like Azure Stack is you gain a standardized approach. Not necessarily one-size-fits-all, but an important methodological and technical consistency. Yet organizations are often coming from a unique legacy, with years and years of everything from mainframes to n-tier architectures, and applications that come and go.

How do Unisys and Microsoft work together to make the best of standardization for cloud, but also recognize specific needs that each organization has?

Different clouds, same experience

Rhoads: We have Azure Stack for on-premises Azure deployments. We also have Azure Commercial Cloud, as well as Azure Government Cloud and Department of Defense (DoD) Cloud. The good news is that they use the same portal, same APIs, same tooling, and same products and services across all of these clouds.

Now, as services roll out, they roll out in our Commercial Cloud first, and then we will roll them out into Azure Government as well as into Azure Stack. But, again, the good news is these products are available, and you don’t have to do any special configuration or anything in the backend to make it work. It’s the same experience regardless of which product the customer wants to use.
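Jerry's point about the same APIs and tooling can be shown with a short sketch using the Azure SDK for Python: the management code is identical, and only the authority and Resource Manager endpoints change per cloud. The endpoint values and the credential_scopes parameter are assumptions to verify against your SDK version and, for Azure Stack, your operator's documentation.

```python
# Sketch: the same management-plane code targets Azure Commercial or Azure
# Government (and, with its own ARM endpoint, Azure Stack) -- only the
# endpoints change. Endpoint values below are assumptions to verify.
from azure.identity import AzureAuthorityHosts, ClientSecretCredential
from azure.mgmt.resource import ResourceManagementClient

CLOUDS = {
    "commercial": {
        "authority": AzureAuthorityHosts.AZURE_PUBLIC_CLOUD,
        "arm": "https://management.azure.com",
    },
    "government": {
        "authority": AzureAuthorityHosts.AZURE_GOVERNMENT,
        "arm": "https://management.usgovcloudapi.net",
    },
    # An Azure Stack region exposes its own ARM endpoint; get it from the operator.
}

def list_resource_groups(cloud, tenant_id, client_id, client_secret, subscription_id):
    cfg = CLOUDS[cloud]
    cred = ClientSecretCredential(tenant_id, client_id, client_secret,
                                  authority=cfg["authority"])
    client = ResourceManagementClient(
        cred, subscription_id,
        base_url=cfg["arm"],
        credential_scopes=[cfg["arm"] + "/.default"],  # token audience per cloud
    )
    return [rg.name for rg in client.resource_groups.list()]
```

The function body never changes between clouds; that is the "same skill set, same tooling" point in code form.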

What’s more, Unisys CloudForte works with Azure Stack, with Commercial, and with Azure for Government. For the end customer it's the same cloud services that they expect to use. The difference really is just where those cloud services live: with Azure Stack on-premises, on a cruise ship or in a mine; with Azure Commercial Cloud; or, if you need a regulated workload such as a FedRAMP High or a DoD IL4 or IL5 workload, with Azure Government. But there are no different skills required to use any of those clouds.

Same skill set. You don’t have to do any training, it’s the same products and services. And if the products and services aren't in that region, you can work with Unisys or myself to engage the product teams to put those products in Azure Stack or in Azure for Government.

Gardner: How does Unisys CloudForte managed services complement these multiple Azure cloud environments and deployment models?

Rhoads: CloudForte really further standardizes it. There are different levels of CloudForte, for instance, and the underlying cloud really doesn’t matter, it’s going to be the same experience to roll that out. But more importantly, CloudForte is really an on-ramp. A lot of times I am working with customers and they are like, “Well, gee, how do I get started?”

Whether it's setting up that subscription and tenant, getting them on board with that, or working out how to roll out that POC -- that's where we leverage Unisys and CloudForte as the on-ramp to roll out that first POC. And that's whether the POC is a bare-bones Azure virtual network or a complete soup-to-nuts application with application services wrapped around it. CloudForte and Unisys can provide that functionality.

Do it your way, with support 

Raman: Unisys CloudForte has been designed as an offering on top of Azure. There are two key themes. One, meet customers where they are. It's not about what Unisys is trying to do or what Azure is trying to do; it's about, first and foremost, being customer obsessed. And two, help customers do things on their terms and do it the right way.

So CloudForte has been designed to meet those twin objectives. The way we go about doing it is -- imagine, if you will, a flywheel. The flywheel has four parts. One, the whole consumption part, which is the ability to consume Azure workloads at any given point.
Next is the ability to run commands, or the operations piece. Then you follow that up with the ability to accelerate transformations, so data migrations or app modernization.

Last, but not least, is to transform the business itself, be it on a new technology, artificial intelligence (AI), machine learning (ML), blockchain, or anything that can wrap on top of Azure cloud services.

The beauty of the model is a customer does not have to buy all of these en masse; they could be fitting into any of this. Some customers come and say, “Hey, we just want to consume the cloud workloads, we really don’t want to do the whole transformation piece.” Or some customers say, “Thank you very much, we already have the basic consumption model outlined. But can you help us accelerate and transform?”

So the ability to provide flexibility on top of Azure helps us to meet customers where they are. That’s the way CloudForte has been envisioned, and a key part of why we are so passionate and bullish in working with Microsoft to help customers meet their goals.

Gardner: We have talked about technology, we have talked about process, but of course people and human capital and resources of talent and skills are super important as well. So Raj, what does the alliance between Unisys and Microsoft do to help usher people from being in traditional IT to be more cloud-native practitioners? What are we doing about the people element here?

Expert assistance available

Raman: In order to be successful, one of the big focus areas at Unisys is to arm and equip our own people -- be it at the consulting level or the sales-facing level, doing cloud architectures or doing cloud delivery -- across the rank and file. There is an absolute mandate to increase the number of certifications, especially the Azure certifications.

In fact, I can also share that at Unisys, as we speak, every month we are doubling the number of people who hold the AZ-300 and AZ-900 certifications. These are the two popular certifications across the whole Azure stack. We now have north of 300 trained people -- and my number may be at the lower end. We expect that number to double.

So we have absolute commitment, because customers look to us to not only come in and solve the problems, but to do it with the level of expertise that we claim. So that’s why our commitment to getting our people trained and certified on Azure is a very important piece of it.

Gardner: One of the things I love to do is to not just tell, but to show. Do we have examples of where the Unisys and Microsoft alliance -- your approach and methodologies to cloud adoption, tackling the complexity, addressing the security, and looking at both the unique aspect of each enterprise and their skills or people issues -- comes all together? Do you have some examples?

Raman: The California State University is a longstanding customer of ours, a good example where they have transformed their own university infrastructure using Unisys CloudForte with a specific focus on all things hybrid cloud. We are pleased to see that not only is the customer happy but they are quite eager to get back to us in terms of making sure that their mandates are met on a consistent basis.

Our federal agencies are usually reluctant to be in the spotlight. That said, what I can share are representative examples. We have some very large defense establishments working with us. We have some state agencies close to the Washington, DC area, agencies responsible for the roll-out of cloud consumption across the mandates.

We are well on our way in not only working with the Microsoft Azure cloud teams, but also with state agencies. Each of these agencies is citywide or region-wide, and within that they have a health agency or an agency focused on education or social services.

In our experience, we are seeing an absolute interest in adopting the public clouds for them to achieve their citizens’ mandates. So those are some very specific examples.

Gardner: Jerry, when we look to both public and private sector organizations, how do you know when you are doing cloud adoption right? Are there certain things you should look to, that you should measure? Obviously you would want to see that your budgets are moving from traditional IT spending to cloud consumption. But what are the metrics that you look to?

The measure of success 

Rhoads: One of the metrics that I look at is cost. You may do a lift and shift and maybe you are a little bullish when you start building out your environments. When you are doing cloud adoption right, you should see your costs start to go down.

So your consumption will go up, but your costs will go down. That's because you are taking advantage of platform as a service (PaaS) in the cloud and auto-scaling out, or you are moving to, say, Kubernetes, using things like Docker containers, shutting down those big virtual machines (VMs) and clusters of VMs, and then running your Kubernetes services on top of them.

When you see those costs go down and your services going up, that’s usually a good indicator that you are doing it right.
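As one concrete example of the auto-scaling Jerry describes, the sketch below uses the Kubernetes Python client to attach a Horizontal Pod Autoscaler to an existing Deployment, so capacity follows demand instead of sitting idle in fixed VMs. The Deployment name, namespace, and thresholds are placeholders.

```python
# Minimal sketch: let Kubernetes scale a Deployment between 2 and 20 replicas
# based on CPU load, so you stop paying for idle capacity. The "web" Deployment
# and the thresholds are placeholders for illustration.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"),
        min_replicas=2,
        max_replicas=20,
        target_cpu_utilization_percentage=60,  # scale out above 60% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
print("HPA created: 'web' now scales with demand rather than running fixed VMs.")
```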

Gardner: Just as a quick aside, Jerry, we have also seen that Microsoft Azure is becoming very container- and Kubernetes-oriented, is that true?

Rhoads: Yes, it is. We actually have Brendan Burns, as a matter of fact, who was one of the co-creators of Kubernetes during his time at Google.

Gardner: Raj, how do you know when you are getting this right? What do you look to as chief metrics from Unisys's perspective when somebody has gone beyond proof of concept and they are really into a maturity model around cloud adoption?

Raman: One of the things we take very seriously is our mandate to customers to do cloud on your terms and do it right. And what we mean by that is something very specific, so I will break it in two.

One is from a customer-led metric perspective. We take our Net Promoter Score very seriously, and we have one of the highest in the industry relative to the rest of our competition. That's something that's hard-earned, but we keep striving to raise the bar on how our customers talk to each other and how they feel about us.

The other part is the ability to retain customers, so retention. So those are two very specific customer-focused benchmarks.

Now, building upon some of the examples that Jerry was talking about, from a cloud metric perspective -- besides cost and cost optimization -- we also look at some very specific metrics, such as how many net-new workloads are under management and what net-new services are being launched. We are especially curious to see whether there is a focus on Kubernetes or AI and ML adoption, and whether there are any trends toward that.

One of the very interesting ones that I will share, Dana, is that some of our customers are starting to come and ask us, “Can you help set up an Azure Cloud center of excellence within our organization?” So that oftentimes is a good indicator that the customer is looking to transform the business beyond the initial workload movement.

And last, but not least, is training, and an absolute commitment to getting their own organization to become more cloud-native.

Gardner: I will toss another one in, and I know it's hard to get organizations to talk about it: fewer security breaches, and fewer days or instances of downtime because of a ransomware attack. You can't always prove why you didn't get attacked, but certainly a better security posture compared with two or three years ago would be a high indicator on my map as to whether cloud is being successful for you.

All right, we are almost out of time, so let's look to the future. What comes next when we get to a maturity model, when organizations are comprehensive, standardized around cloud, have skills and culture oriented to the cloud regardless of their past history? We are also of course seeing more use of the data behind the cloud, in operations and using ML and AI to gain AIOps benefits.

Where can we look to even larger improvements when we employ and use all that data that’s now being generated within those cloud services?

Continuous cloud propels the future 

Raman: One thing that's very evident to us, as customers come to us and use the cloud at significant scale, is that it is very hard for any one organization -- even for Unisys, we see this -- to scale up and keep pace with the rate of change that cloud platform vendors such as Azure are bringing to the table. They are all good innovations, but how do you keep on top of that?

So that's where a focus on what we are calling “AI-led operations” is becoming very important for us. It's about the ability to look at the operational data and help customers go from a reactive, hindsight-led model to a more proactive, foresight-driven model. That can then guide not only their cloud operations, but also help them think about where they can leverage this data and use that Azure infrastructure to launch more innovation or new business mandates. That's where the AIOps piece, the AI-led operations piece, kicks in.
There is a reason why cloud is called continuous. You gain the ability to have continuous visibility into compliance and security, and to have constant optimization -- reviewing the cloud workloads on a constant basis and making sure that their architectures are being assessed against Azure best practices.

And then last, but not least, one other trend I would surface, Dana, as a part of this: we are starting to see an increase in conversational bots. Many customers are interested in getting to a self-service mode. That's where we see conversational bots built on Azure or Cortana becoming more mainstream.

Gardner: Jerry, how do organizations recognize that the more cloud adoption and standardization they have, the more benefits they will get in terms of AIOps -- and that a virtuous adoption pattern kicks in?

Rhoads: To expand on what Raj talked about with AIOps, we have built a lot of AI into our products and services. One example is Advanced Threat Protection (ATP) on Azure. Another is the anti-phishing mechanisms that are deployed in Office 365.

So as more folks move into the cloud, we are seeing a lot of adoption around these products and services. We are also able to bring in a lot of feedback and do a lot of learning off of the behaviors we are seeing, to make the products even better.

DevOps integrated in the cloud 

So one of the things that I do in working with my customers is DevOps -- how do we employ DevOps in the cloud? A lot of folks are doing DevOps on-premises, and they are doing it from an application point of view: I am rolling out my application on infrastructure that is either virtualized or physical, sitting in my data center. How do I do that in the cloud, and why do I do that in the cloud?

Well, in the cloud everything is software, including infrastructure. Yes, it sits on a server at the end of the day; however, it is software-defined, and because it is software-defined, it has an API and I can write code against it. So if I want to roll out a suite of VMs, or roll out Kubernetes clusters and put my application on top of them, I can create definable, repeatable code, if you will, that I can check into a repository someplace, press the button, and roll out that infrastructure and put my application on top of it.

So now, deploying applications with DevOps in the cloud, it's not about having an operations team and then a DevOps team that rolls out the application on top of existing infrastructure. Instead, I bundle it all together. I have tighter integration, which means I now have repeatable deployments. And instead of doing deployments every quarter or annually, I can do 20, 30, or 1,000 a day if I like -- if I do it right.
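A minimal sketch of that "repeatable code you check into a repository" idea, using the Azure SDK for Python to deploy an ARM template: the subscription ID, names, and template here are placeholders, and the exact SDK calls should be checked against your azure-mgmt-resource version.

```python
# Sketch of infrastructure as code: deploy an ARM template (here, one storage
# account) into a resource group from Python. Names, location, and template are
# placeholders; in practice the template JSON lives in source control.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import Deployment, DeploymentProperties

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

client.resource_groups.create_or_update("demo-rg", {"location": "eastus"})

template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
        "type": "Microsoft.Storage/storageAccounts",
        "apiVersion": "2019-06-01",
        "name": "demostorage12345",      # placeholder; must be globally unique
        "location": "eastus",
        "sku": {"name": "Standard_LRS"},
        "kind": "StorageV2",
    }],
}

poller = client.deployments.begin_create_or_update(
    "demo-rg", "demo-deployment",
    Deployment(properties=DeploymentProperties(mode="Incremental", template=template)))
poller.wait()  # "press the button" and the same infrastructure rolls out every time
print("Deployment state:", poller.result().properties.provisioning_state)
```

Because the template and the script are plain files, they can sit in the same repository as the application and run on every pipeline execution, which is what makes 20, 30, or 1,000 deployments a day practical.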

Gardner: I’m afraid we will have to leave it there. You have been listening to a sponsored BriefingsDirect discussion on improving the path to rapid and routine cloud computing adoption. And we have learned how an alliance between Unisys and Microsoft combines managed services, security solutions and hybrid cloud standardization to usher both public and private sector organizations to larger business transformation goals.


So please join me in thanking our guests, Raj Raman, CTO for Cloud at Unisys. Thank you, Raj.

Raman: Thank you, Dana.

Gardner: We have also been joined by Jerry Rhoads, Cloud Solutions Architect at Microsoft. Thank you, Jerry.

Rhoads: Thank you, Dana.

Gardner: And a big thank you as well to our audience for joining this BriefingsDirect cloud computing adoption methodologies discussion.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of Unisys- and Microsoft-sponsored BriefingsDirect discussions.

Thanks again for listening. Please pass this along to your community -- and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Unisys and Microsoft.

A discussion of how cloud-native apps and services modernization benefit from prebuilt solutions with embedded best practices and automation to ease cloud adoption. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.


Friday, October 11, 2019

How Containers are Becoming the New Basic Currency for Pay as You Go Hybrid IT

https://www.cio.com/article/3434010/more-enterprises-are-using-containers-here-s-why.html

A discussion on how IT operators are now looking to increased automation, orchestration, and compatibility benefits to further exploit containers as a mainstay across their next-generation hybrid IT estate.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Innovator podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest insights into container disruption trends and optimal use strategies.

Container-based deployment models have rapidly gained popularity across a full spectrum of hybrid IT architectures -- from edge, to cloud, to data center. IT operators are now looking to increased automation, orchestration, and compatibility benefits to further exploit containers as a mainstay across their next-generation hybrid IT estate.

Stay with us as we examine innovative container use cases and the escalating benefits that come from broad container use with Robert Christiansen, Evangelist in the Office of the Chief Technology Officer at Hewlett Packard Enterprise (HPE). Welcome back, Robert.

Robert Christiansen: Thank you for having me, Dana.


Gardner: Containers are being used in more ways and in more places. What was not that long ago a fringe favorite for certain developer use cases is becoming broadly disruptive. How disruptive has the container model become?

Christiansen: Well, it’s the new change in the paradigm. We are looking to accelerate software releases. This is the Agile motion. At the end of the day, software is consuming the world, and if you don’t release software quicker -- with more features more often -- you are being left behind.

The mechanism to do that is to break them out into smaller consumable segments that we can manage. Typically that motion has been adopted at the public cloud level on containers, and that is spreading into the broader ecosystem of the enterprises and private data centers. That is the fundamental piece -- and containers have that purpose.

Gardner: Robert, users are interested in that development and deployment advantage, but are there also operational advantages, particularly in terms of being able to move workloads more freely across multiple clouds and hybrid clouds?

Christiansen: Yes, the idea is twofold. First off was to gain agility and motion, and then people started to ask themselves, “Well, I want to have choice, too.” So as we start abstracting away the dependencies of what runs a container, such as a very focused one that might be on a particular cloud provider, I can start saying, “Hey, can I write my container platform services and APIs to be compatible across multiple platforms? How do I go between platforms? How do I go between on-premises and the public cloud?”

Gardner: And because containers can be tailored to specific requirements needed to run a particular service, can they also extend down to the edge and in resource-constrained environments?

Adjustable containers add flexibility 

Christiansen: Yes, and more importantly, they can adjust to sizing issues, too. So think about pushing a container that’s very lightweight into a host that needs to have high availability of compute but may be limited on storage.

There are lots of different use cases. Collapsing the virtualization down to the app is what a container really does: it virtualizes app components, app parts, and dependencies. You only deploy the smallest bit of code needed to execute that one thing. That works in niche uses like a hospital, telecommunications on a cell tower, in an automobile, on the manufacturing floor, or if you want to do multiple pieces inside a cloud platform that services a large telco. However you structure it, that's the type of flexibility containers provide.
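A tiny sketch of that "smallest bit of code" idea, assuming the Docker SDK for Python and a local Docker daemon; the image, command, and resource caps are placeholders chosen to mimic a constrained edge host.

```python
# Minimal sketch: run just the piece of code you need, with tight resource
# limits, the way a constrained edge host (cell tower, vehicle, factory floor)
# would demand. Image and limits are illustrative placeholders.
import docker

client = docker.from_env()

output = client.containers.run(
    "python:3.9-alpine",                 # small base image: only what the code needs
    ["python", "-c", "print('sensor reading processed')"],
    mem_limit="64m",                     # cap memory for a constrained host
    nano_cpus=500_000_000,               # half a CPU
    remove=True,                         # nothing left behind after it runs
)
print(output.decode().strip())
```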

Gardner: And we know this is becoming a very popular model, because the likes of VMware, the leader in virtualization, is putting a lot of emphasis on containers. They don’t want to lose their market share, they want to be in the next game, too. And then Microsoft with Azure Stack is now also talking about containers -- more than I would have expected. So that’s a proof-point, when you have VMware and Microsoft agreeing on something.

Christiansen: Yes, that was really interesting actually. I just saw this little blurb that came up in the news about Azure Stack switching over to a container platform, and I went, “Wow!” Didn’t they just put in three- or five-years’ worth of R and D? They are basically saying, “We might be switching this over to another platform.” It’s the right thing to do.
And no one saw Kubernetes coming, or maybe OpenShift did. But the reality now is containers suddenly came out of nowhere. Adoption has been there for a while, but it’s never been adopted like it has been now.

Gardner: And Kubernetes is an important part because it helps to prevent complexity and sprawl from getting out of hand. It allows you to have a view over these different, disparate deployment environments. Tell us why Kubernetes is an accelerant to container adoption.

Christiansen: Kubernetes fundamentally is an orchestration platform that allows you to take containers and put them in the right place, manage them, shut them down when they are not working or having behavior problems. We need a place to orchestrate, and that’s what Kubernetes is meant to be.

It basically displaced a number of other private -- what we call opinionated -- orchestrators. There were a number of them out there being worked on. And then Google released Kubernetes, which was fundamentally the platform they had been running their world on for 10 years. They are doing for this ecosystem what the Android system did for cell phones: they released and open sourced the operating model, which is an interesting move.
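To make the orchestration role concrete, here is a minimal sketch with the Kubernetes Python client: you declare the desired state -- three replicas and a liveness probe -- and Kubernetes handles placement, restarts misbehaving containers, and replaces lost ones. The image, port, and names are placeholders.

```python
# Sketch: declare a Deployment and let the orchestrator keep it healthy.
from kubernetes import client, config

config.load_kube_config()

container = client.V1Container(
    name="web",
    image="nginx:1.21",
    ports=[client.V1ContainerPort(container_port=80)],
    liveness_probe=client.V1Probe(          # restart the container if this check fails
        http_get=client.V1HTTPGetAction(path="/", port=80),
        initial_delay_seconds=5,
        period_seconds=10,
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,                          # Kubernetes keeps three copies running
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```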

Gardner: It’s very rapidly become the orchestration mechanism to rule them all. Has Kubernetes become a de facto industry standard?

Christiansen: It really has. We have not seen a technology platform gain acceptance in the ecosystem as fast as this. I personally in all my years or decades have not seen something come up this fast.


Gardner: I agree, and the fact that everyone is all-in is very powerful. How far will this orchestration model go? Beyond containers, perhaps into other aspects of deployment infrastructure management?

Christiansen: Let's examine the next step. It could be a code snippet. Or if you are familiar with functions, or with Amazon Web Services (AWS) Lambda [serverless functions], you are talking about that. That would be the next step of orchestration -- it allows anyone to just run code, and only code. I don't care about a container. I don't care about the networking. I don't care about any of that stuff -- just execute my code.
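A minimal sketch of that "just execute my code" model, written as an AWS Lambda-style Python handler; the event shape is a placeholder, and the platform, not the developer, supplies the container, host, and networking.

```python
# Sketch of the serverless model: ship only the function. There is no container,
# VM, or network definition here -- the platform provides all of that.
import json

def handler(event, context):
    """Entry point the serverless platform invokes; you never manage the host."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

if __name__ == "__main__":
    # Local smoke test; in production the cloud platform calls handler() for you.
    print(handler({"name": "Dana"}, None))
```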

Gardner: So functions-as-a-service (FaaS) and serverless get people used to that idea. Maybe you don’t want to buy into one particular serverless approach versus another, but we are starting to see that idea of much more flexibility in how services can be configured and deployed -- not based on a platform, an OS, or even a cloud.

Containers’ tipping point 

Christiansen: Yes, you nailed it. If you think about the stepping stones to get across this space, it’s a dynamic fluid space. Containers are becoming, I bet, the next level of abstraction and virtualization that’s necessary for the evolution of application development to go faster. That’s a given, I think, right now.

Malcolm Gladwell talked about tipping points. Well, I think we have hit the tipping point on containers. This is going to happen. It may take a while before the ecosystem changes over. If you put the strategy together, if you are a decision-maker, you are making decisions about what to do. Now your container strategy matters. It matters now, not a year from now, not two years from now. It matters now.

Gardner: The flexibility that containers and Kubernetes give us refocuses the emphasis of how to do IT. It means that you are going to be thinking about management and you are going to be thinking about economics and automation. As such, you are thinking at a higher abstraction than just developing and deploying applications and then attaching and integrating data points to them.
How does this higher abstraction of managing a hybrid estate benefit organizations when they are released from the earlier dependencies?

Christiansen: That's a great question. I believe we are moving into a state where you cannot run platforms with manual systems, or ticketing-based systems -- that type of thing. You cannot do that, right? We have so many systems and so much interoperability between the systems that there has to be some sort of autonomic or artificial intelligence (AI)-based platform that will make the workflow move for you.


There will still be someone to make a decision. Let’s say a ticket goes through, and it says, “Hey, there is the problem.” Someone approves it, and then a workflow will naturally happen behind that. These are the evolutions, and containers allow you to continue to remove the pieces that cause you problems.

Right now we have big, hairy IT operations problems. We have a hard time nailing down where they are. The more you can break these apart and look at the hotspots -- the areas that have problems -- the more specifically you can focus on solving them. Then you can start using some intelligence behind it, some actual workload intelligence, to make that happen.

Gardner: The good news is we have lots of new flexibility, with microservices, very discrete approaches to combining them into services, workflows, and processes. The bad news is we have all that flexibility across all of those variables.

Auspiciously we are also seeing a lot of interest and emphasis in what's called AIOps, AI-driven IT operations. How do we now rationalize what containers do, but keep that from getting out of hand? Can we start using programmatic and algorithmic approaches? What are you seeing when we combine AIOps and containers?

Simplify your stack 

Christiansen: That’s like what happens when I mix oranges with apples. It’s kind of an interesting dilemma. But I can see why people want to say, “Hey, how does my container strategy help me manage my asset base? How do I get to a better outcome?”

One reason is because these approaches enable you to collapse the stack. When you take complexity out of your stack -- meaning, what are the layers in your stack that are necessary to operate in a given outcome -- you then have the ability to remove complexity.

We are talking about dropping the containers all the way to bare metal. And if you drop to bare metal, you have taken not only cost out of the equation, you have taken some complexity out of the equation. You have taken operational dependencies out of the equation, and you start reducing those. So that’s number one.

Number two is you have to have some sort of service mesh across this thing. With containers come a whole bunch of little hotspots all over the place, and a service manager must know where those little hotspots are. If you don't have an operating model that's intelligent enough to know where those are -- that's called a service mesh, where they are connected to all these things -- you are not going to have autonomous behaviors on top of that to help you.

So yes, you can connect the dots now between your containers to get autonomous behavior, but you have to have that layer in between that tells you where the problems are -- and then you have intelligence above that which decides how to handle it.

Gardner: We have been talking, Robert, at an abstract level. Let’s go a bit more to the concrete. Are there use cases examples that HPE is working through with customers that illustrate the points we have been making around containers and Kubernetes?

Practice, not permanence 

Christiansen: I just met with the team, and they are working with a client right now, a very large Fortune 500 company, where they are applying the exact functions that I just talked to you about.

First thing that needed to happen is a development environment where they are actually developing code in a solid continuous integration, continuous development, and DevOps practice. We use the word “practice,” it’s like medicine and law. It’s a practice, nothing is permanent.

So we put that in place for them. The second thing is they're trying to release code at speed. This is the first goal. Once you start releasing code at speed, with containers as the mechanism by which you are doing it, then you start saying, “Okay, now the platform I'm dropping onto is going through development, quality assurance, integration, and then finally into production.”

By the time you get to production, you need to know how you’re managing your platform. So it’s a client evolution. We are in that process right now -- from end-to-end -- to take one of their applications that is net new and another application that’s a refactor and run them both through this channel.
Now, most clients we engage with are in that early stage. They're doing proofs of concept. There are a couple of companies out there that have very large Kubernetes installations in production, but they are born-in-the-cloud companies. And those companies have an advantage: they can build that whole thing I just talked about from scratch. But 90 percent of the people out there today -- those with what I call the crown jewels of applications -- have to deal with legacy IT. They have to deal with what's going on today; their data sources have gravity; they still have to deal with that existing infrastructure.

Those are the people I really care about. I want to give them a solution that goes to that agile place. That’s what we’re doing with our clients today, getting them off the ground, getting them into a container environment that works.

Gardner: How can we take legacy applications and deployments and then use containers as a way of giving them new life -- but not lifting and shifting?

Improve past, future investments 

Christiansen: Organizations have to make some key decisions on investment. This is all about where the organization is in its investment lifecycle. Which ones are they going to make bets on, and which ones are they going to build new?

We are involved with clients going through that process. What we say to them is, “Hey, there is a set of applications you are just not going to touch. They are done. The best we can hope for is put the whole darn thing in a container, leave it alone, and then we can manage it better.” That’s about cost, about economics, about performance, that’s it. There are no release cycles, nothing like that.

The next set are legacy applications where I can do something to help. Maybe I can take a big, beefy application and break it into four parts, make a service group out of it. That's called a refactor. That gives them a little bit of agility because they can release code for each segment separately.

And then there are the applications that we are going to rewrite. These are dependent on what we call app entanglement. They may have multiple dependencies on other applications to give them data feeds and connections that are probably services. They have API calls, or direct calls right into them, that allow them to do this and that. There are all sorts of middleware, and it's just a gnarly mess.

If you try to move those applications to public cloud and try to refactor them there, you introduce what I call data gravity issues or latency issues. You have to replicate data. Now you have all sorts of cost problems and governance problems. It just doesn’t work.

You have to keep those applications in the data centers. You have to give them a platform to do it there. And if you can't give it to them there, you have a real problem. What we try to do is break those applications into parts, in ways where the teams can work with cloud-native methodologies -- like they are doing in public cloud -- but they are doing it on-premises. That's the best way to get it done.

Gardner: And so the decision about on-premises or in a cloud, or to what degree a hybrid relationship exists, isn’t so much dependent upon cost or ease of development. We are now rationalizing this on the best way to leverage services, use them together, and in doing so, we attain backward compatibility – and future-proof it, too.

Christiansen: Yes, you are really nailing it, Dana. The key is thinking about where the app appropriately needs to live. And you have laws of physics to deal with, you have legacy issues to deal with, and you have cultural issues to deal with. And then you have all sorts of data, what we call data nationalization. That means dealing with GDPR and where is all of this stuff going to live? And then you have edge issues. And this goes on and on, and on, and on.

So getting that right -- or at least having the flexibility to get it right -- is a super important aspect. It’s not the same for every company.

Gardner: We have been addressing containers mostly through an applications discussion. Is there a parallel discussion about data? Can we begin thinking about data as a service, and not necessarily in a large traditional silo database, but perhaps more granular, more as a call, as an API? What is the data lifecycle and DataOps implications of containers?

Everything as a service 

Christiansen: Well, here is what I call the Achilles heel of the container world: it doesn't handle persistent data well at all. One of the things that HPE has been focused on is providing stateful, legacy, highly dependent persistent data stores that live in containers. That is unique intellectual property that we offer, and I think it is really groundbreaking for the industry.

Kubernetes is a stateless container platform, which is appropriate for cloud-native microservices and those fast and agile motions. But the legacy IT world is stateful, with highly persistent data stores. Those workloads don't work well in that stateless environment.
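For context on the stateful side, the Kubernetes-native way a workload requests storage that outlives any one container is a PersistentVolumeClaim; the sketch below, with placeholder names and a placeholder storage class, shows only the request side of that model using the Python client.

```python
# Minimal sketch: a workload asks Kubernetes for persistent storage by creating
# a PersistentVolumeClaim; a storage class (placeholder name here) then binds a
# volume that outlives any individual container.
from kubernetes import client, config

config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="datastore-pvc"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="standard",   # placeholder; depends on the cluster
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc)
```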

Through the work we've been doing over the last several years, specifically with an HPE-acquired company called BlueData, we've been able to solve that legacy problem. We put that platform into the AI, machine learning (ML), and big data areas first to really flesh it all out. We are joining those two systems together and offering a platform that is going to be really useful out in the marketplace.

Gardner: Another aspect of this is the economics. So one of the big pushes from HPE these days is everything as a service, of being able to consume and pay for things as you want regardless of the deployment model -- whether it’s on premises, hybrid, in public clouds, or multi-clouds. How does the container model we have been describing align well with the idea of as a service from an economic perspective?

Christiansen: As-a-service is really how I want to get my services when I need them. And I only want to pay for what I need at the time I need it. I don’t want to overpay for it when I don’t use it. I don’t want to be stuck without something when I do need it.
Solving that problem in various places in the ecosystem is a different equation; it comes up differently. Some clients want to buy stuff; they want to capitalize it and just put it on the books. So we have to deal with that.

You have other people who say, “Hey, I'm willing to take on this hardware burden as a financier, and you can rent it from me.” You can consume all the pieces you need, and then you've got the cloud providers as a service. But more importantly, let's go back to how containers allow you much finer granularity about what it is you're buying. If you want to deploy an app, maybe you are paying for that app to be deployed as opposed to the container. But the containers are the encapsulation of it and where you want to have it.


So you still have to get to what I call the basic currency. The basic currency is a container. Where does that container run? It has to run either on premises, in the public cloud, or on the edge. If people are going to agree on that basic currency model, then we can agree on an economic model.

Gardner: Even if people are risk averse, I don't think they're in trouble making some big bets on containers as their new currency, and attaining capabilities and skills around both containers and Kubernetes. Recognizing that this is not a big leap of faith, what do you advise people to do right now to get ready?

Christiansen: Get your arms around the Kubernetes installations you already have, because you know they’re happening. This is just like when the public cloud was arriving and there was shadow IT going on. You know it’s happening; you know people are writing code, and they’re pushing it into a Kubernetes cluster. They’re not talking to the central IT people about how to manage or run it -- or even if it’s going to be something they can handle. So you’ve got to get a hold of them first.

Teamwork works 

Go find your hand raisers. That’s what I always say. Who are the volunteers? Who has their hands in the air? Openly say, “Hey, come in. I’m forming a containers, Kubernetes, and new development model team.” Give it a name. Call it the Michael Jordan team of containers. I don’t care. But go get them. Go find out who they are, right?

And then form and coalesce that team around that knowledge base. Learn how they think, and what is the best that's going on inside of your own culture. This is about culture, culture, culture, right? And do it in public so people can see it. This is why people got such black eyes when they were doing their first stuff around public cloud: they snuck off and did it, and then they were really reluctant to say anything. Bring it out in the open. Let's start talking about it.

The next thing is looking for instantiations of applications that you are either going to build net new or refactor. Then decide on your container strategy around that Kubernetes platform, and work it as a program. Be open and transparent about what you're doing. Make sure you're funded.

And most importantly, above all things, know where your data lives. If your data lives on-premises and that application you’re talking about is going to need data, you’re going to need to have an on-premises solution for containers, specifically those that handle legacy and public cloud at the same time. If that data decides it needs to go to public cloud, you can always move it there.

Gardner: I’m afraid we’ll have to leave it there. We’ve been exploring containers as a new currency and how that’s a disruptive force as well as an opportunity for improving the way IT operations works. And we’ve learned how IT operators are innovating around more automation and orchestration in order to take full advantage of what containers and Kubernetes offer. We’ve also heard about how intelligence and AIOps can be brought to bear when things start to get scale intensive.

Please join me in thanking our guest, Robert Christiansen, Evangelist in the Office of the Chief Technology Officer at HPE. Thank you, Robert.

Christiansen: Thank you so much, Dana.


Gardner: And a big thank you as well to our audience for joining us for this sponsored BriefingsDirect Voice of the Innovator containers trends and use strategies interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this series of Hewlett Packard Enterprise-supported discussions.

Thanks again for listening. Please pass this along to your community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

A discussion on how IT operators are now looking to increased automation, orchestration, and compatibility benefits to further exploit containers as a mainstay across their next-generation hybrid IT estate. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.
