Wednesday, November 08, 2017

How Cloud Architects Transform the Messy Mix of Hybrid Cloud Factors into a Consistent Force Multiplier

Transcript of a discussion on how IT architecture and new breeds of service providers are helping enterprises manage complex cloud scenarios.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success stories. Stay with us now to learn how agile businesses are fending off disruption -- in favor of innovation.

Our next cloud strategies insights interview focuses on how IT architecture and new breeds of service providers are helping enterprises manage complex cloud scenarios. We’ll now learn how composable infrastructure and auto-scaling help improve client services, operations, and business goals attainment for a New York cloud support provider.

Here to help us learn what's needed to reach the potential of multiple -- and often overlapping -- cloud models is Arthur Reyenger, Cloud Practice Lead and Chief Cloud Architect at International Integrated Solutions (IIS) Ltd. in New York. Welcome, Arthur.

Arthur Reyenger: Thank you so much, I really appreciate being on the show.

Gardner: How are IT architecture and new breeds of service providers coming together? What’s different now from just a few years ago for architecture when we have cloud, multi-cloud, and hybrid cloud services? 

Reyenger: Like the technology trends themselves, everything is accelerating. Before, you would have three-year or even five-year plans developed by the business. They were designed to reach certain business outcomes; the technology was designed to support them, and then it was heads-down to build that rocket ship.

It’s changed now to where it’s a 12-month strategy that needs to be modular enough to be reevaluated at the end of those 12 months, and be re-architected -- almost as if it were made of Lego blocks.

Gardner: More moving parts, less time.

Reyenger: Absolutely.

Gardner: How do you accomplish that? 

Reyenger: You leverage different cloud service providers, different managed services providers, and traditional value-added resellers, like International Integrated Solutions (IIS), in order to meet those business demands. We see a large push around automation, orchestration and auto-scaling. It’s becoming a way to achieve those business initiatives at that higher speed.

Gardner: There is a cloud continuum. You are choosing which workloads and what data should be on-premises, and what should be in a cloud, or multi-clouds. Trying to do this as a regular IT shop -- buying it, specifying, integrating it -- seems like it demands more than the traditional IT skills. How is the culture of IT adjusting? 

Reyenger: Every organization, including ours, has its own business transformation that they have to undergo. We think that we are extremely proactive. I see some companies that are developing in-house skill sets, and trying to add additional departments that would be more cloud-aware in order to meet those demands.

On the other side, you have folks that are leveraging partners like IIS, which has acumen within those spaces to supplement their bench, or they are building out a completely separate organization that will hopefully take them to the new frontier.

Gardner: Tell us about your company. What have you done to transform?

Get the
Updated Book

Reyenger: IIS has spent 26 years building out an amazing book of business with amazing relationships with a lot of enterprise customers. But as times change, you need to be able to add additional practices like our cloud practice and our managed services practice. We have taken the knowledge we have around traditional IT services and then added in our internal developers and delivery consultants. They are very well-versed and aware of the new architecture. So we can marry the two together and help organizations reach that new end-state.

It's very easy for startups to go 100 percent to the cloud and just run with it. It’s different when you have 2,000 existing applications and you want to move to the future as well. It’s nice to have someone who understands both of those worlds -- and the appropriate way to integrate them. 

Gardner: I suppose there is no typical cloud engagement, but what is a common hurdle that organizations are facing as they go from that traditional IT mindset to the more cloud-centric thinking and hybrid deployment models? 

The cloud answer

Reyenger: The concept of auto-scaling or bursting has become very, very prevalent. You see that within different lines of business. Ultimately, they are all asking for essentially the same thing -- and the cloud is a pretty good answer.

At the same time, you really need to understand your business and the triggers. You need to be able to put the necessary intelligence together around those capabilities in order to make it really beneficial and align to the ebbs and flows of your business. So that's been one of the very, very common requests across the board.
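A trigger-driven burst policy like the one Reyenger describes can be sketched in a few lines. This is an illustrative Python sketch only -- the metric names (`cpu_pct`, `queue_depth`) and the thresholds are assumptions, and a real deployment would call a cloud provider's scaling API rather than return a plain number.

```python
# Minimal sketch of trigger-based auto-scaling logic. Metric names and
# thresholds are hypothetical; a real system would wire this decision
# into a provider's scaling API.

def scale_decision(current_nodes, metrics, min_nodes=2, max_nodes=20):
    """Return a target node count for a workload based on business triggers.

    metrics: dict with hypothetical keys like 'cpu_pct' and 'queue_depth'.
    """
    target = current_nodes
    # Scale out when either utilization or backlog crosses its threshold.
    if metrics.get("cpu_pct", 0) > 80 or metrics.get("queue_depth", 0) > 1000:
        target = current_nodes * 2          # burst aggressively
    # Scale in only when both signals are comfortably low.
    elif metrics.get("cpu_pct", 100) < 20 and metrics.get("queue_depth", 0) < 50:
        target = max(current_nodes // 2, min_nodes)
    # Clamp to the allowed range in all cases.
    return max(min_nodes, min(target, max_nodes))
```

The point of the sketch is the shape of the logic -- the "necessary intelligence" Reyenger mentions is exactly the business-specific choice of triggers and thresholds.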

We've built out solutions that include intellectual property from IIS and our developers, as well as cloud management tools built around backup to the cloud -- eliminating tape and modernizing backup for customers. These solutions build out a dedicated object store that customers can own, and that also tiers to the different public cloud providers out there.

And we’ve done this in a repeatable fashion so that our customers get the cloud consumption look and feel, and we’ve leveraged innovative contractual arrangements to allow customers to consume against the scope of work rather than on lease. We’ve been able to marry that with the different standardized offerings out there to give someone the head start that they need in order to achieve their objectives. 

Gardner: You brought up the cloud consumption model. Organizations want the benefit of a public cloud environment and user experience for bursting, auto-scaling, and price efficiency. They might want to have workloads on-premises, to use a managed service, or take advantage of public clouds under certain circumstances.

How are you working with companies like Hewlett Packard Enterprise (HPE), for example, to provide composable auto-scaling capabilities with the look and feel of public cloud on their private cloud?

Reyenger: Now it’s becoming a multi-cloud strategy. It’s one thing to stay on-premises and use a single cloud, but relying on just one cloud carries risk -- and that’s a problem.

We try to standardize everything through a single cloud management stack for our customers. We’re agnostic to a whole slew of toolsets around both orchestration and automation. We want to help them achieve that.

Intelligent platform performance

We looked at some of the very unique things that HPE has done, specifically around their Synergy platform, to allow for cloud management and cloud automation to deliver true composable infrastructure. That has huge value around energizing a company’s goals, strengthening their profitability, boosting productivity, and enhancing innovation. We've been able to extend that into the public cloud. So now we have customers that truly are getting the best of both worlds.

Gardner: How do you define composable infrastructure? 

Reyenger: It’s having true infrastructure that you can deploy as code. You’ll hear a lot of folks say that and what it really means is being able to standardize on a single RESTful API set.

That allows your platform to have intelligence when you look at infrastructure as a service (IaaS), and then delivering things as either platform (PaaS) or software as a service (SaaS) -- from either a DevOps approach, or from the lines of business directly to consumers. So it’s the ability to bridge those two worlds.
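The "infrastructure deployed as code through a single RESTful API set" idea can be made concrete with a small sketch. The endpoint URL, payload fields, and profile schema below are hypothetical stand-ins for illustration, not the actual HPE Synergy/OneView API.

```python
import json

# Illustrative "infrastructure as code" sketch against a single RESTful API.
# All field names and the endpoint are invented for this example.

def compose_server_profile(name, cpu_cores, memory_gb, storage_gb,
                           network="prod-net"):
    """Build a declarative server-profile request body."""
    return {
        "name": name,
        "compute": {"cores": cpu_cores, "memoryGb": memory_gb},
        "storage": [{"sizeGb": storage_gb, "tier": "ssd"}],
        "connections": [{"network": network}],
    }

def deploy(profile, api="https://composer.example.com/rest/profiles"):
    """Serialize the profile; a real client would POST this to the API,
    e.g. requests.post(api, data=body, headers=...). Omitted here."""
    body = json.dumps(profile)
    return body
```

Because the whole request is data, the same profile can be version-controlled, reviewed, and replayed -- which is what lets DevOps and IT operations share one interface.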

Traditionally, you may have underlying infrastructure that doesn't have the intelligence or doesn't have the visibility into the cloud automation. So I may be scaling, but I can't scale into infinity. I really need an underlying infrastructure to be able to mold and adapt in order to meet those needs.

We’re finally reaching the point where we have that visibility and we have that capability, thanks to software-defined data center (SDDC) and a platform to ultimately be able to execute on. 

Gardner: When I think about composable infrastructure, I often wonder, “Who is the composer?” I know who composes the apps, that’s the developer -- but who composes the infrastructure?  

Reyenger: This gets to a lot of the digital transformation that we talked about in seeking different resources, or cultivating your existing resources to gain more of a developer’s view.

But now you have IT operations and DevOps both able to come under a single management console. They are able to communicate effectively and then script on either side in order to compose based on the code requirements. Or they can put guardrails on different segments of their workloads in order to dictate importance or assign guidelines. The developers can ultimately make those requests or modify the environment. 

Gardner: When you get to composable infrastructure in a data center or private cloud, that’s fine. But that’s sort of like 2D Chess. When I think about multi-cloud or hybrid cloud -- it’s more like 3D Chess. So how do I compose infrastructure, and who is the composer, when it comes to deciding where to support a workload in a certain way, and at what cost?

Consult before composing

Reyenger: We offer a series of consulting services around the delivery of managed services and the actual development to take an existing cloud management stack -- whether that is Red Hat CloudForms, vRealize from VMware, or Terraform -- it really doesn't matter.

We are ultimately allowing that to be the single pane of glass, the single console. And then because it’s RESTful API integrations into those public cloud providers, we’re able to provide that transparency from that management interface, which mitigates risk and gives you control.

Then we deploy things like Puppet, Chef, and Ansible within those different virtual private clouds and within those public cloud fabrics. Then, using that cloud management stack, you can have uniformity and you can take that composition and that intelligence and bring it wherever you like -- whether that's based on geography or a particular cloud service provider preference.

There are many different ways to ultimately achieve that end-state. We just want to make sure that that standardization, to your point, doesn’t get lost the second you leave that firewall.
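One way to picture that single-console, provider-agnostic layer is an adapter pattern: a uniform workload spec routed to per-cloud back ends. The provider names and methods here are illustrative only -- real stacks like CloudForms, vRealize, or Terraform do far more.

```python
# Sketch of a provider-agnostic "single pane of glass": one interface,
# per-cloud adapters behind it. Providers and return values are invented
# for illustration.

class CloudAdapter:
    def launch(self, spec):
        raise NotImplementedError

class AWSAdapter(CloudAdapter):
    def launch(self, spec):
        # A real adapter would call the AWS APIs here.
        return f"aws:instance:{spec['name']}"

class AzureAdapter(CloudAdapter):
    def launch(self, spec):
        # A real adapter would call the Azure APIs here.
        return f"azure:vm:{spec['name']}"

class CloudManager:
    """Routes a uniform workload spec to whichever cloud is requested,
    so policy and standardization live in one place."""
    def __init__(self):
        self.adapters = {"aws": AWSAdapter(), "azure": AzureAdapter()}

    def launch(self, provider, spec):
        return self.adapters[provider].launch(spec)
```

The value is that geography or provider preference becomes a routing decision, not a rewrite -- the standardization survives past the firewall.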

Gardner: We are in the early days of composability of infrastructure in a multi-cloud world. But as the complexity and scale increases, it seems likely to me that we are going to need to bring things like machine learning and artificial intelligence (AI) because humans doing this manually will run out of runway.

Projecting into the future, do you see a role for an algorithmic, programmatic approach putting in certain variables, certain thresholds, and contextual learning to then make this composable infrastructure capability part of a machine process? 

Reyenger: The things that companies like HPE -- with its new acquisition, Nimble -- as well as Red Hat and several others in the industry have done to leverage the intelligence from all of their support calls and lifecycle management across applications allow them to provide feedback to the customer.

And in some cases, if you tie it back to an automation engine, it will actually give you the information on how to solve your problem. A lot of the precursors to what you are talking about are already in the works, and everyone is trying to be that data-cloud management company.

It's really early to pick favorites, but you are going to see more standardization -- rather than 50 different, constantly changing RESTful APIs that everyone standardizes on and that force custom integrations. What we will see is more of that single pane of glass leveraged across multiple cloud providers, using a lot of the same automation and orchestration toolsets that we talked about.

Gardner: And HPE has their sights set on this with Project New Hybrid IT Stack? 

Reyenger: 100 percent. 

Gardner: Looking at composable infrastructure, auto-scaling, using things like HPE Synergy, if you’re an enterprise and you do this right, how do you take this up to the C-Suite and say, “Aha, we told you so. Now give us more so we can do more”? In other words, how does this improve business outcomes? 

Fulfilling the promise

Reyenger: Every organization is different. I’ve spent a good chunk of my career being tactically deployed within very large organizations that are trying to achieve certain goals.

For me, I like to go to a customer’s 10-K SEC filing and look at the promises they’ve made to their investors. We want to be able to marry this IT investment back to the short-term goals they are being judged against, as well as to key performance indicators (KPIs) and the overall health of the company.

It means meeting DevOps challenges and timelines, rolling out new greenfield workloads, and taking data that sits within traditional business intelligence (BI) relational databases and giving different departments access to some of that data. They should be able to run big data analytics against that data in real-time.

These are the types of testing methodologies that we like to set up so that we can help a customer actually rationalize what this means today in terms of dollars and cents and what it could mean in terms of that perceived value. 

Gardner: When you do this well, you get agility, and you get to choose your deployment models. It seems to me that there's going to be a concept that arises of minimal viable cloud, or hybrid cloud.
Are we going to see IT costs at an operating level adjusted favorably? Is this something that ultimately will be so optimized -- with higher utilization, leveraging the competitive market for cloud services -- that meaningful decreases will occur in the total operating costs of IT in an organization?

An uphill road to lower IT costs

Reyenger: I definitely think that it’s quite possible. The way that most organizations are set up today, IT operations rolls back into finance. So if you sit underneath the CFO, like most organizations do, and a request gets made by marketing or sales or another line of business -- it has to go up the chain, get translated, and then come back down.

A lot of times it's difficult to push a rock up a hill. You don’t have all the visibility unless you can get back up to finance or back over to that line of business. If you are able to break down those silos, then I believe that your statement is 100 percent true.

But changing all of those internal controls for a lot of these organizations is very difficult, which is why some are deploying net-new teams to be ultimately the future of their internal IT service provider operations.

Gardner: Arthur, I have been in this business long enough to know that every time we’ve gotten into the point where we think we are going to meaningfully decrease IT costs, some other new paradigm of IT comes up that requires a whole new round of investment. But it seems to me that this could be different this time, that we actually are getting to a standardized approach for supporting workloads and that traditional economics that impact any procurement service will become in effect here, too.

Mining to minimize risk

Reyenger: Absolutely. One of our big pushes has been around object storage. This still allows for traditional file- and block-level support. We are trying to help customers achieve that new economic view -- of which cloud approach ultimately provides them that best price point, but still gives them low risk, visibility, and control over their data.

I will give you an example. There is a very large financial exchange that had a lot of intellectual property (IP) data that they traditionally mined internally, and then they provided it back to different, smaller financial institutions as a service, as financial reports. A few years back, they came to us and said, “I really want to leverage the agility of Amazon Web Services (AWS) in terms of being able to spin up a huge Hadoop farm and mine this data very, very quickly -- and leverage that without having to increase my overall cost. But I don’t feel comfortable providing that data into S3 within AWS, where now they have two extra copies of my data as part of the service level agreement. So what do I do?”

And we ultimately stood up the same object storage service next to AWS, so they wouldn’t have to pay any data egress fees, and they could mine everything right there, leveraging AWS Redshift or Hadoop-as-a-service.

Then once these artifacts, or reports, were created, they no longer contained the IP. The reports came from the IP, but they are all roll-ups and comparisons, and so no longer sensitive to the company. We went ahead and put those into S3 and allowed Amazon to manage all of their customers’ identity and access management for access to them -- and all of that minimized risk for the exchange. We are able to prevent anyone outside of the organization from getting behind the firewall at their data. They don’t have to worry about the SLAs associated with keeping this stuff up and available, and it became a really nice hybrid story.

These are the types of projects that we really like to work on with customers, to be able to help them gain all the benefits associated with cloud – without taking on any of the additional risk, or the negatives, associated with jumping into cloud with both feet. 
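The routing logic in that story -- raw IP stays in the private object store, derived reports go to public S3 -- can be sketched as a simple classifier. The `sensitive` flag and key names below are hypothetical; a real pipeline would call the respective storage APIs (e.g., boto3 for the S3 side).

```python
# Sketch of the hybrid routing described above. Field names are invented;
# note the deliberate fail-safe default: anything unlabeled stays private.

def route_artifacts(artifacts):
    """Split artifacts between a private object store and public S3."""
    private_store, public_s3 = [], []
    for art in artifacts:
        if art.get("sensitive", True):       # default to keeping data private
            private_store.append(art["key"])
        else:
            public_s3.append(art["key"])     # e.g. an S3 put_object in practice
    return {"private": private_store, "s3": public_s3}
```

The interesting design choice is the default: when nobody has classified an artifact, it is treated as sensitive, which mirrors the exchange's posture of never letting raw IP leave its own store.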

Gardner: You heard your customers, you saw a niche opportunity for object storage as a service, and you put that together. I assume that you want a composable infrastructure to do that. So is HPE Synergy a future foundation for this?

Reyenger: HPE Synergy doesn’t really have the disk density to get to the public cloud price point, but it does support object storage natively. So it's great from a DevOps standpoint for object storage. We definitely think that as time progresses and HPE continues down the Synergy roadmap, that cloud role will eventually fix itself.

A lot of the cloud role is centered on hyper-converged infrastructure. And in that model, I don’t see compute and storage growing at the same rates -- I see storage growing considerably faster than the need for compute. So this is a way for us to supplement a Synergy deployment, or to help our customers get the true ROI/TCO they are looking for out of hyper-converged infrastructure.

Gardner: So maybe the question I should ask is what storage providers are you using in order to make this economically viable?

Reyenger: We are absolutely using the HPE Apollo storage line, with the different flavors of solid-state disks (SSDs) down to SATA physical drives. And we are leveraging best-in-breed object storage software from Red Hat. We also have an OpenStack flavor as well.

We leverage things like automation and orchestration technologies, and our ServiceNow capabilities -- all married with our own IP -- to give customers the choice of buying this and deploying it, having us layer services on top, or consuming a fully managed service on-premises. We offer a per-GB price and the same SLAs as those public cloud providers. So all of it is coming together to allow customers to have the true choice and flexibility that everyone claimed you could have years ago.

Gardner: I’m afraid we will have to leave it there. We have been exploring how IT architecture and new breeds of service providers are helping enterprises better manage their complex cloud requirements. And we learned how a composable infrastructure and auto-scaling capability have helped a New York cloud services company put together innovative object storage as a service offerings.

So please join me in thanking our guest, Arthur Reyenger, Cloud Practice Lead and Chief Cloud Architect at International Integrated Solutions (IIS) in New York. Thank you, Arthur. 

Reyenger: Thank you, Dana. I really appreciate being part of your program.

Gardner: And thanks to our audience as well for joining this BriefingsDirect Voice of the Customer digital transformation success story. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews.

Thanks again for listening. Please pass this content along to your IT community, and do come back next time.


Transcript of a discussion on how IT architecture and new breeds of service providers are helping enterprises manage complex cloud scenarios. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.


Monday, November 06, 2017

As Enterprises Face Mounting Hybrid IT Complexity, New Management Solutions Beckon

Transcript of a discussion on how new machine learning and artificial intelligence capabilities are solving hybrid IT complexity challenges.
 
Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.
Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Analyst podcast series.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator. Join us as we hear from leading IT industry analysts and consultants on how to best make the hybrid IT journey to successful digital business transformation.
Our next interview examines how new machine learning and artificial intelligence (AI) capabilities are being applied to hybrid IT complexity challenges. We'll explore how mounting complexity and a lack of multi-cloud services management maturity must be solved in order for businesses to grow and thrive as digital enterprises.

Here to report on how companies and IT leaders are seeking new means to manage an increasingly complex transition to sustainable hybrid IT is Paul Teich, Principal Analyst at TIRIAS Research in Austin, Texas. Welcome, Paul.

Paul Teich: Hi, how are you, Dana?

Gardner: I’m great. You and I have appeared on a number of panels and videos over the years, but it’s nice to have you on my BriefingsDirect podcast. I have been looking forward to this.

Teich: Same here, thanks.

Gardner: Paul, there’s a lot of evidence that businesses are adopting cloud models at a rapid pace. There is also lingering concern about the complexity of managing so many fast-moving parts. We have legacy IT, private cloud, public cloud, software as a service (SaaS) and, of course, multi-cloud. So as someone who tracks technology and its consumption, how much has technology itself been tapped to manage this sprawl, if you will, across hybrid IT?

Teich: So far, not very much, mostly because of the early state of multi-cloud and the hybrid cloud business model. As you know, it takes a while for management technology to catch up with the actual compute and storage technology. So I think we are seeing that management is the tail of the dog -- it’s getting wagged by the rest of it, and it just hasn’t caught up yet.

Gardner: Things have been moving so quickly with cloud computing that few organizations have had an opportunity to step back and examine what’s actually going on around them -- never mind properly react to it. We really are playing catch up.

Cloud catch-up


Teich: As we look at the options available, the cloud giants -- the public cloud services -- don’t have much incentive to work together. So you are looking at a market where there will be third parties stepping in to help manage multi-cloud environments, and there’s a lag time between having those services available and having the cloud services available and then seeing the third-party management solution step in.

Gardner: It’s natural that a specific cloud environment, whether it’s purely public like AWS or hybrid like Microsoft Azure and Azure Stack, wants to help its customers -- but first and foremost it wants to get them onto its own solutions. It’s a natural thing. We have seen this before in technology.

There are not that many organizations willing to step into a neutral position -- to be ecumenical, to say they want to help the customer first and manage it all from the start.

As we look to how this might unfold, it seems to me that the previous models of IT management -- agent-based, single-pane-of-glass, and unfortunately still in some cases spreadsheets and Post-It notes -- have been brought to bear on this. But we might be in a different ball game with hybrid IT, Paul: there are just too many moving parts and too much complexity, and we might need to look at data-driven approaches. What is your take on that?
 
Learn More About
Solutions From HPE


Teich: I think that’s exactly correct. One of the jokes in the industry right now is if you want to find your stranded instances in the cloud, cancel your credit card and AWS or Microsoft will be happy to notify you of all of the instances that you are no longer paying for because your credit card expired. It’s hard to keep track of this, because we don’t have adequate tools yet.

That single pane of glass, looking at a lot of data and information, is soon overloaded. When you are an IT manager at a mid-sized or large corporation, you have a lot of folks paying out-of-pocket right now -- slapping a credit card down on public cloud services -- so you don’t have a full picture. And where you do have a picture, there are so many moving parts.

I think we have to get past having a screen full of data, a screen full of information, and to a point where we have insight. And that is going to require a new generation of tools, probably borrowing from some of the machine learning evolution that’s happening now in pattern analytics.
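As a toy illustration of moving from "a screen full of data" to insight, here is a minimal anomaly detector that flags metric samples deviating sharply from recent history. The window and threshold are arbitrary assumptions; the pattern-analytics tooling Teich describes would use far richer models than a rolling z-score.

```python
import statistics

# Toy anomaly detector: flag samples far outside the recent baseline.
# Window size and threshold are arbitrary illustration values.

def find_anomalies(samples, window=5, threshold=3.0):
    """Return indices of samples more than `threshold` standard deviations
    from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
        if abs(samples[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies
```

Even this crude version shows the shift in posture: instead of a human scanning dashboards, the stream is reduced to a short list of moments worth a look.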

Gardner: The timing in some respects couldn’t be better, right? Just as we are facing this massive problem of complexity of volume and velocity in managing IT across a hybrid environment, we have some of the most powerful and cost-effective means to deal with big data problems just like that.

Life in the infrastructure


Paul, before we go further let’s hear about you and your organization, and tell us, if you would, what a typical day is like in the life of Paul Teich?

Teich: At TIRIAS Research we are boutique industry analysts. By boutique we mean there are three of us -- three principal analysts; we have just added a few senior analysts. We are close to the metal. We live in the infrastructure. We are all former engineers and/or product managers. We are very familiar with deep technology.

My day tends to be first, a lot of reading. We look at a lot of chips, we look at a lot of service-level information, and our job is to, at a very fundamental level, take very complex products and technologies and surface them to business decision-makers, IT decision-makers, folks who are trying to run lines of business (LOB) and make a profit. So we do the heavy lifting on why new technology is important, disruptive, and transformative.

Gardner: Thanks. Let’s go back to this idea of data-driven and analytical values as applied to hybrid IT management and complexity. If we can apply AI and machine learning to solve business problems outside of IT -- in such verticals as retail, pharmaceutical, transportation -- with the same characteristics of data volume, velocity, and variety, why not apply that to IT? Is this a case of the cobbler’s kids having no shoes? You would think that IT would be among the first to do this.

Dig deep, gain insight


Teich: The cloud giants have already implemented systems like this because of necessity. So they have been at the front-end of that big data mantra of volume, velocity -- and all of that.

To successfully train for the new pattern recognition analytics, especially the deep learning stuff, you need a lot of data. You can’t actually train a system usefully without presenting it with a lot of use cases.

The public clouds have this data. They are operating social media services, large retail storefronts, and e-tail, for example. As the public clouds became available to enterprises, the IT management problem ballooned into a big data problem. I don’t think it was a big data problem five or 10 years ago, but it is now.

That’s a big transformation. We haven’t actually internalized what it means operationally when your internal IT department no longer runs all of your IT jobs.

That’s the biggest sea change -- we are generating big data in the course of managing our IT infrastructure now, and that means we need big data tools to go analyze it, and to get that relevant insight. It’s too much data flowing by for humans to comprehend in real time.

Gardner: And, of course, we are also talking about islands of such operational data. You might have a lot of data in your legacy operations. You might have tier 1 apps that you are running on older infrastructure, and you are probably happy to do that. It might be very difficult to transition those specific apps into newer operating environments.

You also have multiple SaaS and cloud data repositories and logs. There’s also not only the data within those apps, but there’s the metadata as to how those apps are running in clusters and what they are doing as a whole. It seems to me that not only would you benefit from having a comprehensive data and analytics approach for your IT operations, but you might also have a workflow and process business benefit by being an uber analyst, by being on top of all of these islands of operational data. 
 
To me, moving toward a comprehensive intelligence and data analysis capability for IT is the gift that keeps giving. You would then be able to also provide insight for an uber approach to processes across your entire organization -- across the supply chains, across partner networks, and back to your customers. Paul, do you also see that there’s an ancillary business benefit to having that data analysis capability, and not ceding it to your cloud providers?

Manage data, improve workflow


Teich: I do. At one end of the spectrum it’s simply what do you need to do to keep the lights on, where is your data, all of it, in the various islands and collections and the data you are sharing with your supply chain as well. Where is the processing that you can apply to that data? Increasingly, I think, we are looking at a world in which the location of the stored data is more important than the processing power.

We have processing power pretty much everywhere now. What’s key is moving data from place to place and setting up the connections to acquire it. It means that the management of all the data you have needs to segue into visible workflows.

Once I know what I have, and I am managing it at a baseline effectively, then I can start to improve my processes. Then I can start to get better workflows, internally as well as across my supply chain. But I think at first it’s simply, “What do I have going on right now?”

As an IT manager, how can I rein in some of these credit-card cloud instances, and credit-card storage purchases on the public clouds, and put that all into the right mix? I have to know what I know first -- then I can start to streamline. Then I can start to control my costs. Does that make sense?
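That first "know what I have" step can be sketched in code. This is a minimal, hypothetical example -- the teams, providers, and dollar amounts are invented, and in practice the records would come from expense reports or provider billing exports rather than a hard-coded list:

```python
from collections import defaultdict

# Hypothetical records of self-service ("credit card") cloud purchases,
# gathered from expense reports or provider billing exports.
purchases = [
    {"team": "marketing", "provider": "aws", "monthly_usd": 420.0},
    {"team": "marketing", "provider": "azure", "monthly_usd": 130.0},
    {"team": "research", "provider": "aws", "monthly_usd": 1150.0},
    {"team": "research", "provider": "gcp", "monthly_usd": 75.0},
]

def spend_by_team(records):
    """Roll up self-service cloud spend so IT can see what it has first."""
    totals = defaultdict(float)
    for r in records:
        totals[r["team"]] += r["monthly_usd"]
    return dict(totals)

print(spend_by_team(purchases))
# {'marketing': 550.0, 'research': 1225.0}
```

Only once that roll-up exists can the streamlining and cost-control steps Teich describes begin.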

Gardner: Yes, absolutely. And how can you know which people you want to give even more credit to on their credit cards -- and let them do more of what they are doing? It might be very innovative, and it might be very cost-effective. There might also be those wasting money, spinning their wheels, and repaving cow paths over and over again.

You can't make those decisions with insight if you don't have the visibility, and then further analyze how best to go about it -- so to me, gaining that capability is a no-brainer.

It also comes at an auspicious time as IT is trying to re-factor its value to the organization. If in fact they are no longer running servers and networks and keeping the trains running on time, they have to start being more in the business of defining what trains should be running and then how to make them the best business engines, if you will.

If IT departments need to rethink their role and step up their game, then they need to use technologies like advanced hybrid IT management from vendors with a neutral perspective. Then they become the overseers of operations at a fundamentally different level.

Data revelation, not revolution


Teich: I think that’s right. It’s evolutionary stuff. I don’t think it’s revolutionary. I think that in the same way you add servers to a virtual machine farm, as your demand increases, as your baseline demand increases, IT needs to keep a handle on costs -- so you can understand which jobs are running where and how much more capacity you need.

One of the things they are missing with random access to the cloud is bulk purchasing. And so at a very fundamental level, IT can help the organization manage what it spends across clouds by aggregating the purchase of storage and the purchase of compute instances to get better buying power, and by doing price arbitrage when it can. To me, those are fundamental capabilities of IT going forward in a multi-cloud environment.

They are extensions of where we are today; it just doesn't seem like it yet. IT has always added new servers to increase internal capacity, and this is just the next evolutionary step.
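The price-arbitrage idea can be illustrated with a toy sketch. The provider names, per-hour rates, discount threshold, and discount size below are all hypothetical stand-ins, not real pricing -- actual rates vary by region, instance shape, and commitment term:

```python
# Hypothetical per-hour compute prices -- illustrative only.
PRICE_PER_HOUR = {
    "provider_a": 0.096,
    "provider_b": 0.088,
    "provider_c": 0.104,
}

def cheapest_provider(prices, hours, discount_threshold_hours=10_000,
                      bulk_discount=0.15):
    """Pick the lowest-cost provider for a compute job, applying a
    notional bulk discount once aggregated demand crosses a volume
    threshold -- the 'better buying power' Teich describes."""
    def cost(rate):
        total = rate * hours
        if hours >= discount_threshold_hours:
            total *= (1 - bulk_discount)
        return total
    return min(prices, key=lambda p: cost(prices[p]))

print(cheapest_provider(PRICE_PER_HOUR, hours=20_000))  # provider_b
```

The real decision has many more variables (egress fees, data locality, reserved-capacity commitments), but the shape of it -- aggregate demand, then arbitrage across price lists -- is the same.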

Gardner: It certainly makes sense that, as maturity occurs in any business function, you would move toward orchestration, automation, and optimization -- rather than simply getting the parts in place. What you are describing is that IT is becoming more like a procurement function and less like a building, architecture, or construction function, which is just as powerful.

Not many people can make those hybrid IT procurement decisions without knowing a lot about the technology. Someone with just business acumen can't walk in and make these decisions. I think this is an opportunity for IT to elevate itself and become even more essential to the business.

Teich: The opportunity is a lot like the Sabre airline scheduling system that nearly every airline uses now. That's a fundamental capability for doing business, and it's separate from the technology of Sabre. It's the ability to schedule -- people and airplanes -- and it's a lot like scheduling storage and jobs on compute instances. So I think there will be this step.

But to go back to the technology versus procurement, I think some element of that has always existed in IT in terms of dealing with vendors and doing the volume purchases on one side, but also having some architect know how to compose the hardware and the software infrastructure to serve those applications.

Connect the clouds

We're simply translating that now into a multi-cloud architecture. How do I connect those pieces? What network capacity do I need to buy? What kind of storage architectures do I need? I don't think that all goes away. It becomes far more important as you look at, for example, AWS as a very large bag of services. It's very powerful; you can assemble it in any way you want. But in some respects, that's like programming in C: you have all the power of assembly language, and all the danger of assembly language, because you can wander into memory and delete stuff. So you have to have architects who know how to build a service that's robust, that won't go down, and that serves your application most efficiently -- and all of those things are still hard to do.

So, architecture and purchasing are both still necessary. They don’t go away. I think the important part is that the orchestration part now becomes as important as deploying a service on the side of infrastructure because you’ve got multiple sets of infrastructure.
 

Gardner: For hybrid IT, it really has to be an enlightened procurement, not just blind procurement. And the people in the trenches who are just buying these services -- whether developers or operations folks -- don't have that oversight, that view of the big picture, needed to make those larger decisions about optimization of purchasing and business processes.

That gets us back to some of our earlier points of, what are the tools, what are the management insights that these individuals need in order to make those decisions? Like with Sabre, where they are optimizing to fill every hotel room or every airplane seat, we’re going to want in hybrid IT to fill every socket, right? We’re going to want all that bare metal and all those virtualization instances to be fully optimized -- whether it’s your cloud or somebody else’s.

It seems to me that there is an algorithmic approach eventually, right? Somebody is going to need to be the keeper of that algorithm as to how this all operates -- but you can’t program that algorithm if you don’t have the uber insights into what’s going on, and what works and what doesn’t.
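The "fill every socket" idea is, at bottom, a packing problem. As a toy illustration of the algorithmic approach being described -- a first-fit heuristic with invented workload sizes, not any vendor's actual scheduler:

```python
def first_fit(workloads, capacity):
    """First-fit bin packing: place each workload (CPU cores needed)
    onto the first host with room, opening a new host when none fits.
    A toy stand-in for 'fill every socket' placement logic."""
    hosts = []  # each host is a list of workload sizes, in cores
    for w in workloads:
        for h in hosts:
            if sum(h) + w <= capacity:
                h.append(w)
                break
        else:
            hosts.append([w])  # no existing host had room
    return hosts

# Five workloads packed onto 10-core hosts.
print(first_fit([8, 4, 2, 6, 4], capacity=10))
# [[8, 2], [4, 6], [4]]
```

Real placement engines weigh memory, network locality, licensing, and price alongside cores, which is exactly why the insight layer Gardner describes has to feed the algorithm.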

What’s the next step, Paul, in terms of the technology catching up to the management requirements in this new hybrid IT complex environment?

Teich: People can develop some of that experience on a small scale, but there are so many dimensions to managing a multi-cloud, hybrid IT infrastructure business model. It’s throwing off all of this metadata for performance and efficiency. It’s ripe for machine learning.

In a strong sense, we’re moving so fast right now that if you are an organization of any size, machine learning has to come into play to help you get better economies of scale. It’s just going to be looking at a bigger picture, it’s going to be managing more variables, and learning across a lot more data points than a human can possibly comprehend.
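As a simple illustration of the kind of pattern-spotting Teich is describing, here is a statistical baseline -- not deep learning, and the latency numbers are invented -- that flags outliers in an operational metric stream. Machine learning scales this same idea across far more variables than a human can watch:

```python
import statistics

def flag_anomalies(samples, threshold=3.0):
    """Flag metric samples more than `threshold` standard deviations
    from the mean -- a simple statistical baseline for spotting the
    odd data point in a stream of operational telemetry."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []
    return [x for x in samples if abs(x - mean) / stdev > threshold]

# Invented latency samples with one runaway measurement.
latency_ms = [21, 22, 20, 23, 21, 22, 20, 480]
print(flag_anomalies(latency_ms, threshold=2.0))
# [480]
```

A trained model replaces the fixed threshold with learned behavior per metric, per workload, per time of day -- the "bigger picture" Teich refers to.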

We are at this really interesting point in the industry where we are getting deep-learning approaches that are coming online cost effectively; they can help us do that. They have a little while to go before they are fully mature. But IT organizations that learn to take advantage of these systems now are going to have a head start, and they are going to be more efficient than their competitors.

Gardner: At the end of the day, if you’re all using similar cloud services then that differentiation between your company and your competitor is in how well you utilize and optimize those services. If the baseline technologies are becoming commoditized, then optimization -- that algorithm-like approach to smartly moving workloads and data, and providing consumption models that are efficiency-driven -- that’s going to be the difference between a 1 percent margin and a 5 percent margin over time.

The deep-learning difference

Teich: The important part to remember is that these machine-learning algorithms are somewhat new, so there are several challenges with deploying them. First is the transparency issue. We don't quite yet know how a deep-learning model makes specific decisions. We can't point to one aspect and say that aspect is managing the quality of our AWS services, for example. It's a black-box model.

We can’t yet verify the results of these models. We know they are being efficient and fast but we can’t verify that the model is as efficient as it could possibly be. There is room for improvement over the next few years. As the models get better, they’ll leave less money on the table.

We're also still working out how to validate that a machine-learning model covers all the situations you want it to cover. You need an audit trail for specific sets of decisions, especially with data that is subject to regulatory constraints. You need to know why you made decisions.
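The audit-trail requirement is straightforward to sketch. Everything below is hypothetical -- the model version, workload name, and placement decision are invented -- but the shape is the point: every automated decision gets a timestamped record of its inputs and its rationale:

```python
import json
import time

def log_decision(log, model_version, inputs, decision, reason):
    """Append an audit record for an automated placement decision,
    so regulated workloads carry a trail of why a choice was made."""
    log.append({
        "ts": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
        "reason": reason,
    })

audit_log = []
log_decision(audit_log, "v1.3",
             {"workload": "payroll", "region_constraint": "eu"},
             "private-cloud-eu",
             "data residency rule matched")
print(json.dumps(audit_log[-1], indent=2, default=str))
```

Capturing the model version alongside each decision also supports the retraining point that follows: when the model changes, you can tell which decisions the old model made and which the new one did.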

So the net is, once you are training a machine-learning model, you have to keep retraining it over time. Your model is not going to do the same thing as your competitor's model. There is a lot of room for differentiation, a lot of room for learning. You just have to go into it with your eyes open that, yeah, occasionally things will go sideways. Your model might do something unexpected, and you just have to be prepared for that. We're still in the early days of machine learning.

Gardner: You raise an interesting point, Paul, because even as the baseline technology services in the multi-cloud era become commoditized, you’re going to have specific, unique, and custom approaches to your own business’ management.

Your hybrid IT optimization is not going to be like that of any other company. I think getting that machine-learning capability attuned to your specific hybrid IT panoply of resources and assets is going to be a gift that keeps giving. Not only will you run your IT better, you will run your business better. You’ll be fleet and agile.

If some risk arises -- whether it's a cyber security risk, a natural disaster risk, or a business risk of unintended or unexpected changes in your supply chain or in your business environment -- you're going to be in a better position to react. You're going to have your ear to the ground, you're going to be well tuned to your specific global infrastructure, and you'll be able to make good choices. So I am with you. I think machine learning is essential, and the sooner you get involved with it, the better.

Before we sign off, who are the vendors and some of the technologies that we will look to in order to fill this apparent vacuum on advanced hybrid IT management? It seems to me that traditional IT management vendors would be a likely place to start.

Who’s in?


Teich: They are a likely place to start. All of them are starting to say something about being in a multi-cloud environment, about being in a multi-cloud-vendor environment. They are already finding themselves there with virtualization, and the key is they have recognized that they are in a multi-vendor world.

There are some start-ups, and I can't name them specifically right now. But a lot of folks are working on this problem of how to manage hybrid IT -- in-house IT plus multi-cloud orchestration -- and there is a lot of work going on there. We haven't seen a lot of it publicly yet, but there is a lot of venture capital being placed.

I think this is the next step, just like when PCs came into the office, and then smartphones. As we move from server farms to clouds, and from cloud to multi-cloud, it's attracting a lot of attention. The hard part right now is nailing down whom to place your faith in. The name brands that people are buying their internal IT from right now are probably good near-term bets. As the industry gets more mature, we'll have to see what happens.
 

Gardner: We did hear a vision described on this from Hewlett Packard Enterprise (HPE) back in June at their Discover event in Las Vegas. I’m expecting to hear quite a bit more on something they’ve been calling New Hybrid IT Stack that seems to possess some of the characteristics we’ve been describing, such as broad visibility and management.

So at least one of the long-term IT management vendors is looking in this direction. That’s a place I’m going to be focusing on, wondering what the competitive landscape is going to be, and if HPE is going to be in the leadership position on hybrid IT management.

Teich: Actually, I think HPE is the only company I’ve heard from so far talking at that level. Everybody is voicing some opinion about it, but from what I’ve heard, it does sound like a very interesting approach to the problem.

Microsoft actually constrained their view of Azure Stack to a very small set of problems, and is actively saying "no" to uses beyond them. If you're looking at doing virtual machine migration and taking advantage of multi-cloud for general-purpose solutions, Azure Stack is probably not something that you want to use for that yet. It was very interesting for me, then, to hear about the HPE Project New Hybrid IT Stack and what HPE is planning to do there.

Gardner: For Microsoft, the more automated and constrained they can make it, the more likely you’d be susceptible or tempted to want to just stay within an Azure and/or Azure Stack environment. So I can appreciate why they would do that.

Before we sign off, one other area I'm going to be keeping my eyes on is orchestration of containers -- Kubernetes, in particular. If you follow orchestration of containers and container usage in multi-cloud environments, that's going to be a harbinger of how the larger hybrid IT management demands play out as well. So it's a canary in the coal mine, if you will, as to where things could get very interesting very quickly.

The place to be

Teich: Absolutely. And I point out that the Linux Foundation's CloudNativeCon in early December 2017 looks like the place to be -- with nearly everyone in the server and cloud infrastructure communities signing on. Part of the interest is in basically interchangeable container services. We'll see that become much more important. So that sleepy little technical show is going to be invaded by "suits" this year, and we're paying a lot of attention to it.

Gardner: Yes, I agree. I’m afraid we’ll have to leave it there. We’ve been exploring how mounting complexity and a lack of multi-cloud services management maturity must be solved in order for businesses to grow and thrive as digital enterprises. And we’ve learned how companies and IT leaders are seeking new means to manage an increasingly complex transition to sustainable hybrid IT.

We’ve also talked about how artificial intelligence, and specifically, machine learning, will be an important element to solve some of these issues. And we’ve talked about some of the early days of the larger vendors coming to the market with solutions.

Please join me in thanking our guest, Paul Teich, Principal Analyst at TIRIAS Research in Austin, Texas. Thank you so much, Paul.

Teich: Thanks, Dana. I very much appreciate it.

Gardner: Paul, how can our listeners and readers best follow you to gain more of your excellent insights?

Teich: You can follow us at www.tiriasresearch.com, and also we have a page on Forbes Tech, and you can find us there.

Gardner: A big thank you to our audience as well for joining this BriefingsDirect Voice of the Analyst discussion on how to best manage the hybrid IT journey to digital business transformation.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews. Follow me on Twitter at @Dana_Gardner, and find more hybrid IT-focused podcasts at briefingsdirect.com. Thanks again for joining, please pass this on to your IT community if you found it valuable, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.
Transcript of a discussion on how new machine learning and artificial intelligence capabilities are solving hybrid IT complexity challenges. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.

You may also be interested in: