
Thursday, May 10, 2018

Panel Explores New Ways to Solve the Complexity of Hybrid Cloud Monitoring

Transcript of a discussion exploring a recent survey of more than 500 enterprise cloud professionals and their concerns, requirements, and demands for improving the monitoring, management, and cost control of hybrid and multi-cloud deployments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Micro Focus.

Dana Gardner: Hello and welcome to a panel discussion on improving performance and cost monitoring of various IT workloads in a multi-cloud world.
I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your moderator as we learn how multi-cloud adoption is forcing cloud monitoring and cost management to work in new ways for enterprises.
Our panel of Micro Focus experts and I will explore new Dimensional Research survey findings gleaned from more than 500 enterprise cloud specifiers. We will share their concerns, requirements and demands for improving the monitoring, management and cost control over hybrid and multi-cloud deployments.
We will also learn about new solutions and hear about examples of how automation leverages machine learning (ML) and rapidly improves cloud management at a large Barcelona bank.

Traditional monitoring falls short

Let’s begin with setting the stage for how cloud computing complexity is rapidly advancing to include multi-cloud computing -- and how traditional monitoring and management approaches are falling short in this new hybrid IT environment.
Enterprise IT leaders tasked with the management of apps, data, and business processes amid this new level of complexity are primarily grounded in the IT management and monitoring models from their on-premises data centers.
They are used to being able to gain agent-based data sets and generate analysis on their own, using their own IT assets that they control, that they own, and that they can impose their will over.
Yet virtually overnight, a majority of companies share infrastructure for their workloads across public clouds and on-premises systems. The ability to manage these disparate environments is often all or nothing.
In many ways, the ability to manage in a hybrid fashion has been overtaken by the actual hybrid deployment models. The cart is in front of the horse. IT managers do not own the performance data generated from their cloud infrastructure. Their management agents can’t go there. They have insights from their own systems, but far less from their clouds, and they can’t join these. They therefore have hybrid computing -- but without commensurate hybrid management and monitoring.

They can’t assure security or compliance and they cannot determine true and comparative costs -- never mind gain optimization for efficiency across the cloud computing spectrum.

Old management into the cloud

But there's more to fixing the equation of multi-cloud complexity than extending yesterday's management means into the cloud. IT executives today recognize that IT operations decisions and adjustments must be handled in a much different way.
Even with the best data assets, access, and analysis, manual methods will not do for making the right performance adjustments and adequately reacting to security and compliance needs.
Automation, in synergy with big data analytics, is absolutely the key to effective and ongoing multi-cloud management and optimization.
Fortunately, just as the need for automation across hybrid IT management has become critical, the means to provide ML-enabled analysis and remediation have matured -- and at compelling prices.
Great strides have been made in big data analysis of such vast data sets as IT infrastructure logs from a variety of sources, including from across the hybrid IT continuum.
Many analysts, in addition to myself, are now envisioning how automated bots leveraging IT systems and cloud performance data can begin to deliver more value to IT operations, management, and optimization. Whether you call it BotOps, or AIOps, the idea is the same: The rapid concurrent use of multiple data sources, data collection methods and real-time top-line analytic technologies to make IT operations work the best at the least cost.
IT leaders are seeking the next generation of monitoring, management and optimizing solutions. We are now on the cusp of being able to take advantage of advanced ML to tackle the complexity of multi-cloud deployments and to keep business services safe, performant, and highly cost efficient.
Similar in concept to self-driving cars, wouldn’t you rather have self-driving IT operations? So far, a majority of you surveyed say yes; and we are going to soon learn more about that survey information.
First, please join me in welcoming our panel to learn more about these interesting enterprise trend results. We are here with Harald Burose, Director of Product Management at Micro Focus, and he is based in Stuttgart. We are also joined by Ian Bromehead, Director of Product Marketing at Micro Focus, and he is based in Grenoble, France. Lastly, Gary Brandt, Product Manager at Micro Focus and based in Sacramento, joins us as well.
Ian, please tell us more about the survey findings.

IT leaders respond to their needs 

Ian Bromehead: Thanks, Dana. The first element of the survey that we wanted to share describes the extent to which cloud is so prevalent today.
More than 92 percent of the 500 or so executives are indicating that we are already in a world of significant multi-cloud adoption.
The lion’s share, or nearly two-thirds, of this population that we surveyed are using between two to five different cloud vendors. But more than 12 percent of respondents are using more than 10 vendors. So, the world is becoming increasingly complex. Of course, this strains a lot of the different aspects [of management].
What are people doing with those multiple cloud instances? As might be expected, people are using them to extend their IT landscape, interconnecting application logic and their own corporate data sources with the infrastructure and the apps in their cloud-based deployments -- whether they're Infrastructure as a Service (IaaS) or Platform as a Service (PaaS). Some 88 percent of the respondents are indeed connecting their corporate logic and data sources to those cloud instances.
What’s more interesting is that a good two-thirds of the respondents are sharing data and integrating that logic across heterogeneous cloud instances, which may or may not be a surprise to you. It’s nevertheless a facet of many people’s architectures today. It’s a result of the need for agility and cost reduction, but it’s obviously creating a pretty high degree of complexity as people share data across multiple cloud instances.
The next aspect that we saw in the survey is that 96 percent of the respondents indicate that these public cloud application issues are resolved too slowly, and they are impacting the business in many cases.
The business impacts range from resources tied up collaborating with the cloud vendor to try to solve these issues, to the extra time required to resolve issues impacting service level agreements (SLAs) and contractual agreements, to prolonged downtime.
What we regularly see is that the adoption of cloud often translates into a loss of transparency around what's deployed, the health of what's deployed, and how that can impact the business. This insight strongly shapes our investment and some of the solutions we will talk to you about. The respondents' primary concern is visibility of what's being deployed -- and what depends on internal, on-premises systems as well as on private and public cloud instances.
People need to see what is impacting the delivery of services as a provider, and whether that's due to issues with local or remote resources, or the connectivity between them. This is compounded by the fact that people are interconnecting services, as we just saw in the survey, from multiple cloud providers. So the weak part could be anywhere -- it could be any one of those links. The ability for people to know where those issues are is not happening fast enough, with some 96 percent indicating that the issues are being resolved too slowly.

How to gain better visibility?

What are the key changes that need to be addressed when monitoring hybrid IT environments? People have challenges with discovery, understanding, and visualizing what has actually been deployed, and how it is impacting the end-to-end business.
They have limited access to the cloud infrastructure, and they face things like inadequate security monitoring, difficulties with traditional monitoring agents, and a lack of real-time metrics needed to properly understand what's happening.
It shows some of the real challenges that people are facing. And as the world shifts to being more dependent on the services that they consume, traditional methods are not going to be properly adapted to the new environment. Newer solutions are needed. New ways of gaining visibility -- and of measuring availability and performance -- are going to be needed.
I think what's interesting in this part of the survey is the indication that the cloud vendors themselves are not providing this visibility. They are not providing enough information for people to be able to properly understand how service delivery might be impacting their own businesses. You might say that IT is actually flying blind in the clouds, as it were.
So, one of my next questions was: Across the different monitoring ideas or types, what's needed for the hybrid IT environment? What should people be focusing on? Security, infrastructure visibility, end-user experience monitoring, service delivery monitoring, and cloud costs -- all ranked high on what people believe they need to be able to monitor. Whether you are a provider or a consumer -- and most people end up being both -- monitoring is really key.
 
People say they really need to span infrastructure monitoring, metric monitoring, and end-user, security, and compliance monitoring. But even that's not enough, because to properly govern the service delivery, you are going to have to keep an eye on the costs -- the cost of what's being deployed -- and on how you can optimize the resources according to those costs. You need that analysis whether you are the consumer or the provider.
The last of our survey results shows the need for comprehensive enterprise monitoring. People need things such as high availability, automation, and the ability to cover all types of data to find the root causes of issues, even from a predictive perspective. Clearly, people here expect scalability, and they expect to be able to use a big data platform.
Consumers of cloud services should be measuring what they are receiving and be capable of seeing what's impacting the service delivery. No one is really so naive as to say that infrastructure is somebody else's problem. When it's part of the service -- equally impacting the service that you are paying for and that you are delivering to your business users -- then you had better have the means to see where the weak links are. That should be the minimum to seek, but there is still the need to prove to your providers that they're underperforming and to renegotiate what you pay for.
Ultimately, when you are sticking such composite services together, IT needs to become more of a service broker. IT should be able to govern service delivery and detect when the service is degrading.
When the service degrades, workers' productivity is going to suffer, and the business will expect IT to have the means to reverse that quickly.
So that, Dana, is the set of the different results that we got out of this survey.

A new need for analytics 

Gardner: Thank you, Ian. We’ll now go to Gary Brandt to learn about the need for analytics and how cloud monitoring solutions can be cobbled together anew to address these challenges.
Gary Brandt: Thanks, Dana. As the survey results outlined and as Ian described, there are many challenges and numerous types of monitoring for enterprise hybrid IT environments. With the variety and volume of data generated in these complex hybrid environments, humans simply can't look at dashboards or use traditional tools and make sense of the data efficiently. Nor can they take the necessary actions in a timely manner, given the volume and the complexity of these environments.
So how do we deal with all of this? It's where analytics -- advanced analytics via ML -- really brings value. What's needed is a set of automated capabilities such as those described in Gartner's definition of AIOps. These include traditional and streaming data management, and the ingestion of logs, wire metrics, and documents from many different types of sources across these complex hybrid environments.
Dealing with all of this -- when you are not quite sure where to look and you have all this information coming in -- requires advanced analytics and some clever artificial intelligence (AI)-driven algorithms just to make sense of it. This is what Gartner is really trying to guide the market toward and show where the industry is moving. The key capabilities they speak about are predictive analytics, the capability to find anomalies in vast amounts of data, and then the ability to pinpoint the root cause -- or at least eliminate the noise and focus on those areas.
We are making this Gartner report available for a limited time. What we have also found is that people don't have the time, or often the skill set, to deal with these activities; they need to focus on the business user and on the different issues that come up in these hybrid environments. The AIOps capabilities that Gartner speaks about are great.
But without the automation to drive the activities or the response that needs to occur, there is a missing piece. When we look at our survey results and what our respondents said, it was clear that a share well up in the 90-percent range are telling us that automation is considered highly critical. You need to see which event or metric trend impacts a business service, and whether that service pertains to a local, on-premises solution or a remote solution in a cloud somewhere.
Automation is key, and that requires a degree of service definition and dependency mapping, which really should be automated -- not just so it can be declared more easily, but, more importantly, so it can be kept up to date. In these complex environments, things are changing so rapidly.

Sense and significance of all that data? 

Micro Focus' approach uses analytics to make sense of the vast amount of data coming in from these hybrid environments and to drive automation. The automation of discovery, monitoring, and service analytics is really critical -- it must be applied across hybrid IT to your resources and must map them to the services that you define.
Those are the vast amounts of data that we just described. They come in the form of logs, events, and metrics generated from lots of different sources in a hybrid environment, across cloud and on-premises. You have to begin to use analytics, as Gartner describes, to make sense of that, and we do that in a variety of ways, using ML to learn the behavior of your environment in this hybrid world.
And we need to be able to suggest what the most significant data is -- what the significant information in your messages is -- to really help find the needle in the haystack. When you are trying to solve problems, we have analytics capabilities that provide predictive learning to operators, giving them the chance to anticipate and remediate issues before they disrupt the services in a company's environment.
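To make the idea of behavioral baselining a bit more concrete -- this is only a minimal sketch of the general technique, not Micro Focus' patented analytics -- a pipeline can learn a metric's recent behavior and flag samples that deviate sharply from it:

# Minimal sketch: learn a metric's normal behavior from a rolling window and
# flag outliers with a z-score test. Illustrative only -- not the OpsBridge ML.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=30, threshold=3.0):
    """Yield (index, value) for points that deviate strongly from recent behavior."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) >= window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value  # candidate anomaly for an operator to review
        history.append(value)

# Example: steady CPU utilization with a sudden spike at the end
cpu = [42 + (i % 3) for i in range(60)] + [95]
print(list(detect_anomalies(cpu)))  # -> [(60, 95)]

Real AIOps tooling goes much further -- seasonality, correlation across metrics and logs, learned thresholds -- but the baselining-and-deviation pattern is the core idea.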
And then we take this further, because we have the analytics capability that's described by Gartner and others. We couple that with the ability to execute different types of automation, as a means to let the operator and the operations team spend more time on what's really impacting the business and get to the issues quicker, rather than spending time searching and sorting through that vast amount of data.

And we built this on different platforms. One of the key things that's critical when you have this hybrid environment is to have a common, efficient way to collect and store information, and then to use that data to provide access to different functionality in your system. And we do that in the form of microservices in this complex environment.
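For illustration, a common collection path might look like the following sketch, where every source normalizes its samples into one payload shape and posts them to a central collector microservice. The endpoint, schema, and field names are assumptions for the example, not the actual OpsBridge ingestion interface:

# Hypothetical, normalized collection payload posted to a central collector.
import json
import time
import urllib.request

def publish_sample(source, metric, value,
                   collector="http://collector.example.local/ingest"):
    """Send one normalized metric sample to a central collector service."""
    payload = {
        "source": source,        # e.g. "aws:eu-west-1:i-0abc123" or "onprem:esx-07"
        "metric": metric,        # e.g. "cpu.utilization"
        "value": value,
        "timestamp": time.time(),
    }
    req = urllib.request.Request(
        collector,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

# publish_sample("aws:eu-west-1:i-0abc123", "cpu.utilization", 71.4)

The point is simply that once everything lands in one store with one shape, the analytics and automation described above can operate on all of it together.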
We like to refer to this as autonomous operations and it’s part of our OpsBridge solution, which embodies a lot of different patented capabilities around AIOps. Harald is going to speak to our OpsBridge solution in more detail.

Operations Bridge in more detail  

Gardner: Thank you, Gary. Now that we know more about what users need and consider essential, let’s explore a high-level look at where the solutions are going, how to access and assemble the data, and what new analytics platforms can do.
We’ll now hear from Harald Burose, Director of Product Management at Micro Focus.
Harald Burose: When we listen carefully to the different problems that Ian was highlighting, we actually have a lot of those problems addressed in the Operations Bridge solution that we are currently bringing to market.
All core use cases for Operations Bridge are tied to the underpinning Vertica big data analytics platform. We're consolidating all the different types of data that we are getting -- whether business transactions, IT infrastructure, application infrastructure, or business services data. All of that is actually moved into a single data repository and then reduced in order to understand what the original root cause is.
And from there, these tools like the analytics that Gary described, not only identify the root cause, but move to remediation, to fixing the problem using automation.
This all makes it easy for the stakeholders to understand what the status is, and it provides the right dashboarding and reporting, via the right interface, to the right user across the full hybrid cloud infrastructure.
As we saw, some 88 percent of our customers are connecting their cloud infrastructure to their on-premises infrastructure. We are providing the ability to understand that connectivity through a dynamically updated model, and to show how these services are interconnecting -- independent of the technology -- whether deployed in a public cloud, a private cloud, or even in a classical, non-cloud infrastructure. They can then understand how they are connecting, and they can use the toolset -- a modern HTML5-based interface -- to navigate through it all and look at all the data in one place.
They are able to consolidate information from more than 250 different technologies into a single place: their log files, events, metrics, topology -- everything together to understand the health of their infrastructure. That is the key element that we drive with the Operations Bridge.
Now, we have extended the capabilities further, specifically for the cloud. We basically took the generic capability and made it work specifically for the different cloud stacks, whether private cloud, your own stack implementations, a hyperconverged (HCI) stack, like Nutanix, or a Docker container infrastructure that you bring up on a public cloud like Azure, Amazon, or Google Cloud.
We are now automatically discovering and placing that all into the context of your business service application by using the Automated Service Modeling part of the Operations Bridge.
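To make the service-modeling idea concrete, here is a minimal, hypothetical sketch -- the topology and resource names are invented -- showing how discovered dependencies can be held as a graph and walked to see which business services a failing resource impacts:

# Hypothetical service model: discovered resources and dependencies as a graph.
from collections import defaultdict, deque

# "A depends on B" relationships, as automated discovery might report them
depends_on = {
    "online-banking": ["payment-api", "auth-service"],
    "payment-api":    ["postgres-onprem", "k8s-cluster-azure"],
    "auth-service":   ["k8s-cluster-azure"],
}

# Invert the model so we can ask: who is impacted if X fails?
impacts = defaultdict(list)
for service, deps in depends_on.items():
    for dep in deps:
        impacts[dep].append(service)

def impacted_services(failed):
    """Breadth-first walk from a failed resource up to every dependent service."""
    seen, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for parent in impacts.get(node, []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return sorted(seen)

print(impacted_services("k8s-cluster-azure"))
# -> ['auth-service', 'online-banking', 'payment-api']

Automated Service Modeling does this continuously and at scale, keeping the graph up to date as cloud resources come and go.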
Once we integrate those toolsets, we tightly integrate with the native tools on Amazon or with Docker tools, for example. You can include these tools so that you can then automate processes from within our console.

Customers vote a top choice

And, best of all, we have been getting positive feedback from the cloud monitoring community and from customers. That feedback helped earn us a Readers' Choice Award from Cloud Computing Insider in 2017, ahead of the competition.
This success is not just about getting the data together, using ML to understand the problem, and using our capabilities to connect these things together. At the end of the day, you need to act on what you find.
Having a full-blown orchestration capability within OpsBridge provides more than 5,000 automated workflows, so you can automate different remediation tasks -- or potentially kick off provisioning tasks -- to solve whatever problems you can imagine. You can use this to not only identify the root cause, but also to automatically kick off a workflow that addresses the specific problem.
If you don’t want to address a problem through the workflow, or cannot automatically address it, you still have a rich set of integrated tools to manually address a problem.
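As a simple illustration of that event-to-workflow step -- the event types and runbook names below are invented, and a real deployment would call the orchestration engine rather than print -- the mapping logic can be thought of like this:

# Hypothetical mapping from detected event types to remediation runbooks.
RUNBOOKS = {
    "disk.full":     "cleanup-temp-and-rotate-logs",
    "service.down":  "restart-service-and-verify",
    "cert.expiring": "renew-certificate",
}

def remediate(event):
    """Pick an automated runbook for an event, or escalate to a human operator."""
    runbook = RUNBOOKS.get(event["type"])
    if runbook is None:
        return f"ESCALATE: no automated workflow for {event['type']} on {event['node']}"
    # a real system would invoke the orchestration engine's workflow here
    return f"RUN {runbook} on {event['node']}"

print(remediate({"type": "service.down", "node": "payment-api-3"}))
print(remediate({"type": "unknown.alert", "node": "db-primary"}))

The value of having thousands of prebuilt workflows is precisely that far fewer events fall through to the escalation path.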
Last, but not least, you need to keep your stakeholders up to date. They need to know, anywhere that they go, that the services are working. Our real-time dashboard is very open and can integrate with any type of data -- not just the operational data that we collect and manage with the Operations Bridge, but also third-party data, such as business data, video feeds, and sentiment data. This gets presented on a single visual dashboard that quickly gives the stakeholders the information: Is my business service actually running? Is it okay? Can I feel good about the business services that I am offering to my internal as well as external customer-users?
 
And you can have this on a network operations center (NOC) wall, on your tablet, or on your phone -- wherever you'd like to have that type of dashboard. You can easily create those dashboards using Microsoft Office toolsets, building graphical, very appealing views for your different stakeholders.
Gardner: Thank you, Harald. We are now going to go beyond just the telling, we are going to do some showing. We have heard a lot about what’s possible. But now let’s hear from an example in the field.

Multicloud monitoring in action

Next up is David Herrera, Cloud Service Manager at Banco Sabadell in Barcelona. Let’s find out about this use case and their use of Micro Focus’s OpsBridge solution.
David Herrera: Banco Sabadell is the fourth largest Spanish banking group. We had a big project to migrate several systems into the cloud, and we realized that we didn't have any kind of visibility into what was happening in the cloud.
We are working with private and public clouds, and it's quite difficult to correlate the information across events and incidents. We needed to aggregate this information into just one dashboard, and for that, OpsBridge is the perfect solution for us.
We started to develop new functionality on OpsBridge to customize it for our needs. We had to cooperate with a project development team in order to achieve this.
The main benefit is that we have a detailed view of what is happening in the cloud. In the dashboard, we are able to show availability and the number of resources that we are using -- almost in real time. Also, we are able to show the real-time cost of every resource, and we can even project the cost of the items.
[And that's for] every single item that we have in the cloud now, even across the private and public cloud. The bank has invested a lot of money in this solution, and we need to show them that migrating several systems to the cloud is really a good choice in economic terms, and this tool will help us with that.
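The kind of cost roll-up and projection Herrera describes can be sketched very simply; the resource names and figures below are invented, and real numbers would come from each cloud provider's billing exports:

# Hypothetical per-resource spend to date, extrapolated to month end.
from datetime import date
import calendar

spend_to_date = {  # cost in EUR accrued so far this month
    "azure:vm:web-frontend": 310.40,
    "aws:rds:core-banking-replica": 512.75,
    "onprem:vmware:batch-cluster": 190.00,
}

def project_month_end(costs, today=None):
    """Linear extrapolation of month-to-date spend to the end of the month."""
    today = today or date.today()
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    factor = days_in_month / today.day
    return {resource: round(cost * factor, 2) for resource, cost in costs.items()}

for resource, projected in project_month_end(spend_to_date, date(2018, 5, 10)).items():
    print(f"{resource}: projected {projected} EUR by month end")

In practice the projection would weigh actual usage patterns rather than assume linear spend, but even this simple view makes the cost of each item, and its trajectory, visible on one dashboard.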
 
Our response time will be reduced dramatically because we are able to filter and find what is happening, and call the right people to fix the problem quickly. The business department will understand better what we are doing because they will be able to see all the information, and also select information that we haven't gathered yet. They will be more aligned with our work, and we can develop and deliver better solutions because we will understand them better as well.
We were able to build a new monitoring system from scratch that doesn't exist on the market. Now, we are able to aggregate a lot of detailed information from different clouds.
Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Micro Focus.
Transcript of a discussion exploring a recent survey of more than 500 enterprise cloud professionals and their concerns, requirements, and demands for improving the monitoring, management, and cost control of hybrid and multi-cloud deployments. Copyright Interarbor Solutions, LLC, 2005-2018. All rights reserved.
