
Friday, August 17, 2018

New Strategies Emerge to Stem the Costly Downside of Complex Cloud Choices

Transcript of a discussion on what causes haphazard cloud use, and how new tools, processes, and methods are bringing actionable analysis to regain control over hybrid IT sprawl.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Analyst podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest insights into successful digital transformation.

This hybrid IT management strategies interview explores how jerry-rigged approaches to cloud adoption at many organizations have spawned complexity amid spiraling -- and even unknown -- costs.

We’ll hear now from an IT industry analyst about what causes unwieldy cloud use, and how new tools, processes, and methods are bringing insights and actionable analysis to regain control over hybrid IT sprawl.

Here to help us explore new breeds of hybrid and multicloud management solutions is Rhett Dillingham, Vice President and Senior Analyst at Moor Insights and Strategy. Welcome, Rhett.

Rhett Dillingham: Thank you. Glad to be with you.

Gardner: Rhett, what are some of the drivers making hybrid and multicloud adoption so complex?

Dillingham: Regardless of how an enterprise has invested in public and private cloud use for the last decade, a lot of them ended up in a similar situation. They have a footprint on at least one or multiple public clouds. This is in addition to their private infrastructure, in whatever degree that private infrastructure has been cloud-enabled and turned into a cloud API-available infrastructure to their developers.

They have this footprint then across the hybrid infrastructure and multiple public clouds. Therefore, they need to decide how they are going to orchestrate on those various infrastructures -- and how they are going to manage them in terms of controlling costs, security, and compliance. They are operating cloud-by-cloud, versus operating as a consolidated group of infrastructures that use common tooling. This is the real wrestling point for a lot of them, regardless of how they got here.

Gardner: Where are we in this as an evolution? Are things going to get worse before they get better in terms of these levels of complexity and heterogeneity?

Dillingham: We’re now at the point where this is so commonly recognized that we are well into the late majority of adopters of public cloud. The vast majority of the market is in this situation. And from an enterprise market perspective, it’s going to get worse.

We are also at the inflection point of requiring orchestration tooling, particularly with the advent of containers. Container orchestration is getting more mature in a way that is ready for broad adoption and trust by enterprises, so they can make bets on that technology and the platforms based on them.

Control issues 

On the control side, we’re still in the process of sorting out the tooling. You have a number of vendors innovating in the space, and there have been a number of startup efforts. Now, we’re seeing more of the historical infrastructure providers invest in the software capabilities and turning those into services -- whether it’s Hewlett Packard Enterprise (HPE), VMware, or Cisco, they are all making serious investments into the control aspect of hybrid IT. That’s because their value is private cloud but extends to public cloud with the same need for control.

Gardner: You mentioned containers, and they provide a common denominator approach so that you can apply them across different clouds, with less arduous and specific work than deploying without containerization. The attractiveness of containers comes because the private cloud people aren’t going to help you deal with your public cloud deployment issues. And the public clouds aren’t necessarily going to help you deal with other public clouds or private clouds. Is that why containers are so popular?
Dillingham: If you go back to the fundamental basis of adoption of cloud and the value proposition, it was first and foremost about agility -- more so than cost efficiency. Containers are a way of extending that value, and getting much deeper into speed of development, time to market, and for innovation and experimentation.

Containerization is an improvement geared toward that agility value, and it furthers cloud adoption. It is not a stark difference from virtual machines (VMs) in the sense of how the vendors support and view it.

So, I think a different angle on that would be that the use of VMs in public cloud was step one, and containers are a significant step two that comes with an improved path to the agility and speed value. The value the vendor ecosystem is bringing with the platforms -- and how that works in a portable way across hybrid infrastructures and multi-cloud -- is more easily delivered with containers.

There’s going to be an enterprise world where orchestration runs specific to cloud infrastructure, public versus private, but different on various public clouds. And then there is going to be more commonality with containers by virtue of the Kubernetes project and Cloud Native Computing Foundation (CNCF) portfolio.

That’s going to deliver for new applications -- and those lifted and shifted into containers -- much more seamless use across these hybrid infrastructures, at least from the control perspective.
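To make that portability argument concrete, here is a minimal sketch, using the official Kubernetes Python client, that applies one and the same workload definition to a private cluster and a public-cloud cluster. The kubeconfig context names and the container image are placeholder assumptions for illustration, not specifics from this discussion.

```python
# Minimal sketch: apply the same container workload definition to multiple
# Kubernetes clusters (a private cluster and a public-cloud cluster).
# Assumes kubeconfig contexts named "private-cluster" and "aws-cluster" exist;
# the context names and image are placeholders for illustration only.
from kubernetes import client, config

DEPLOYMENT = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "demo-app"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "demo-app"}},
        "template": {
            "metadata": {"labels": {"app": "demo-app"}},
            "spec": {
                "containers": [
                    {
                        "name": "demo-app",
                        "image": "nginx:1.25",
                        "ports": [{"containerPort": 80}],
                    }
                ]
            },
        },
    },
}

def deploy(context_name: str, namespace: str = "default") -> None:
    """Apply the same Deployment spec to the cluster behind a kubeconfig context."""
    api_client = config.new_client_from_config(context=context_name)
    apps = client.AppsV1Api(api_client)
    apps.create_namespaced_deployment(namespace=namespace, body=DEPLOYMENT)

for ctx in ("private-cluster", "aws-cluster"):
    deploy(ctx)
```

The manifest itself does not change between environments; only the cluster credentials do, which is the portability point being made above.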

Gardner: We seem to be at a point where the number of cloud options has outstripped the ability to manage them. In a sense, the cart is in front of the horse; the horse being hybrid cloud management. But we are beginning to see more such management come to the fore. What does this mean in terms of previous approaches to management?

In other words, a lot of organizations already have management for solving a variety of systems heterogeneity issues. How should the new forms of management for cloud have a relationship with these older management tools for legacy IT?

Dillingham: That is a big question for enterprises. How much can they extend their existing toolsets to public cloud?

A lot of the vendors from the private [infrastructure] sector invested in delivering new management capabilities, but that isn’t where many started. I think the rush to adoption of public cloud -- and the focus on agility over cost-efficiency -- has driven a predominance of the culture of, “We are going to provide visibility and report and guide, but we are not going to control because of the business value of that agility.”
The tools have grown up as a delivery on visibility but not the control of the typical enterprise private infrastructure approach, which is set up for a disruptive orientation to the software and not continuity.

And the tools have grown up as a delivery on that visibility, versus the control of the typical enterprise private infrastructure approach, which is set up for a disruptive orientation to the software and not continuity. That is an advantage to vendors in those different spheres. I see that continuing.

Gardner: You mentioned both agility and cost as motivators for going to hybrid cloud, but do we get to the point where the complexity and heterogeneity spawn a lack of insight and control? Do we get to the point where we are no longer increasing agility? And that means we are probably not getting our best costs either.

Are we at a point where the complexity is subverting our agility and our ability to have predictable total costs?

Growing up in the cloud 

Dillingham: We are still a long way from maturity in effective use of cloud infrastructure. We are still at a point where just understanding what is optimal is pretty difficult across the various purchase and consumption options of public cloud by provider, and in comparing that to an accurate cost model for private infrastructure. So, the tooling needs to be in place to support this.
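To illustrate the kind of arithmetic that comparison involves, here is a minimal sketch. Every rate in it is a made-up placeholder rather than real provider pricing; actual tooling would pull current price lists and a validated internal cost model.

```python
# Minimal sketch of comparing a workload's monthly cost across placement options.
# Every rate here is an illustrative placeholder, not real pricing; real tooling
# would pull rates from provider price lists and an internal cost model.

HOURS_PER_MONTH = 730

# Hypothetical $/hour for a comparable instance size on each option.
options = {
    "public-cloud-a-on-demand": 0.20,
    "public-cloud-a-1yr-commitment": 0.13,
    "public-cloud-b-on-demand": 0.19,
    "private-cloud-amortized": 0.11,  # hardware, power, space, and staff amortized
}

def monthly_cost(rate_per_hour: float) -> float:
    """Steady-state monthly cost for one always-on instance."""
    return rate_per_hour * HOURS_PER_MONTH

for name, rate in sorted(options.items(), key=lambda item: item[1]):
    print(f"{name:30s} ${monthly_cost(rate):8.2f} per month")
```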

There has been a lot of discussion recently about HPE OneSphere from Hewlett Packard Enterprise, where they have invested in delivering some of this comparability and the analytics to enable better decision-making. I see a lot of innovation in that space -- and that’s just the tooling.

There is also the management of the services, where the cloud managed service provider market is continuing to develop beyond just a brokering orientation. There is more value now in optimizing an enterprise’s footprint across various cloud infrastructures on the basis of optimal agility -- and also in creating value from services that can differentiate among different infrastructures, be it Amazon Web Services (AWS), Microsoft Azure, Google, and so forth, and provide the cost comparisons.

Gardner: Given that it’s important to show automation and ongoing IT productivity, are these new management tools including new levels of analytics, maybe even predictive insights, into how workloads and data can best become fungible -- and moved across different clouds -- based on the right performance and/or cost metrics?

Is that part of the attractiveness to a multi- and cross-cloud management capability? Does hybrid cloud management become a slippery slope toward impressive analytics and/or performance-oriented automation?

Dillingham: We’ve had investment in the tooling from the cloud providers, the software providers, and the infrastructure providers. Yet the insights have come more from the professional services’ realm than they have from the tooling realm. That’s provided a feedback loop that can now be applied across hybrid- and multi-cloud in a way that hasn’t come from the public cloud provider tools themselves.
So, where I see the most innovation is from the providers that are trying to address multi-cloud environments and best feed innovation from their customer engagements from professional services. I like the opportunity HPE has to benefit from their acquisitions of Cloud Technology Partners and RedPixie, and then feeding those insights back into [product development]. I’ve seen a lot of examples about the work they’re doing in HPE OneSphere in moving those insights into action for customers through analytics.

Gardner: I was also thinking about the Nimble acquisition, and with InfoSight, and the opportunity for that intellectual property to come to bear on this, too.

Dillingham: Yes, which is really harvesting the value of the control and insights of the private infrastructure and the software-defined orientation of private infrastructure in comparison to the public cloud options.

Gardner: Tell us about Rhett Dillingham. You haven’t been an IT industry analyst forever. Please tell us a bit about your background.

Dillingham: I’ve been a longtime product management leader. I started in hardware, at AMD, and moved into software. Before the cloud days, I was at Microsoft. Next I was building out the early capabilities at AWS, such as Elastic Compute Cloud (EC2) and Elastic Block Store (EBS). Then I went into a portfolio of services at Rackspace, building those out at the platform level and the overall Rackspace public cloud. As the value of OpenStack matured into private use, I worked with a number of enterprises on private OpenStack cloud deployments.

As an analyst, I support product management-oriented, consultative, and go-to-market positioning for our clients.

Gardner: Let’s dwell on the product management side for a bit. Given that the market is still immature, given what you know customers are seeking for a hybrid IT end-state, what should vendors such as HPE be doing in order to put together the right set of functions, processes, and simplicity -- and ultimately, analytics and automation -- to solve the mess among cloud adoption patterns and sprawl?

Clean up the cloud mess 

Dillingham: We talked about automation and orchestration, talked about control of cost, security, and compliance. I think that there is a tooling and services spectrum to be delivered on those. The third element that needs to be brought into the process is the control structure of each enterprise, of what their strategy is across the different infrastructures.

Where are they optimizing on cost based on what they can do in private infrastructure? Where are they setting up decision processes? What incremental services should be adopted? What incremental clouds should be adopted, such as what an Oracle and an IBM are positioning their cloud offerings to be for adoption beyond what’s already been adopted by a client in AWS, Google, and Azure?
The third element that needs to be brought into the process is the control structure of each enterprise, of what their strategy is across the different infrastructures.

I think there’s a synergy to be had across those needs. This spans from the software and services tooling, into the services and managed services, and in some cases when the enterprise is looking for an operational partner.

Gardner: One of the things that I struggle with, Rhett, is not just the process, the technology and the opportunity, but the people. Who in a typical enterprise IT organization should be tasked with such hybrid IT oversight and management? It involves more than just IT.

To me, it’s economics, it’s procurement, it’s contracts. It involves a bit more than red light, green light … on speed. Tell me about who or how organizations need to change to get the right people in charge of these new tools.

Who’s in charge?

Dillingham: More than the individuals, I think this is about the recognition of the need for partnerships between the business units, the development organizations, and the operational IT organization’s arm of the enterprise.

The focus on agility for business value had a lot of the cloud adoption led by the business units and the application development organizations. As the focus on maturity mixes in the control across security and compliance, those are traditional realms of the IT operational organization.

Now there’s the need for a decision structure around sourcing -- where how they value incremental capabilities from more clouds and cloud providers is a decision of tradeoffs and complexity. As you were mentioning, it means weighing the incremental value of an additional provider or an incremental service against portability across them.

What I am seeing in the most mature setups are partnerships across the orientations of those organizations. That includes the acknowledgment and reconciliation of those tradeoffs in long-term portability of applications across infrastructures – against the value of adoption of proprietary capabilities, such as deeper cognitive machine learning (ML) automation and Internet of Things (IoT) capabilities, which are some of the drivers of the more specific public cloud platform uses.

Gardner: So with adopting cloud, you need to think about the organizational implications and refactor how your business operates. This is not just bolting on a cloud capability. You have to rethink how you are doing business across the board in order to take full advantage.

Dillingham: There is wide recognition of that theme. It gets into the nuts and bolts as you adopt a platform and determine exactly how the operations function and roles are going to be defined. It means determining who is going to handle what, such as how much you are going to empower developers to do things themselves. With the accountability that results, more tradeoffs are there for them in their roles. But there is almost an over-rotation toward that focus, out of recognition of it, and an undervaluing of the more senior-level decision-making about what the cloud strategy is.
I hear a lot of cloud strategies that are as simple as, “Yes, we are allowing and empowering adoption of cloud by our development teams,” without the second-level recognition of the need to have a strategy for what the guidelines are for that adoption – not in the sense of just controlling costs, but in the sense of: How do you view the value of long-term portability? How do you value strategic sourcing and the ability to negotiate across these providers long-term with evidence and demonstrable portability of your application portfolio?

Gardner: In order to make those proper calls on where you want to go with cloud and to what degree, across which provider, organizations like HPE are coming up with new tools.

So we have heard about HPE OneSphere. We are now seeing HPE’s GreenLake Hybrid Cloud, which is a use of HPE OneSphere management as a service. Is that the way to go? Should we think of cloud management oversight and optimization as a set of services, rather than a product or a tool? It seems to me that a set of services, with an ecosystem behind them, is pretty powerful.

A three-layer cloud 

Dillingham: I think there are three layers to that. One is the tool, whether that is consumed as software or as a service.

Second is the professional consultative services around that, to the degree that you as an enterprise need help getting up to speed in how your organization needs to adjust to benefit from the tools and the capabilities the tools are wrangling.

And then third is a decision on whether you need an operational partner from a managed service provider perspective, and that's where HPE is stepping up and saying we will handle all three of these. We will deliver your tools in various consumption models on through to a software-as-a-service (SaaS) delivery model, for example, with HPE OneSphere. And we will operate the services for you beyond that SaaS control portal into your infrastructure management, across a hybrid footprint, with the HPE GreenLake Hybrid Cloud offering. It is very compelling.
HPE is stepping up with OneSphere and saying they will handle delivery of tools, SaaS models, and managed cloud services -- all through a control portal.

Gardner: With so many moving parts, it seems that we need certain things to converge, which is always tricky. So to use the analogy of properly intercepting a hockey puck, the skater is the vendor trying to provide these services, the hockey puck is the end-user organization that has complexity problems, and the ice is a wide-open market. We would like to have them all come together productively at some point in the future.

We have talked about the vendors; we understand the market pretty well. But what should the end-user organizations be starting to do and think in order for them to be prepared to take advantage of these tools? What should be happening inside your development, your DevOps, and that larger overview of process and organization in order to say, “Okay, we’re going to take advantage of that hockey player when they are ready, so that we can really come together and be proficient as a cloud-first organization?”

Commit to an action plan

Dillingham: You need to have a plan in place for each element we have talked about. There needs to be a plan in place for how you are maturing your toolset in cloud-native development… how you are supporting that on the development side from a continuous integration (CI) and continuous delivery (CD) perspective; how you are reconciling that with the operational toolset and the culture of operating in a DevOps model with whatever degree of iterative development you want to enable.

Is the tooling in place from an orchestration and development capability and operations perspective, which can be containers or not? And that gets into container orchestration and the cloud management platforms. There is the control aspect: What tooling are you going to apply there, how are you going to consume it, and how much do you want to provide as a consultative offer? And then how much do you want those options managed for you by an operational partner? And then how are you going to set up your decision-making structure internally?

Every element of that is where you need to be maturing your capabilities. A lot of the starting baseline for the consultative value of a professional services partner is walking you through the decision-making that is common to every organization on each of those fronts, and then enabling a deep discussion of where you want to be in 3, 5, or 10 years, and deciding proactively.

More importantly than anything, what is the goal? There is a lot of oversimplification of what the goal is – such as adoption of cloud and picking of best-of-breed tools -- without a vision yet for where you want the organization to be and how much it benefits from the agility and speed value, and the cost efficiency opportunity.

Gardner: It’s clear that those organizations that can take that holistic view, that have the long-term picture in mind, and can actually execute on it, have a significant advantage in whatever market they are in. Is that fair?
Dillingham: It is. And one thing that I think we tend to gloss over -- but does exist -- is a dynamic where some of the decision-makers are not necessarily incentivized to think and consider these options on a long-term basis.

The folks who are in role, often for one to three years before moving to a different role or a different enterprise, are going to consider these options differently than someone who has been in role for 5 or 10 years and intends to be there through this full cycle and outcome. I see those decisions made differently, and I think sometimes the executives watching this transpire are missing that dynamic and allowing some decisions to be made that are more short-term oriented than long-term.

Gardner: Maybe people at the board of directors’ level should familiarize themselves more with cloud management capabilities as we go forward.

I’m afraid we’re going to have to leave it there. We have been exploring how jerry-rigged approaches to cloud adoption at many organizations have spawned complexity and spiraling costs. And we have also learned about new breeds of hybrid and multi-cloud management solutions that are bringing insights and even actionable analysis to help regain control over hybrid IT sprawl.

So please join me in thanking our guest, Rhett Dillingham, Vice President and Senior Analyst at Moor Insights and Strategy. Thank you so much, Rhett.

Dillingham: It’s been a pleasure, Dana.

Gardner: And a big thank you to our audience as well for joining this BriefingsDirect Voice of the Analyst hybrid IT management strategies interview.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host on this ongoing series of Hewlett Packard Enterprise-sponsored discussions. Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on what causes haphazard cloud use, and how new tools, processes, and methods are bringing actionable analysis to regain control over hybrid IT sprawl. Copyright Interarbor Solutions, LLC, 2005-2018. All rights reserved.


Monday, May 07, 2018

How HudsonAlpha Transforms Hybrid Cloud Complexity Into an IT Force Multiplier

Transcript of a discussion on how HudsonAlpha is testing a new Hewlett Packard Enterprise solution, OneSphere, to gain a simple and more common interface to manage hybrid computing.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.
Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success stories. Stay with us now to learn how agile businesses are fending off disruption -- in favor of innovation.
Our next hybrid IT management success story examines how the nonprofit research institute HudsonAlpha improves how it harnesses and leverages a spectrum of IT deployment environments. We’ll now learn how HudsonAlpha has been testing a new Hewlett Packard Enterprise (HPE) solution, OneSphere, to gain a common and simplified management interface to rule them all.
Here to help explore the benefits of improved levels of multi-cloud visibility and process automation is Katreena Mullican, Senior Architect and Cloud Whisperer at HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. Welcome, Katreena.
Katreena Mullican: Thank you, Dana. Thank you for having me as a part of your podcast.
Gardner: We’re delighted to have you with us. What’s driving the need to solve hybrid IT complexity at HudsonAlpha?
Mullican: The big drivers at HudsonAlpha are the requirements for data locality and ease-of-adoption. We produce about 6 petabytes of new data every year, and that rate is increasing with every project that we do.
We support hundreds of research programs with data and trend analysis. Our infrastructure requires quickly iterating to identify the approaches that are both cost-effective and the best fit for the needs of our users.
Gardner: Do you find that having multiple types of IT platforms, environments, and architectures creates a level of complexity that’s increasingly difficult to manage?
Mullican: Gaining a competitive edge requires adopting new approaches to hybrid IT. Even carefully contained shadow IT is a great way to develop new approaches and attain breakthroughs.
Gardner: You want to give people enough leash where they can go and roam and experiment, but perhaps not so much that you don’t know where they are, what they are doing.

Software-defined everything


Mullican: Right. “Software-defined everything” is our mantra. That’s what we aim to do at HudsonAlpha for gaining rapid innovation.
Gardner: How do you gain balance from too hard-to-manage complexity, with a potential of chaos, to the point where you can harness and optimize -- yet allow for experimentation, too?
Mullican: IT is ultimately responsible for the security and the up-time of the infrastructure. So it’s important to have a good framework on which the developers and the researchers can compute. It’s about finding a balance between letting them have provisioning access to those resources versus being able to keep an eye on what they are doing. And not only from a usage perspective, but from a cost perspective, too.

Gardner: Tell us about HudsonAlpha and its fairly extreme IT requirements.
Mullican: HudsonAlpha is a nonprofit organization of entrepreneurs, scientists, and educators who apply the benefits of genomics to everyday life. We also provide IT services and support for about 40 affiliate companies on our 150-acre campus in Huntsville, Alabama.
Gardner: What about the IT requirements? How do you fulfill that mandate using technology?
Mullican: We produce 6 petabytes of new data every year. We have millions of hours of compute processing time running on our infrastructure. We have hardware acceleration. We have direct connections to clouds. We have collaboration for our researchers that extends throughout the world to external organizations. We use containers, and we use multiple cloud providers. 
Gardner: So you have been doing multi-cloud before there was even a word for multi-cloud?
Mullican: We are the hybrid-scale and hybrid IT organization that no one has ever heard of.
Gardner: Let’s unpack some of the hurdles you need to overcome to keep all of your scientists and researchers happy. How do you avoid lock-in? How do you keep it so that you can remain open and competitive?

Agnostic arrangements of clouds

Mullican: It’s important for us to keep our local datacenters agnostic, as well as our private and public clouds. So we strive to communicate with all of our resources through application programming interfaces (APIs), and we use open-source technologies at HudsonAlpha. We are proud of that. Yet there are a lot of possibilities for arranging all of those pieces.
There are a lot [of services] that you can combine with the right toolsets, not only in your local datacenter but also in the clouds. If you put in the effort to write the code with that in mind -- so you don’t lock into any one solution necessarily -- then you can optimize and put everything together.
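As a minimal sketch of what writing code "with that in mind" can look like, the snippet below keeps provisioning logic behind a small provider-neutral interface so that no caller depends on a single cloud's SDK. The class and method names are hypothetical, not HudsonAlpha's actual code.

```python
# Minimal sketch of keeping provisioning code provider-agnostic: callers depend
# on a small neutral interface, and each infrastructure (on-prem API, AWS, etc.)
# plugs in behind it. All class and method names here are hypothetical.
from abc import ABC, abstractmethod


class ComputeProvider(ABC):
    """Neutral contract the rest of the codebase depends on."""

    @abstractmethod
    def create_instance(self, name: str, cpus: int, memory_gb: int) -> str:
        """Provision an instance and return a provider-specific identifier."""

    @abstractmethod
    def delete_instance(self, instance_id: str) -> None:
        ...


class OnPremProvider(ComputeProvider):
    def create_instance(self, name, cpus, memory_gb):
        # Call the local datacenter's provisioning API here (e.g., over REST).
        return f"onprem-{name}"

    def delete_instance(self, instance_id):
        pass


class AwsProvider(ComputeProvider):
    def create_instance(self, name, cpus, memory_gb):
        # Call the AWS SDK here, mapping cpus/memory to an instance type.
        return f"aws-{name}"

    def delete_instance(self, instance_id):
        pass


def provision_workload(provider: ComputeProvider) -> str:
    # Application code never references a specific cloud's SDK directly.
    return provider.create_instance("genomics-worker", cpus=8, memory_gb=32)


# Swapping clouds is a one-line change at the call site.
instance_id = provision_workload(AwsProvider())
```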
Gardner: Because you are a nonprofit institute, you often seek grants. But those grants can come with unique requirements, even IT use benefits and cloud choice considerations.

Cloud cost control, granted

Mullican: Right. Researchers are applying for grants throughout the year, and now with the National Institutes of Health (NIH), when grants are awarded, they come with community cloud credits, which is an exciting idea for the researchers. It means they can immediately begin consuming resources in the cloud -- from storage to compute -- and that cost is covered by the grant.
So they are anxious to get started on that, which brings challenges to IT. We certainly don’t want to be the holdup for that innovation. We want the projects to progress as rapidly as possible. At the same time, we need to be aware of what is happening in a cloud and not lose control over usage and cost.
Gardner: Certainly HudsonAlpha is an extreme test bed for multi-cloud management, with lots of different systems, changing requirements, and the need to provide the flexibility to innovate to your clientele. When you wanted a better management capability, to gain an overview into that full hybrid IT environment, how did you come together with HPE and test what they are doing?

Variety is the spice of IT

Mullican: We’ve invested in composable infrastructure and hyperconverged infrastructure (HCI) in our datacenter, as well as blade server technology. We have a wide variety of compute, networking, and storage resources available to us.
The key is: How do we rapidly provision those resources in an automated fashion? I think the key there is not only for IT to be aware of those resources, but for developers to be as well.
We have groups of developers dealing with bioinformatics at HudsonAlpha. They can benefit from all of the different types of infrastructure in our datacenter. What HPE OneSphere does is enable them to access -- through a common API -- that infrastructure. So it’s very exciting.
Gardner: What did HPE OneSphere bring to the table for you in order to be able to rationalize, visualize, and even prioritize this very large mixture of hybrid IT assets?
Mullican: We have been beta testing HPE OneSphere since October 2017, and we have tied it into our VMware ESX Server environment, as well as our Amazon Web Services (AWS) environment successfully -- and that’s at an IT level. So our next step is to give that to researchers as a single pane of glass where they can go and provision the resources themselves.
Gardner: What might this capability bring to you and your organization?

Cross-training the clouds

Mullican: We want to do more with cross-cloud. Right now we are very adept at provisioning within our datacenters, provisioning within each individual cloud. HudsonAlpha has a presence in all the major public clouds -- AWS, Google, Microsoft Azure. But the next step would be to go cross-cloud, to provision applications across them all.
For example, you might have an application that runs as a series of microservices. So you can have one microservice take advantage of your on-premises datacenter, such as for local storage. And then another piece could take advantage of object storage in the cloud. And even another piece could be in another separate public cloud.
But the key here is that our developers and researchers -- the end users of OneSphere -- don’t need to know all of the specifics of provisioning in each of those environments. That is not a level of expertise in their wheelhouse. In this new OneSphere way, all they know is that they are provisioning the application in the pipeline -- and that’s what the researchers will use. Then it’s up to us in IT to come along and keep an eye on what they are doing through the analytics that HPE OneSphere provides.
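A minimal sketch of that placement idea is below: each microservice is mapped to the environment that suits it, and a dispatcher hands it to the right provisioning path, so the researcher only sees the pipeline. The service names, targets, and provisioning call are hypothetical placeholders, not OneSphere's actual API.

```python
# Minimal sketch of per-microservice placement across hybrid infrastructure.
# Service names, targets, and the provisioning call are hypothetical.

PLACEMENT = {
    "ingest-service": "on-prem",      # close to local storage for data locality
    "analysis-service": "aws",        # burst compute in a public cloud
    "reporting-service": "azure",     # another public cloud
}

def provision(service: str, target: str) -> None:
    # In practice this would call a common provisioning API (or a per-target
    # driver), so end users never deal with each environment's specifics.
    print(f"provisioning {service} on {target}")

for service, target in PLACEMENT.items():
    provision(service, target)
```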
Gardner: Because OneSphere gives you the visibility to see what the end users are doing, potentially, for cost optimization and remaining competitive, you may be able to play one cloud off another. You may even be able to automate and orchestrate that.
Mullican: Right, and that will be an ongoing effort to always optimize cost -- but not at the risk of slowing the research. We want the research to happen, and to innovate as quickly as possible. We don’t want to be the holdup for that. But we definitely do need to loop back around and keep an eye on how the different clouds are being used and make decisions going forward based on the analytics.
Gardner: There may be other organizations that are going to be more cost-focused, and they will probably want to dial back to get the best deals. It’s nice that we have the flexibility to choose an algorithmic approach to business, if you will.
Mullican: Right. The research that we do at HudsonAlpha saves lives, and it is of the utmost importance to be able to conduct that research at the fastest possible speed.
Gardner: HPE OneSphere seems geared toward being cloud-agnostic. They are beginning on AWS, yet they are going to be adding more clouds. And they are supporting more internal private cloud infrastructures, and using an API-driven approach to microservices and containers.
The research that we do at HudsonAlpha saves lives, and the utmost importance is to be able to conduct the research at the fastest speed.
As an early tester, and someone who has been a long-time user of HPE infrastructure, is there anything about the combination of HPE Synergy, HPE SimpliVity HCI, and HPE 3PAR intelligent storage -- in conjunction with OneSphere -- that’s given you a ‘whole greater than the sum of the parts’ effect?
 
Mullican: HPE Synergy and composable infrastructure is something that is very near and dear to me. I have a lot of hours invested with HPE Synergy Image Streamer and customizing open-source applications on Image Streamer – open-source operating systems and applications.
The ability to utilize that in the mix that I have architected natively with OneSphere -- in addition to the public clouds -- is very powerful, and I am excited to see where that goes.
Gardner: Any words of wisdom for others who may not have gone down this road yet? What do you advise others to consider as they are seeking to better compose, automate, and optimize their infrastructure?

Get adept at DevOps

Mullican: It needs to start with IT. IT needs to take on more of a DevOps approach.
As far as putting an emphasis on automation -- and being able to provision infrastructure in the datacenter and the cloud through automated APIs -- a lot of companies probably are still slow to adopt that. They are still provisioning with older methods, and I think it’s important that they make that shift. But then, once your IT department is adept with DevOps, your developers can begin feeding from that and using what IT has laid down as a foundation. So it needs to start with IT.
It involves a skill set change for some of the traditional system administrators and network administrators. But now, with software-defined networking (SDN) and with automated deployments and provisioning of resources -- that’s a skill set that IT really needs to step up and master. That’s because they are going to need to set the example for the developers who are going to come along and be able to then use those same tools.
That’s the partnership that companies really need to foster -- and it’s between IT and developers. And something like HPE OneSphere is a good fit for that, because it provides a unified API.
On one hand, your IT department can be busy mastering how to communicate with their infrastructure through that tool. And at the same time, they can be refactoring applications as microservices, and that’s up to the developer teams. So both can be working on all of this at the same time.
Then when it all comes together with a service catalog of options, in the end it’s just a simple interface. That’s what we want, to provide a simple interface for the researchers. They don’t have to think about all the work that went into the infrastructure; they are just choosing the proper workflow and pipeline for future projects.
We want to provide a simple interface to the researchers. They don't have to think about all the work that went into the infrastructure.

Gardner: It also sounds, Katreena, like you are able to elevate IT to a solutions-level abstraction, and that OneSphere is an accelerant to elevating IT. At the same time, OneSphere is an accelerant to the adoption of DevOps, which means it’s also elevating the developers. So are we really finally bringing people to that higher plane of business-focus and digital transformation?

HCI advances across the globe

Mullican: Yes. HPE OneSphere is an advantage to both of those departments, which in some companies can still be quite disparate. Now at HudsonAlpha, we are DevOps in IT. It’s not a distinct department, but in some companies that’s not the case.
And I think we have a lot of advantages because we think in terms of automation, and we think in terms of APIs from the infrastructure standpoint. And the tools that we have invested in, the types of composable and hyperconverged infrastructure, are helping accomplish that.
Gardner: I speak with a number of organizations that are global, and they have some data sovereignty concerns. I’d like to explore, before we close out, how OneSphere also might be powerful in helping to decide where data sets reside in different clouds, private and public, for various regulatory reasons.
Is there something about having that visibility into hybrid IT that extends into hybrid data environments?
Mullican: Data locality is one of our driving factors in IT, and we do have on-premises storage as well as cloud storage. There is a time and a place for both of those, and they do not always mix, but we have requirements for our data to be available worldwide for collaboration.
So, the services that HPE OneSphere makes available are designed to use the appropriate data connections, whether that would be back to your object storage on-premises, or AWS Simple Storage Service (S3), for example, in the cloud.
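As a minimal sketch of that data-locality routing, the snippet below uses boto3 to send data either to AWS S3 or to an on-premises object store, assuming the local store exposes an S3-compatible endpoint. The endpoint URL, bucket names, and file paths are placeholders for illustration.

```python
# Minimal sketch of routing data to the appropriate object store over the same
# S3 API: AWS S3 for worldwide collaboration, or an on-premises S3-compatible
# endpoint for data that should stay local. The endpoint URL, bucket names, and
# paths are placeholders; this assumes the local store speaks the S3 protocol.
import boto3

def s3_client(location: str):
    if location == "on-prem":
        # Point the same client at a local S3-compatible object store.
        return boto3.client("s3", endpoint_url="https://objects.example.internal")
    return boto3.client("s3")  # default AWS endpoints and credentials

def store_results(location: str, bucket: str, key: str, path: str) -> None:
    s3_client(location).upload_file(path, bucket, key)

# Keep raw sequencing output local; share a summary via AWS S3.
store_results("on-prem", "sequencing-raw", "run-042/reads.bam", "reads.bam")
store_results("aws", "collab-shared", "run-042/summary.csv", "summary.csv")
```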
Gardner: Now we can think of HPE OneSphere as also elevating data scientists -- and even the people in charge of governance, risk management, and compliance (GRC) around adhering to regulations. It seems like it’s a gift that keeps giving.

Hybrid hard work pays off

Mullican: It is a good fit for hybrid IT and what we do at HudsonAlpha. It’s a natural addition to all of the preparation work that we have done in IT around automated provisioning with HPE Synergy and Image Streamer.
HPE OneSphere is a way to showcase to the end user all of the efforts that have been, and are being, done by IT. That’s why it’s a satisfying tool to implement, because, in the end, you want what you have worked on so hard to be available to the researchers and be put to use easily and quickly.
Gardner: It was a long time coming, right?
Mullican: Yes, yeah. I think so.
Gardner: I’m afraid we will have to leave it there. We have been exploring how nonprofit research institute HudsonAlpha is better managing its multiple cloud and hybrid IT deployment environments. And we have learned how HPE OneSphere is delivering consolidated and deep insights across multiple clouds and IT deployments at HudsonAlpha, an early beta tester and user.
So please join me in thanking our guest, Katreena Mullican, ‎Senior Architect and Cloud Whisperer at ‎HudsonAlpha Institute for Biotechnology.
Mullican: Thank you very much.
Gardner: And a big thank you to our audience as well for joining us for this BriefingsDirect Voice of the Customer digital transformation success story. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews.
Thanks again for listening. Please pass this content along to your IT community and do come back next time.
Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.
Transcript of a discussion on how HudsonAlpha is testing a new Hewlett Packard Enterprise solution, OneSphere, to gain a simple and more common interface to manage hybrid computing. Copyright Interarbor Solutions, LLC, 2005-2018. All rights reserved.