
Wednesday, August 30, 2017

Inside Story on Developing the Ultimate SDN-Enabled Hybrid Cloud Object Storage Environment

Transcript of a discussion on how an integrator crafted an innovative storage services capability that provides extensibility into such realms as hybrid IT and multi-cloud support.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success stories. Stay with us now to learn how agile businesses are fending off disruption -- in favor of innovation.

Our next inside story interview explores how a software-defined data center (SDDC)-focused systems integrator developed an ultimate open-source object storage environment. We’re going to learn how Key Information Systems crafted a storage capability that may have broad extensibility into such realms as hybrid cloud and multi-cloud support.


Here to help us better understand a new approach to open-source object storage is Clayton Weise, Director of Cloud Services at Key Information Systems in Agoura Hills, California. Welcome, Clayton.

Clayton Weise: Thank you for having me.

Gardner: What prompted you to improve on the way that object storage is being offered as a service? How might this become a new business opportunity for you?

Weise: About this time last year, at Hewlett Packard Enterprise (HPE) Discover, I was wandering the event floor. We had just gotten out of a meeting with SwitchNAP, which is a major data center in Las Vegas, where we had been talking about storage concepts and deployments for their clients.

That discussion evolved into realizing that there are a number of clients inside of Switch and their ecosystem that could make use of storage that was more locally based, that needed to be closer at hand. There were cost savings to be gained if you have a connection within the same data center, or within the same fiber network.

Pulling data in and out of a cloud

Under this model, there would be significantly less expensive ways of pulling data in and out of a cloud, since you wouldn’t have transfer fees as you normally would. There would also be advantages in privacy and lower latency, among other benefits, because it is a private network run entirely by Switch over their fiber. So we looked at this and thought it might be interesting.
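
To put that transfer-fee point in rough numbers, here is a back-of-the-napkin sketch; the rates and volumes below are illustrative assumptions, not figures from the discussion.

```python
# Back-of-the-napkin comparison of public-cloud egress fees versus a flat
# private cross-connect. All rates and volumes below are assumptions for
# illustration only.
GB_PER_TB = 1000                 # decimal GB, as cloud billing typically uses

egress_rate_per_gb = 0.09        # assumed public-cloud egress rate, $/GB
data_moved_tb = 50               # assumed monthly data pulled out of the cloud

egress_cost = data_moved_tb * GB_PER_TB * egress_rate_per_gb
print(f"Public-cloud egress for {data_moved_tb} TB/month: ${egress_cost:,.0f}")

# Over a private cross-connect inside the same facility or fiber network,
# the same transfer carries no per-GB fee -- only a flat port charge.
cross_connect_fee = 300          # assumed flat monthly cross-connect fee, $
print(f"Private cross-connect: ${cross_connect_fee}/month, regardless of volume")
```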

In discussions with a number of groups within HPE while wandering the floor at Discover, we found some pretty interesting ways we could play games with the network to allow clients not to have to uproot the way they do things, or be forced to do things, for lack of a better term, “our way.”

If you go to Amazon Web Services or Microsoft Azure, you do it the Amazon way or the Microsoft way. You don’t really have a choice, since you have to follow their guidelines.

Where we saw value is in the midmarket space -- clients ranging from a couple of hundred million dollars up to maybe a couple of billion dollars in annual revenue -- where object storage is generally used as a kind of inexpensive way to store archival or less-frequently accessed data. So cloud storage became an alternative to tape for long-term storage.

We've had this massive explosion of unstructured data, files, and all sorts of things. We have a number of clients in medical and finance, and they have just seen this huge spike in data.

The challenge is that deploying your own object storage is a fairly complex operation, and it requires a minimum of multiple petabytes to get started. In the midmarket, companies are not typically measuring their storage at the petabyte level.

These customers are more typically in the tens to hundreds of terabytes range, and so they need an inexpensive way to offload that data and put it somewhere where it makes sense. In the medical industry particularly, there's a lot of concern about putting any kind of patient data up in a public cloud environment -- even with encryption.

We thought that if we are in the same data center, and it is a completely private operation that exists within these facilities, that would fulfill the whole need -- and we can encrypt the data.
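
As a minimal sketch of that encrypt-before-it-leaves idea -- assuming the service exposes an S3-compatible endpoint, which the transcript does not specify -- a client could encrypt data before writing it to the private object store. Every name, endpoint, and credential below is hypothetical.

```python
# Minimal sketch: client-side encryption before writing to a private,
# S3-compatible object store. The endpoint, bucket, and credentials are
# hypothetical; in practice the key would come from the client's own KMS.
import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # in production, fetch from a KMS instead
cipher = Fernet(key)

with open("scan-0142.dcm", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example-facility.net",  # assumed private endpoint
    aws_access_key_id="TENANT_KEY",
    aws_secret_access_key="TENANT_SECRET",
)
s3.put_object(Bucket="medical-archive", Key="scan-0142.dcm.enc", Body=ciphertext)
```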

But we needed a way to support such private-cloud object storage that would be multitenant. Also, we have just had better luck working with open standards. The challenge with proprietary systems is that you end up locked into a standard, and if you pick wrong, you find yourself having to reinvent everything later on.

I come from a networking background; I was an Internet plumber for many years. We saw the transition then on our side when routing protocols first got introduced. There were proprietary routing protocols, and there were open standards, and that’s what we still use today.


So we took a similar approach in object storage as a private-cloud service. We went down the open-source path in how we handled provisioning, and we needed a system that integrated well with that -- one with multitenancy, that understood tenancy, and that is provided by OpenStack. We found a solution from HPE called Distributed Cloud Networking (DCN) that allows us to carve up the network in all sorts of interesting ways, so that we don’t have to dictate to the client how to run it.
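
Weise doesn’t detail the provisioning stack beyond naming OpenStack, but as a hedged sketch of what OpenStack-style tenancy looks like in practice, a client could authenticate against a Keystone project and write to Swift object storage. The auth URL, project names, and credentials below are hypothetical.

```python
# Sketch of tenant-scoped object access with OpenStack Keystone and Swift.
# The auth URL, project, and credentials are hypothetical.
import swiftclient
from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(
    auth_url="https://keystone.example-facility.net:5000/v3",  # assumed
    username="coke-admin",
    password="secret",
    project_name="coke",          # the tenant; "pepsi" would be a separate project
    user_domain_id="default",
    project_domain_id="default",
)
swift = swiftclient.Connection(session=session.Session(auth=auth))

# Containers and objects created here are scoped to the "coke" project;
# another tenant's credentials can never see them.
swift.put_container("backups")
with open("db-dump.tar.gz", "rb") as f:
    swift.put_object("backups", "db-dump.tar.gz", contents=f)
```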

Many clients are still running traditional networks. The adoption of Virtual Extensible LAN (VXLAN) and other types of SDDC networking is still pretty low, especially in the midmarket space. So going to a client and dictating that they have to change how they run their network is not going to work.

And we wanted it to be as simple as possible. We wanted to treat this as much as we could as a flat network. By using a combination of DCN, Altoline switches from HPE, and some other software, we were able to give clients a complete network carrying regular Virtual Local Area Networks (VLANs) across it. We could then tie this together in a hybrid fashion, whereby customers can treat our cloud environment as a natural extension of their existing networks and data centers.
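
To illustrate what carrying a regular VLAN over a virtualized fabric means at the packet level, here is a sketch using scapy. The addresses and IDs are invented, and this models generic VXLAN encapsulation rather than DCN’s specific implementation.

```python
# Sketch of VXLAN encapsulation: the client's ordinary VLAN-tagged frame
# rides unchanged inside a UDP/VXLAN packet across the provider fabric.
# All addresses and IDs are invented for illustration.
from scapy.all import Ether, Dot1Q, IP, UDP, VXLAN

# The client's original frame, tagged with its own VLAN 100.
inner = (Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")
         / Dot1Q(vlan=100)
         / IP(src="10.0.0.5", dst="10.0.0.9"))

# The fabric wraps it: VNI 5001 keeps this tenant's traffic separate,
# even if another tenant also uses VLAN 100 internally.
outer = (Ether()
         / IP(src="192.0.2.1", dst="192.0.2.2")   # tunnel endpoints (VTEPs)
         / UDP(sport=49152, dport=4789)           # 4789 is the VXLAN port
         / VXLAN(vni=5001)
         / inner)

outer.show()  # print the layered packet to see the encapsulation
```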

Gardner: You are calling this hybrid storage as a service. It’s focused on object storage at this point, and you can take this into different data center environments. What are some of the sweet spots in the market?

Weise: The areas where we are seeing the most interest have been backup and archive. It’s an alternative to tape. The object service becomes a very inexpensive way to store large amounts of data, and unlike tape -- where it's inconvenient to access the data -- with object as a service everything is accessible very, very easily.

For customers whose backup software cannot integrate directly with the object service, we can make use of object gateways to provide a method that’s more like traditional access. The gateway looks like a file share, and whatever is written to that share is passed through to the object storage, so it acts as a go-between. For backup and archive, it makes a really, really great solution.
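
As a toy sketch of that go-between (the actual gateway product is not named in the discussion), the core behavior is simply to mirror whatever lands on a file share into an object bucket. The endpoint, share path, and credentials below are hypothetical.

```python
# Toy sketch of an object gateway's core behavior: files written to a
# share are mirrored into an S3-compatible bucket. Endpoint, share path,
# and credentials are hypothetical.
import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example-facility.net",  # assumed endpoint
    aws_access_key_id="TENANT_KEY",
    aws_secret_access_key="TENANT_SECRET",
)

SHARE = "/mnt/backup-share"   # what the backup software sees as a file share

def sync_share(bucket: str = "backup-archive") -> None:
    """Mirror every file under the share into the object bucket."""
    for root, _dirs, files in os.walk(SHARE):
        for name in files:
            path = os.path.join(root, name)
            key = os.path.relpath(path, SHARE)  # relative path becomes the key
            s3.upload_file(path, bucket, key)

sync_share()
```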

Beyond backup, the other two areas where we’ve seen the most interest have been, first, in the medical space, specifically for large medical image files and archival. We’re working now to build that type of solution with HIPAA compliance; we have gone through the audits and compliance verification.

The second use-case has been in the media and entertainment industry. In fact, an entertainment industry client in Burbank, California was the very first to consume this new system, putting in hundreds of terabytes worth of storage. A lot of these shops are just shuffling files around on external drives.

For them it’s often external arrays, and a lot more macOS users. They needed something better, and so hybrid object storage as a service has created a great opportunity for them and allows them to collaborate.

They have a location in Burbank, and then they brought up another office in the UK, with yet another office coming up in Europe. The object storage approach gives them a kind of central repository, an inexpensive place to put the data -- but it also allows them to be more collaborative.

Gardner: We have had a weak link in cloud computing storage, which has been the network -- and you solved some of those issues. You found a prime use-case with backup and archival, but it seems to me that, given the storage capabilities we’ve seen, this has extensibility. So where might it go next in terms of a storage-as-a-service (SaaS) offering that hybrid cloud providers would use? Where can this go?

Carving up the network 

Weise: It’s an interesting question because one of the challenges we have all faced in the world of cloud is we have virtualized servers and virtualized storage, meaning there is disaggregation; there is a separation between the workload that’s running and the actual hardware it’s running on.

In many cases, and for almost all clients in the mid-market, that level of virtualization has not occurred at the network level. We are still nailed to things. We are all tied down to the cable, to the switch port, and to the human that can figure those things out. It’s not as flexible or as extensible as some of the other solutions that are out there.

In our case, when we build this out, the real magic is with the network. That improved connection might be a cost savings for a client -- especially from a bandwidth standpoint. But as you get a private cross-connect into that environment to make use of, in this case, SaaS, we can now carve that up in a number of different ways and allow the client to use it for other things.

For example, if they want to have burst capability within the environment, they can have it -- and it’s on the same network as their existing systems. That’s where it gets really interesting: Instead of having complex virtual guest package (VGP) configurations, and tiny networks, and dealing with the routing of other pieces, you can literally treat our cloud environment as if it’s a network cable thrown over the wall -- it becomes just an extension of the existing network.

That opens up some additional possibilities. Some things to work on eventually would be block storage and file storage, existing right there on the same network. We can secure that traffic and ensure high performance, low latency, and complete separation of tenancy. So if you have Coke and Pepsi as clients, they will never see each other.

Gardner: Very cool. You can take this object storage benefit -- and by the way, the cost can be significantly lower because you don’t have egress charges and some of the other unfriendly economics of the public cloud providers. But you also have an avenue into a true hybrid cloud environment, where you can move data, but also burst workloads and manage that accordingly. Now, what about making this work toward a multi-cloud capability?


Weise: Right. This is where HPE’s DCN software-defined networking (SDN) really starts to shine and separates itself from the pack. We can tie environments together regardless of where they are. Whether it’s a virtual endpoint or a physical appliance deployed at a remote location, it can act as a gateway that links everything together.

We can take a client network that runs from their environment into our environment, deploy a small virtual machine inside of a public cloud, and it will tie the networks together and allow them to treat it all as the same network. The same policy-enforcement engine and tools they use to segregate traffic -- microsegmentation and service chaining -- can be used just as easily in the public cloud environment.

One of the reasons we went to Switch was because they have multiple locations. In the case of our object storage, we deployed it across all three of their data center sites, so data written to a single repository is distributed among three different regions. This protects against a regional outage making data inaccessible -- the kind of thing we have recently seen in the US, where clients were down anywhere from 6 to 16 hours.
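
The replication happens inside the storage system itself, but conceptually it amounts to the fan-out sketched below; the three endpoints are hypothetical stand-ins for Switch’s sites.

```python
# Conceptual sketch of three-region object replication. In the real
# service this fan-out happens inside the storage system; the endpoints
# here are hypothetical stand-ins for the three data center sites.
import boto3

REGION_ENDPOINTS = [
    "https://objects-lasvegas.example.net",
    "https://objects-reno.example.net",
    "https://objects-grandrapids.example.net",
]

def put_everywhere(bucket: str, key: str, body: bytes) -> None:
    """Write one object to all three regions, so a regional outage
    (the 6-to-16-hour kind mentioned above) still leaves two live copies."""
    for endpoint in REGION_ENDPOINTS:
        s3 = boto3.client(
            "s3",
            endpoint_url=endpoint,
            aws_access_key_id="TENANT_KEY",
            aws_secret_access_key="TENANT_SECRET",
        )
        s3.put_object(Bucket=bucket, Key=key, Body=body)

with open("db-dump.tar.gz", "rb") as f:
    put_everywhere("backup-archive", "db-dump.tar.gz", f.read())
```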

One big network, wherever you are

This eliminates that. The nice thing is that, because of the HPE network technology they use, we can treat it all as one big network -- and we can carve that up and virtualize it. So for clients inside of the data center that need resources for disaster recovery, or for additional backups, it’s all part of that. We can tie in from a network standpoint regardless of where you want to be -- if you are in Vegas, you may want to recover in Reno, or in Grand Rapids. We can make that network look exactly the same in your location.

You want to recover in AWS? You want to recover in Azure? We can tie it in that way, too. So it opens up these great possibilities that allow a true hybrid cloud -- not a completely separate entity.

Gardner: Very cool. Now there’s nothing wrong, of course, with Switch, but there are other fiber and data center folks out there. Some names that begin with “E” come to mind that you might want to drop into this, and that should further increase the opportunity for distribution.

Weise: That’s right. This initial deployment is focused on Switch, but we do have a grand scheme to work this into other data centers. There are a handful of major data center operators out there, including the one that starts with an “E” along with another that starts with a “D.” We do have plans to expand this, and to use this as a success use-case.

As this continues to grow, and we get some additional momentum and some good feedback, and really refine the offering to make sure we know exactly what everything needs to be, then we can work with those other data center providers.

From the data center operators’ perspective, if you’re one of those facilities, you are at war with AWS and Azure, because whenever clients deploy their workloads in those public clouds, that is equipment that has not been collocated inside one of your facilities.

So they have a vested interest in doing this, and there is a benefit to the clients inside of those facilities, too: They get to live inside of the ecosystem that exists within those data centers, and the private networks carried there deliver the same benefits to everyone in that ecosystem.

We do plan to use this hybrid cloud object storage-as-a-service capability as a model to deploy in several other data center environments. And it is not only a multitenant private cloud; the same model could operate as a dedicated private cloud for clients with a large enough need. Once you are talking multi-petabyte scale, or thousands of virtual machines, it becomes a question of whether you should do a private cloud deployment just for you. The same technology, fulfilling the same requirements, with the same solutions, could still be used.

Partners in time

Gardner: It sounds like it makes sense, on a back-of-the-napkin basis, for you and HPE to get together and brand something along these lines and go to market together with it.

Weise: It certainly does. We’ve had some great discussions with them. Actually, there is a group that was popular in Europe and is now starting to grow here in the US, called Cloud28+.

We have had some great discussions with them, too. We are going to be joining that, and it’s a great thing as well.

The goal is building out this sort of partner network, and HPE has been extremely supportive in working with us to do that. In addition to these crazy ideas, I also had a really crazy timeline for deployment. When we initially met with HPE and talked about what we wanted to do, they estimated that I should reserve about 6 to 8 weeks for planning and then another 1.5 months for deployment.


I said, “Great, we have 3 weeks to do the whole thing,” and everyone thought we were crazy. But we actually had it completed in a little over 2.5 weeks. So a huge amount of thanks goes to HPE and their technical services group, who were able to assist us in getting this going extremely quickly.

Gardner: It's an interesting and impressive use-case and go-to-market opportunity. I really appreciate you telling us about your object storage-as-a-service hybrid cloud environment.

Good luck with that! I would like to extend our thanks to our guest, Clayton Weise, Director of Cloud Services at Key Information Systems in Agoura Hills, California.

Weise: Thank you.

Gardner: And also a big thank you to our audience for joining this BriefingsDirect Voice of the Customer digital transformation success story. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews. Thanks again for listening. Please pass this along to your cohorts in the IT community, and feel free to come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how an integrator crafted an innovative storage services capability that provides extensibility into such realms as hybrid IT and multi-cloud support. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.

