Monday, August 31, 2009

Harnessing 'Virtualization Sprawl' Requires Managing Your Ecosystem of Technologies

Transcript of a sponsored BriefingsDirect Podcast on how companies need to deal with the complexity that comes from the increasing use of virtualization.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on better managing server virtualization expansion across enterprises. We’ll look at ways that IT organizations can adopt virtualization at deeper levels, or across more systems, data and applications, at lower risk.

As more enterprises use virtualization for more workloads to gain productivity from higher server utilization, we often see what can be called virtualization sprawl: a spreading mixture of hypervisors that leads to complexity and management concerns.

In order to ramp up to broader, and still advantageous, use of virtualization, the pitfalls of heterogeneity need to be managed. Yet, no single hypervisor supplier is likely to deeply support any of the others. So, how do companies gain a top-down perspective of virtualization to encompass and manage the entire ecosystem, rather than just corralling the individual technologies?

Here to help us understand the risks of hypervisor sprawl and how to mitigate the pitfalls to preserve the economic benefits of virtualization is Doug Strain, manager of Partner Virtualization Marketing at HP.

Doug Strain: Thanks, Dana.

Gardner: Help us out. What is the current state of virtualization adoption? Are we seeing significant pickup as a result of the economy? What’s driving all the interest in this?

Strain: Virtualization has been growing very steeply in the last few years anyway, but with the economy, the economic reasons for it are really changing. Initially, companies were using it to do consolidation. They continue to do that, but now the big deal with the economy is consolidation to lower cost -- not only capital cost, but also operating expenses.

Gardner: I imagine the underutilization of servers is like a many-headed dragon. You’ve got footprint, skills, and labor being used up. You’ve got energy consumption. You’ve got the applications and data that might be sitting there that have no real purpose anymore, or all of the above. Is this a big issue?

Underutilized capacity

Strain: It definitely is. There’s a lot of underutilized capacity out there, and, particularly as companies are having more difficulty getting funding for more capital expenses, they’ve got to figure out how to maximize the utilization of the capacity they’ve already bought.

Gardner: And, of course the market around virtualization has been long in building, but we’ve had a number of players, and some dominant players. Do you see any trends about adoption in terms of the hypervisor providers?

Strain: Probably, we’re seeing a little bit of a consolidation in the market, as we get to a handful of large players. Certainly, VMware has been early on in the market, has continued to grow, and has continued to add new capabilities. It's really the vendor to beat.

Of course, Microsoft is investing very heavily in this, and we’ve seen fairly good demand from customers for Hyper-V. And, with some of the things that Microsoft has already announced in their R2 version, they’re going to continue to catch up.

We’ve also got some players like Citrix, who really leverage their dominance in the Presentation Server, now XenApp, market and use that as a great foot in the door for virtualization.

Gardner: That’s a good point. Now, we introduced this as a server virtualization discussion, but virtualization is creeping into a variety of different aspects of IT. We’ve got desktop virtualization now, and what not. Tell us how this is percolating up and out from its core around just servers.

Strain: Desktop virtualization has been growing, and we expect it to grow further. Part of it is just a comfort within IT organizations that they do know how to virtualize. They feel comfortable with the technology, and now putting a desktop workload, instead of a server workload, on that infrastructure is a natural way to extend that and to use resources more wisely.

Probably the biggest difference in the drivers for desktop virtualization is the need to meet compliance regulations, particularly in financial, healthcare, and a lot of other industries, where customer or employee privacy is very important. It makes sure that the data no longer sits on someone’s desk. It stays solely within the data center.

Gardner: So there are a lot of good reasons for virtualizing, and, as you point out, the economy is accelerating that from a pure dollars-and-cents perspective. But this is not just cut and dried. In some respects, you can find yourself getting in too deep and have difficulty navigating what you’ve fallen into.

Easy to virtualize

Strain: That’s definitely true. Because all the major vendors now offer free hypervisors, it becomes so easy to virtualize, and so easy to add additional virtual machines, that it can be difficult to manage if IT organizations don’t do that in a planned way.

Gardner: As I pointed out, it’s difficult to go back to just one of the hypervisor vendors and get that full panoply of services across what you’ve got in place at your particular enterprise, which of course might be different from any other enterprise. What’s the approach now to dealing with this issue about not having a single throat to choke?

Strain: There are a couple of dimensions to that. As you said, most of the virtualization vendors do have management tools, but those tools are really optimized for their particular virtualization ecosystem. In some cases, there is some ability to reach out to heterogeneous virtualization, but it’s clear that that’s not a focus for most of the virtualization players. They want to really focus on their environment.
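
To make that heterogeneity point concrete, here is a minimal sketch of taking a single inventory across hypervisors from different vendors. It uses the open-source libvirt Python bindings rather than any HP tool, the host names and URIs are invented for illustration, and which hypervisor drivers are actually available depends on how libvirt was built.

    import libvirt

    # Hypothetical hosts; libvirt lets one client speak to several
    # hypervisor types through a single API, selected by the URI scheme.
    HOSTS = [
        "esx://esx01.example.com/?no_verify=1",        # VMware ESX host
        "xen+ssh://root@xen01.example.com/",           # Xen host
        "qemu+ssh://root@kvm01.example.com/system",    # KVM host
    ]

    def inventory():
        """Print every virtual machine found on each host, whatever its hypervisor."""
        for uri in HOSTS:
            try:
                conn = libvirt.openReadOnly(uri)
            except libvirt.libvirtError as err:
                print("could not reach %s: %s" % (uri, err))
                continue
            for dom in conn.listAllDomains():
                state, max_mem, mem, vcpus, cpu_time = dom.info()
                print("%-45s %-20s vcpus=%d mem=%d MB" %
                      (uri, dom.name(), vcpus, mem // 1024))
            conn.close()

    if __name__ == "__main__":
        inventory()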

The other piece is that the hardware management is critical here. An example would be, if you’ve got a server that is having a problem, that could very well introduce downtime. You’ve got to have a way of moving the virtual machines, so that those are migrated off of that server.

That’s really an area where HP has tried to invest in pulling all that together, being able to do the physical management with our Insight Control tools, and then tying that into the virtualization management with multiple vendors, using Insight Dynamics – VSE.
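
As an illustration of that kind of response, and not a description of how Insight Dynamics itself works, the sketch below live-migrates every running guest off a host that has been flagged for hardware trouble. The host names are hypothetical, it again uses the libvirt bindings, and it assumes the two hosts share storage so that live migration is possible.

    import libvirt

    SRC_URI = "qemu+ssh://root@failing-host.example.com/system"   # host reporting faults
    DST_URI = "qemu+ssh://root@spare-host.example.com/system"     # healthy target host

    def evacuate():
        """Move every running VM off the failing host before it goes down."""
        src = libvirt.open(SRC_URI)
        dst = libvirt.open(DST_URI)
        for dom in src.listAllDomains():
            if not dom.isActive():
                continue  # only running guests need to move right away
            print("migrating %s ..." % dom.name())
            # VIR_MIGRATE_LIVE keeps the guest running while it moves.
            dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
        src.close()
        dst.close()

    if __name__ == "__main__":
        evacuate()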

Gardner: We’ve discussed heterogeneity when it comes to multiple hypervisors, but we’re also managing heterogeneity when it comes to mixtures of physical and virtual environments. The hypervisor provider isn’t necessarily going to be interested in the physical side.

Strain: That’s exactly right. And, if they were interested, they don’t necessarily have the in-depth hardware knowledge that we can provide from a server-vendor standpoint. So yeah, clearly there are a few organizations that are 100 percent virtualized, but that’s still a very small minority. So, we think that having tools that work consistently both in physical and in virtual environments, and allow you to easily transition between them is really important to customers.

Gardner: All right. How do we approach this? Is this like other areas of IT we’ve seen, where you start at a tactical level and then, over time, it gets too complex and too unwieldy, so you start taking a more strategic overview and come up with methodologies to set some standards? Is this business as usual in terms of a maturation process?

Strain: I think that’s what we’ve seen in the past, but I certainly wouldn’t recommend that to somebody who’s trying to get into virtualization today. There are a lot of ways that you can plan ahead on this and do it in a way that you don’t have to pay a penalty later on.

Capacity assessment

It could be something as simple as doing a capacity assessment, a set of services that goes in and looks at what you’ve got today, how you can best use those resources, and how those can be transitioned. In most cases you’re going to want to have a set of tools like some of the ones I’ve talked about with Insight Control and Insight Dynamics VSE, so that you do have more control of the sprawl and, as you add new virtual machines, you do that in a more intelligent way.
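
The arithmetic behind a basic capacity assessment can be sketched in a few lines. The workload figures, host sizes, and 70 percent ceiling below are invented for illustration; a real assessment service would also look at utilization peaks, I/O, and failover headroom.

    import math

    workloads = [
        # (name, average CPU used in GHz, average memory used in GB)
        ("mail",     1.2,  4),
        ("web-1",    0.8,  2),
        ("web-2",    0.9,  2),
        ("database", 2.5, 12),
        ("file",     0.5,  3),
    ]

    host_cpu_ghz = 2 * 4 * 2.4   # two quad-core 2.4 GHz sockets per host
    host_mem_gb = 32
    ceiling = 0.70               # never plan past 70 percent of a host

    cpu_needed = sum(cpu for _, cpu, _ in workloads)
    mem_needed = sum(mem for _, _, mem in workloads)

    hosts_by_cpu = math.ceil(cpu_needed / (host_cpu_ghz * ceiling))
    hosts_by_mem = math.ceil(mem_needed / (host_mem_gb * ceiling))
    hosts = max(hosts_by_cpu, hosts_by_mem)

    print("CPU demand %.1f GHz, memory demand %d GB" % (cpu_needed, mem_needed))
    print("Estimated hosts needed after consolidation: %d" % hosts)

In this made-up example, memory rather than CPU sets the floor, and the five workloads fit on two virtualized hosts instead of five physical servers.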

Gardner: Tell us a little bit about how that works. I've heard you guys refer to this as "integrated by design." What does that mean?

Strain: We’ve really tried to take all the pieces and make sure that those work together out of the box. One of the things we’ve done recently to up the ante on that is this thing called BladeSystem Matrix. This is really a converged infrastructure that allows customers to purchase a blade infrastructure complete with management tools, services, and a choice of virtualization platforms. They all come together, all work together, are all tested together, and really make that integration seamless.

Gardner: And HP is pretty much neutral on hypervisors. You give the consumer, the customer, the enterprise the choice on their preferred vendor.

Strain: We do. We give them a choice of vendors. The other thing we try to do is give them a choice of platforms. We invest very heavily in certifying across those vendors, across the broadest range of server and storage platforms. What we’re finding is that we can’t say that one particular server or one particular storage is right for everybody. We’ve got to meet the broadest needs for the customers.

Gardner: Let's take a look at how this works in practice. Do you have any examples, customers that have moved in this direction at a significant level already, and perhaps had some story about what this has done for them?

Strain: I’ve just pulled up a recent case study that we did on a transportation company, called TTX Company. I thought this was a good example, because they’d really tried a couple of different paths. They’d originally done mainframes, and realized that the economics of going to x86 servers made a lot more sense.

But, what they found was they had so many servers, they weren’t getting good utilization, and they were seeing the expenses go up, and, at the same time, seeing that they were starting to run out of space in their data center. So, from a pure economic standpoint, they looked at this and said, “Look, we can lower our hardware cost.”

TCO 50 percent lower

In fact, they saw a 10 percent reduction in hardware cost, plus they’re seeing substantial operating-expense reductions: 44 percent lower power cost and a 69 percent reduction in their rack footprint. So, they can now remove equipment from the data center and, compared to their mainframes, they think they have about a 50 percent lower total cost of ownership (TCO).

Gardner: So, if you do this right, these are not just rounding-error improvements. These are pretty substantial.

Strain: These are substantial, and, particularly today, that’s a great way to justify virtualization. What they also found was that, from an IT standpoint, they were much more effective. They project that they can recover much more quickly -- in fact, a 96 percent reduction in recovery time. That's going from 24 hours down to 1 hour.

Likewise, they could deploy new servers much more quickly -- 20 minutes versus 4 hours is what they estimate. They’ve reduced the times they have to actually touch a server by a factor of five.
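
Those percentages line up with the before-and-after figures quoted. As a trivial check using only the numbers above:

    def reduction(before, after):
        """Percentage reduction from a before value to an after value."""
        return 100.0 * (before - after) / before

    print("Recovery time: %.0f%% reduction (24 hours to 1 hour)" % reduction(24, 1))
    print("Server deployment: %.0f%% reduction (4 hours to 20 minutes)" % reduction(240, 20))

Going from 24 hours to 1 hour works out to roughly a 96 percent reduction, as stated, and 4 hours down to 20 minutes is about a 92 percent reduction, or twelve times faster.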

Gardner: So, we’ve seen quite a few new studies that have come out, and virtualization remains in the top tier of concerns and initiatives for enterprises, based on the market research. We’re also seeing interesting things like managing the information explosion and reducing redundancy in stored data. These all come together at a fairly transformative level.

How big a part in what we might consider IT transformation does virtualization play?

Strain: It plays a very substantial role. It’s certainly not the only answer or the only component of data center transformation, but it is a substantial one. And, it's one that companies of almost any size can take advantage of, particularly now, where some of the requirements for extensive shared storage have decreased. It's really something that almost anybody who's got even one or two servers can take advantage of, all the way to the largest enterprises.

Gardner: So, at a time when the incentives and paybacks from virtualization activities are growing, we’re seeing sprawl and we’re seeing complexity. This needs to be balanced out. What do you think is the road map? If we had a crystal ball, from your perspective and knowledge of the market, how do we get both? How do we get the benefits without the pain?

Strain: Clearly, this is an area where the entire industry is investing heavily, and not just in enabling virtualization. That’s been done. There’s still some evolution there, but the steps are getting increasingly smaller. The investment in the industry is around management -- making it simpler to deploy, to move, to allow redundancy, all those kinds of things -- as well as automation.

There are a lot of tasks there, particularly when you think about a virtual machine that can be run on a range of different hardware, even in different data centers. The ability to automate those tasks based on a set of corporate rules really can make IT much more effective.
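
Here is a minimal sketch of what rule-driven automation like that can look like. Every host, virtual machine, rule, and threshold in it is hypothetical, and a real tool would act on live inventory and trigger migrations rather than just print a plan.

    # Corporate rules: a utilization ceiling plus a data-residency style constraint.
    RULES = {
        "max_host_cpu_pct": 75,
        "keep_in_region": {"payroll-db": "us-east"},
    }

    hosts = [
        {"name": "host-a", "region": "us-east", "cpu_pct": 88},
        {"name": "host-b", "region": "us-east", "cpu_pct": 35},
        {"name": "host-c", "region": "eu-west", "cpu_pct": 20},
    ]

    vms = [
        {"name": "payroll-db", "host": "host-a"},
        {"name": "intranet",   "host": "host-a"},
    ]

    def plan_moves(vms, hosts, rules):
        """Return (vm, target host) pairs that bring overloaded hosts back under policy."""
        moves = []
        by_name = {h["name"]: h for h in hosts}
        for vm in vms:
            src = by_name[vm["host"]]
            if src["cpu_pct"] <= rules["max_host_cpu_pct"]:
                continue  # source host is within policy; leave the VM alone
            required_region = rules["keep_in_region"].get(vm["name"])
            candidates = [
                h for h in hosts
                if h["name"] != src["name"]
                and h["cpu_pct"] < rules["max_host_cpu_pct"]
                and (required_region is None or h["region"] == required_region)
            ]
            if candidates:
                # Prefer the least-loaded host that satisfies every rule.
                target = min(candidates, key=lambda h: h["cpu_pct"])
                moves.append((vm["name"], target["name"]))
        return moves

    print(plan_moves(vms, hosts, RULES))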

Gardner: Great. We’ve been talking about better managing server virtualization expansion across enterprises, and we’ve been joined in our discussion by Doug Strain. He is the manager of Partner Virtualization Marketing at HP. We appreciate it, Doug.

Strain: My pleasure.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett Packard.

Transcript of a sponsored BriefingsDirect Podcast on how companies need to deal with the complexity that comes from the increasing use of virtualization. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.
