Tuesday, February 02, 2010

Security, Simplicity and Control Ease Make Desktop Virtualization Ready for Enterprise Uptake

Transcript of a BriefingsDirect podcast on the future of desktop virtualization and how enterprises can benefit from moving to this model.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we provide a sponsored podcast discussion on the growing interest and value in PC desktop virtualization strategies and approaches. Recently, a lot has happened technically that has matured the performance and economic benefits of desktop virtualization and the use of thin-client devices.

In desktop virtualization, the workhorse is the server, and the client assists. This allows for easier management, support, upgrades, provisioning, and control of data and applications. Users can also take their unique desktop experience to any supported device, connect, and pick up where they left off. And, there are now new offline benefits too.

As this functional maturity has improved, we are also approaching an inflection point in a market that is ready for new clients and new client approaches like desktop virtualization.

Indeed, the latest desktop virtualization model empowers enterprises with lower total costs, greater management of software, tighter security, and the ability to exploit low-cost, low-energy thin client devices. It's an offer that more enterprises are going to find hard to refuse.

Here now to help us learn more about the role and outlook for desktop virtualization, we're joined by Jeff Groudan, vice president of Thin Computing Solutions at HP. Welcome to the show, Jeff.

Jeff Groudan: Thanks for having me, Dana.

Gardner: As I mentioned, there's a lot happening in the trends in the market that are supporting more interest in virtualization generally. We see server, storage, network, and now this desktop thing really catching on. I think it's because of the economics.

Market drivers

Groudan: There certainly are some things in the market that are driving a potential inflection point here. Coming out of the recession, market conditions are prompting a lot of customers to take a fresh look at deployments they may have delayed or specific IT projects they had put on hold.

There has also been an ongoing desire to increase security, along with a lot of new compliance requirements that customers have to address. And, as they look for ways to save on costs, they are constantly looking for more efficient ways to manage their distributed PC environments. All of these things are driving the high level of interest in desktop virtualization.

Gardner: With regards to this pent-up demand issue, we've certainly seen the Windows desktop environment, the operating system, now coming out with a very important upgrade and improvement with Windows 7. We've also seen of course some improvements on the hypervisor market for desktop virtualization. Do you have any sense of where this pent-up demand is really going to lead in terms of growth?

Groudan: In addition to the market drivers, we're seeing technology drivers that are lining up for a real uptick in the size and rate of client virtualization deployments.

You touched on the operating system trends. There was some pause in operating system upgrades with Vista, as companies waited for Windows 7. With Windows 7 and Server 2008 R2 now out from Microsoft, as well as updates from the other virtualization software providers, you're really seeing a maturing of client virtualization software in conjunction with the maturing of the next-generation Microsoft operating systems, and that's a catalyst here.

You're also seeing better performance on the hardware side and the infrastructure side. It's helping bring the cost per seat of a client virtualization deployment down into ranges that are a lot more interesting for large deployments. Last, and near and dear to my heart, you're seeing more powerful, yet cost-effective, thin clients that you can put on the desk and that really ensure those end-users get the experience that you want them to get.

Gardner: It seems like enterprises are going to be faced with some major decisions about their client strategies, and if you are going to be facing this inflection point you might as well look at the full panoply of options at your disposal.

Groudan: Absolutely. Just to put it into context, there was recently some data from Gartner. They feel like there are well over 600 million desktop PCs in offices today. Their belief is that over the next five years, upwards of 15 percent of those could be replaced by thin clients. So that's quite a number of redeployments and quite an inflection point for client virtualization.

Gardner: I suppose another motivation for IT departments and enterprises is that they're looking at security, compliance, and regulatory issues that also make them re-evaluate their management approach as to how data and applications are delivered.

Security nightmare

Groudan: Absolutely. There are a variety of areas that are relevant for customers to look at right now. On security, you're absolutely right. Every IT manager's nightmare scenario is to have their company on the front page of The Wall Street Journal, talking about a lost laptop, a hack, or some other way that personal data, patient data, or financial data somehow got out of their control into the wrong hands.

One of the key benefits of client virtualization is the ability to keep all the data behind the firewall in the data center and deploy thin clients to the edge of the network. Those thin clients, by design, don't have any local data.

Gardner: I suppose another relevant aspect of this is that it's not necessarily rip-and-replace. You are not going to take 600 million PCs and put in thin clients, but you can start working at the edge to identify certain classes of users, certain application sets, perhaps a call center environment, and start working on this on a graduated basis.

Groudan: You certainly can. Our general coaching to customers is that it's not necessary for everyone, for every user group, or every application set. But it certainly makes sense for environments where you need more manageability and more flexibility.

You need higher degrees of automation to manage a large number of distributed PCs, with the benefits of centralized control, reduced labor costs, and the ability to manage remote or hard-to-reach locations -- things like branches, where you don't have local IT staff. Those are great targets for early client virtualization deployments.

Gardner: I suppose another big issue in the marketplace now is how to increase automation. When you control the desktop experience from a server or data-center infrastructure, you've got that opportunity to automate these processes and get off that treadmill of trying to deal with each and every end point physically or at least through a labor approach.

Groudan: Exactly. When you think about the cost savings of client virtualization, some of the savings come from lower long-term acquisition costs. Because the lifecycle of these solutions is closer to four or five years, you aren't acquiring the same amount of equipment on the same cadence.

But, the big savings come from the people savings. The automation and the manageability mean you need fewer people dedicated to managing distributed PCs and the break-fix and help desk associated with that.

You can do two things with those efficiencies. You can cut some cost, which, at some point, is the right approach. But increasingly, what we see is that rather than just cut cost, people redeploy resources toward value-generating activities instead of treating PC management as a pure cost center. You can take those resources and focus them on value-add projects that contribute to the bottom line from a business-efficiency perspective, versus just a cost.

Gardner: In other words, there's an interesting point here, because the total solution has to involve the data-center operators, the architects, and then the PC edge-client folks. These may have been separate groups in some organizations, but what's HP's advice? Are you encouraging more collaboration and cooperation on strategy between the client group and those delivering the infrastructure side?

Think beyond technical

Groudan: You really need to. That's been one of the inhibitors to earlier growth on client virtualization -- figuring out the business processes to get the data center guys and the edge of the network guys working on a combined plan. One key to success is clearly to be thinking beyond simply the technical architecture to how the business processes inside a company need to change.

All of a sudden, the data-center guys need to be thinking about the end-user. The end-user guys need to be thinking about the data center. Roles and responsibilities need to be hammered out. How do you charge the capital expense versus operational expense? What gets budgeted where? My advice is: as you're thinking about the technical architecture and all of the savings end-to-end, you need to also be thinking about the internal business processes.

Gardner: What that tells me is that this is not just about buying components and slapping in thin clients. This is really something you need to look at from a total-solutions perspective. Do some planning, because the more total the approach you take, the bigger the economic payoff will be.

Groudan: That's absolutely right.

Gardner: Let's go back quickly to security. I remember when I first started hearing about desktop virtualization, somebody mentioned to me that all those agencies in Washington with the three-letter acronyms, the spooky guys, are all using desktop virtualization, because they can lock down the device and close off the USB port.

When that thing is shut off or that user logs out, there is no data and no trace. Nothing is left on the client. Everything is on the server. It's how you can really manage security. We are talking about taking that same benefit now to your enterprise users, your road warriors, and perhaps even remote branches. Right?

Groudan: That's absolutely correct. One of the beautiful things about a thin client is that when you unplug it from the network, it's basically a paperweight, and, from a security perspective, thin clients are getting pretty small too. People could take that thin client, put it in their briefcase, walk out with it, and they have nothing. They have no IT assets, no personal data, no R&D secrets, or whatever else there may be.

From a security perspective, they're very, very low power, designed to be remotely managed, and designed to be plug-and-play replaceable. From a remote IT perspective, on the very rare chance that a thin client breaks, you take one from the storage closet where you keep a couple of spares, plug it in, and you're up and running in five or 10 minutes.

Gardner: So, even if all things were equal in terms of the cost of operating and deploying these, just the savings in securing your data and applications seem like a pretty worthwhile incentive?

Groudan: It really does. Not all customers may have that kind of burning need to secure data, but it's a drop-dead simple way of ensuring that there is no data out on the edge of the network that you don't know about. It gives you confidence that you know where the data is and that there are limited ways to get at it. If you put the right security processes in place, you know they're going to work, independent of whether thousands of end-users follow all the processes, which is hard to mandate.

Gardner: What does HP mean by desktop virtualization? There has been some looseness around the topic. Some people focus on a business-to-consumer (B2C) approach -- highly scalable, perhaps a limited number of apps, delivered through a telecom provider. Other folks are now in the market with business-to-employee (B2E) solutions, that is, employee-focused solutions. Where does HP come down on this? What do you think is the most important approach, and how do you define it in the market?

Views of the market

Groudan: We look at this market in two ways: in the context of client virtualization and in the broader context of thin computing. Zeroing in on client virtualization -- we call it client virtualization at HP. It's desktop virtualization. It's the same animal.

We look at it as a specific set of technologies and architectures that disaggregate the elements of a PC, which allows customers to more easily manage and secure their environment. What we're really doing is taking advantage of a lot of the new software capabilities that matured on the server side, from a server virtualization and utilization perspective. We're now able to deploy some of those technologies, hypervisors, and protocols on the client side.

We still see it as a fairly B2E-focused paradigm. You can certainly draw up on a whiteboard other models for broader audiences, but today we see most of the attraction and interest in the B2E model. As you touched on earlier, it's generally targeted at specific user groups and specific applications, versus everybody in your environment.

Our specific objective is figuring out how to simplify virtualization, so that customers get past the technology, and really start to deliver the full benefit of virtualization, without all the complexity.

Gardner: There is a significant integration aspect of this. We talked about how you've got different groups within IT that are going to be affected, but you've got to be able to integrate component software, hypervisors, and management of data. It's a shift.

Groudan: We were an early entrant in client virtualization, so we've got quite a track record behind us. What we learned led us to focus on a few things.

The first is that you don't want to have customers having to figure out how to architect the stuff on their own. If you think about PCs 20-25 years ago, customers didn't know how to architect a distributed PC environment. In 25 years, everybody has gotten good at it. We're still at the early stages on client virtualization.

So our focus is to deliver more complete, integrated solutions, end to end from the desktop to the data center, and to lay out reference designs so customers can very comfortably understand how to build out a deployment. They may certainly want to customize it, but we want to get them 80-90 percent of the way there just by sharing what we've learned.

The second thing we try to do is give them best-in-class platforms. From a thin-client perspective, this is important, because you need to make sure that the end-users actually get the experience they're used to. One of the best ways to stall a deployment is having the end-users say, "Hey, I had a better experience on my old desktop." Having thin clients that are designed from the ground up to deliver a desktop-class experience is really critical.

Last, we need to make sure we've got the right ease-of-use and manageability tools in place, so that IT complexity can be removed. IT knows they can manage the virtual environments, the physical environments, and the remote thin clients. We don't want to make these things too complex for the IT guys to actually deploy and manage.

Some trepidation

Gardner: Now, there has been some trepidation in the market. People say, "Is this ready for prime-time?" Let's focus a little bit on what's been holding people up. I don't think it's necessarily the software.

When I talk to Microsoft people, they seem to be jazzed about desktop virtualization. Of course, you're still getting a license to use that desktop, and perhaps it's even aligned with a lot of the other server-side products and services that Microsoft provides.

So, there is alignment by the software community. What's been holding up people, when they think of this desktop virtualization?

Groudan: There have been a handful of things. In the early days, there were still some gaps in the experience that end-users would get -- multimedia, remoting, USB peripherals, and those kinds of things. HP and the broader industry ecosystem have done a lot in the past year or two to close those gaps with specific pieces of software, high-performing thin clients, and so on. We're at a point now where you can feel pretty good that end-users are going to get an experience very comparable to a desktop.

Second, the solutions were complicated, or we let them be complicated, because we put a lot of components in front of our customers rather than complete solutions. By delivering more reference design models and tools, you take away some of the complexity around design, setup, and configuration that customers faced in the early days.

Third, management software. Earlier, you didn't have a single tool that would let you manage both the physical and the virtual elements of a desktop virtualization environment. HP and others have closed those gaps, and we now have very powerful management tools that make this easy on an IT staff.

Last, it was initially hard to quantify where some of the cost savings would come from. Now, there are total cost of ownership (TCO) analysis tools for understanding where the savings can come from and how you can take advantage of them. It's a lot better understood, and customers are more comfortable that they understand the return on investment (ROI).
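The kind of per-seat arithmetic those TCO tools automate can be sketched in a few lines. The figures and the infrastructure-share term below are purely illustrative assumptions for this sketch, not HP data or output from any actual tool:

```python
# Illustrative per-seat TCO comparison: traditional PC vs. thin client.
# All dollar figures are hypothetical assumptions for illustration only.

def tco_per_seat(hardware_cost, lifecycle_years, annual_support_cost):
    """Annualized total cost of ownership for a single seat:
    hardware cost spread over its lifecycle, plus yearly support."""
    return hardware_cost / lifecycle_years + annual_support_cost

# Assumed inputs: a PC refreshed every 3 years with heavy desk-side
# support, versus a thin client kept 5 years with mostly remote support
# plus an assumed per-seat share of back-end server infrastructure.
pc_seat = tco_per_seat(hardware_cost=800, lifecycle_years=3,
                       annual_support_cost=500)
thin_seat = tco_per_seat(hardware_cost=300, lifecycle_years=5,
                         annual_support_cost=150) + 200  # infra share

annual_savings = pc_seat - thin_seat
print(f"PC: ${pc_seat:.0f}/seat/yr, thin client: ${thin_seat:.0f}/seat/yr, "
      f"savings: ${annual_savings:.0f}/seat/yr")
```

With these assumed numbers, the longer thin-client lifecycle and lower support burden outweigh the added back-end infrastructure cost, which is the shape of the argument Groudan makes above.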

Gardner: Are there certain types of enterprises that should be looking at this? In my mind, if you've already dived into virtualization, you're getting comfortable with that and you're getting some expertise on it. If you're also thinking about IT shared services in a service bureau approach to IT, your culture and organization might be well aligned to this. Are there any other factors that you can think of, Jeff, that might put up a flag that says, "We're a good candidate for this?"

Groudan: There are opportunities for just about every industry. We've seen certain verticals on the cutting edge of this. Financial services, healthcare, education, and public sector are a few examples of industries that have really embraced this quickly. They have two or three themes in common. One is an acute security need. If you think about healthcare, financial services, and government, they all have very acute needs to secure their environments. That led them to client virtualization relatively quickly.

Parallel needs

Financial services and education both tend to have large groups of knowledge workers in concentrated locations. That lends itself very well to client virtualization deployments. Education and healthcare both have large, remote, campus-type environments where they need a lot of PCs or desktop virtualization seats. That's another sort of environment and use case that lends itself very well to these kinds of architectures.

Gardner: As I said earlier, it seems like an offer that's hard to refuse. It's just getting everything lined up. There are so many rationales that support this. But, in this economy, it's the dollars and cents that are the top concern, and will be for a while.

Do you have any examples of companies that have taken a plunge, done some desktop virtualization, perhaps with a certain class of user, perhaps in a call center environment or remote branch? What's been the experience and what are the paybacks at least economically?

Groudan: I'll give you two examples. The first is out of the education environment. A school was trying to figure out how to increase reliability while improving student access and increasing the efficiency of its IT staff, because schools are always challenged to find sufficient IT resources.

They deployed a desktop virtualization solution with HP infrastructure and thin clients. They felt they would lower total cost, increase uptime for the students in the classroom, and increase teacher productivity, because teachers were able to teach instead of trying to maintain classroom PCs that weren't necessarily working. They also freed up their IT staff to work on other value-added projects.

And, most important for a school, they increased the access and productivity of the students. To make that real for you, students may only have one or two hours in front of a computer a day at school, and they may be doing many different things. So they don't get much time on any one application or project in school.

The solution that this Hudson Falls School deployed let the students access those applications from home. They could spend two or three hours a night at home on those applications, getting very comfortable and productive with them and finishing their projects. It was a real productivity add for the students.

The second example is with Domino's Pizza. Many of us are familiar with them. They were struggling with the challenges of having a lot of remote sites and a lot of terminals that are shared. Supporting those remote sites, trying to maintain reliability, and keeping customer data secure were their burning needs, and they were looking for an alternative solution.

They deployed client virtualization with HP thin clients and found they could lower their costs by $400 per seat annually, and they've gotten much longer life out of the terminals. They increased the uptime of the terminals and, by extension, limited the support required on site.

Then, by using this distributed model, where the data is back in a data center somewhere, they really secured customer data, credit card information, and those kinds of things. They're able to rest easy that that kind of information isn't going to somehow get out into the public domain.

Gardner: A couple of things jump out at me from this. One is that having all that data back on the server is really going to benefit your business intelligence (BI), analytics, auditing, and reporting activities, compared with having it out on all those clients, where you can't easily get to it or manage it.

Value of data mining

Groudan: For any company that has a lot of customer data, the ability to mine that data for trends, information, opportunities, or promotions is incredibly valuable.

Gardner: The other thing that jumped out at me is that this brings up the notion that if this works for PCs and thin clients, what about kiosks? What about public-facing visual interfaces of some kind? Can you give us a hint of what the future holds, if we take this model a step further?

Groudan: Sure, it brings up one of the themes I want to talk about. HP's unique vision is that client virtualization is just one of many ways of using thin computing to enable a lot of different models beyond just replacing the traditional desktop. As you mentioned, anywhere that's hard to get to, hard to maintain, or hard to support is a perfect opportunity to deploy thin computing solutions.

Kiosks and digital signage are generally in remote locations. They can be up on a wall somewhere. The best answer for them is to be connected remotely, so you can manage them from a centralized location.

We certainly see kiosks and signage as a great opportunity for thin computing. We also see opportunities to bring thin computing into the home and into small and medium businesses through the use of some of the cloud trends and cloud applications and services. To me, thin computing ultimately is going to be much broader than the B2E client virtualization models that we're probably most familiar with.

Gardner: Obviously, HP has a lot invested here, a good stake in the future for you. Anything we should expect in the near future in terms of some additional innovation on this particularly on the B2B?

Groudan: Yeah, well, I can't talk about it too much, but we certainly have some very exciting launches coming up in the next couple of months, where we're really focused on total cost per seat -- how we let people deploy these kinds of solutions and continue to get further economic benefits, delivering better, tighter integration from the desktop to the data center.

Deployment of these solutions will keep getting easier, and the ease-of-use and manageability tools will let the IT guys roll out large client virtualization deployments with as little touch and as little complexity as we can possibly make it. We're trying to automate these kinds of solutions, and we're very excited about some of the things we'll be delivering to our customers in the next couple of months.

Gardner: Okay, very good. We've been talking about the growing interest and value in PC desktop virtualization strategies and approaches. I've learned quite a bit. I want to thank our guest today, Jeff Groudan, vice president of Thin Computing Solutions at HP. Thanks for joining, Jeff.

Groudan: My pleasure, Dana. Thanks for having us.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on the future of desktop virtualization and how enterprises can benefit from moving to this model. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Monday, February 01, 2010

Technology, Process and People Must Combine Smoothly to Achieve Strategic Virtualization Benefits

Transcript of a BriefingsDirect podcast on how to take proper planning, training and management steps to avoid virtualization sprawl and achieve strategic-level benefits.

For more information on virtualization and how it provides a foundation for private clouds, plan to attend the HP Cloud Virtual Conference in March. Register now for this event:
Asia, Pacific, Japan - March 2
Europe Middle East and Africa - March 3
Americas - March 4

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, we present a sponsored podcast discussion on planning and implementing data-center virtualization at the strategic-level in enterprises.

Because companies generally begin their use of server virtualization at a tactical level, there is often a complex hurdle in expanding the use of virtualization. Analysts predict that virtualization will support upwards of half of server workloads in just a few years. Yet, we are already seeing gaps between enterprises' expectations and their ability to aggressively adopt virtualization without stumbling in some way.

These gaps can involve issues around people, process, and technology, and often all three in some combination. Process refinement, proper methodological involvement, and swift problem management offer proven risk reduction and surefire ways of avoiding pitfalls as virtualization use scales up.

The goal becomes one of a lifecycle orchestration and governed management approach to virtualization efforts so that the business outcomes, as well as the desired IT efficiencies, are accomplished.

Areas that typically need to be part of any strategic virtualization drive include sufficient education, skills acquisition, and training. Outsourcing, managed mixed sourcing, and consulting around implementation and operational management are also essential. Then, there are the usual needs around hardware, platforms, and systems, as well as software, testing, and integration.

So, we’re here with a panel of Hewlett Packard (HP) executives to examine in-depth the challenges of large scale successful virtualization adoption. We’ll look at how a supplier like HP can help fill the gaps that can hinder virtualization payoffs.

Please join me in welcoming our panel: Tom Clement, worldwide portfolio manager in HP Education Services. Welcome to BriefingsDirect, Tom.

Tom Clement: Thank you, Dana. Great to be here.

Gardner: We're also here with Bob Meyer, virtualization solutions lead with HP Enterprise Business. Hey, Bob.

Bob Meyer: Hey, Dana.

Gardner: And we’re here with Dionne Morgan, worldwide marketing manager at HP Technology Services. Hello, Dionne.

Dionne Morgan: Hello, Dana.

Gardner: Ortega Pittman, worldwide product marketing, HP Enterprise Services, joins us. Hello, Ortega.

Ortega Pittman: Hi, Dana.

Gardner: And lastly, Ryan Reed, worldwide marketing manager at HP Enterprise Business. Hello, Ryan.

Ryan Reed: Hi, Dana, thanks for having me.

Gardner: I want to start by looking at this notion of a doubling of the workload supported by virtualization in just a few years. Why don’t we start with Bob Meyer? Bob, tell me why companies are aggressively approaching the move from islands of servers to now oceans of servers.

Headlong into virtualization

Meyer: Yeah, it's interesting. People might have thought that, with an economic downturn coming, virtualization would slow down like the rest of IT spending, but the downturn really forced anybody who was on the fence to go headlong into virtualization. Today, we are technically ahead of where we were a year or two ago with virtualization experience.

Everybody has experience with it. Everybody has significant amounts of virtualization in the production environment. They’ve been able to get a handle on what it can do to see what the real results and tangible benefits are. They can see, especially on the capital expenditure side, what it could do for the budgets and what benefits it can deliver.

Now, looking forward, people realize the benefits, and they are not looking at it just as an endpoint. They're looking down the road and saying, "Okay, this technology is foundational for cloud computing and some other things." Rather than slowing down, we'll see those workloads increase.

Virtualized workloads went from just single-digit percentages a year and a half ago to 12-15 percent now. Within two years, people are saying it should be about 50 percent. The technology has matured, people have a lot of experience with it, and they like the results they see. Rather than slowing down, it's bringing efficiency to things like the new service model.

Gardner: Ortega Pittman, do you see any other issues around these predictions? The expansion of virtualization seems to be outstripping the skill sets that are available to support it.

Pittman: That's where HP Enterprise Services adds value, in meeting customers' needs around skills. Many times, small, medium, and large organizations have the needs, but might not have the skills on hand. In providing our outsourcing services, we have the experienced professionals who can step right in and immediately begin the work and the strategic path toward their business outcomes.

The skill demand and the ability to get started immediately are things we take a lot of pride in, and a global track record of doing that very well is something that HP Enterprise Services can bring from an outsourcing perspective.

Gardner: Dionne Morgan, what are some of the risks, if folks start embarking on this without necessarily thinking it through at a life-cycle level? Are there some examples that you have experienced, where the hope for benefits -- economic and otherwise: agility benefits, flexibility, and elasticity -- somehow end up being imperiled by not being prepared?

Morgan: Many people have probably heard the term "virtual machine sprawl" or "VM sprawl," and that's one of the risks. Part of the reason VM sprawl occurs is because there are no clear defined processes in place to keep the virtualized environment under control.

Virtualization makes it so easy to deploy a new virtual machine or a new server that, if you don't have the proper processes in place, you could have more and more of these virtual machines being deployed, and you lose control. You lose track of them.

That's why it's very important for our clients to think about not only how they're going to design and build this virtualization solution, but how they're going to continue to manage it on an on-going basis, so they keep it under control and they prevent that VM sprawl from occurring.

Gardner: We’ve talked about this people, process, and technology mixture that needs to come together well. Tom Clement, from that perspective of education, are there things about virtualization that are dramatically or significantly different than what we might consider traditional IT operations or implementation?

Clement: Certainly, there are. When you talk about people, process, and technology, you hit upon the key elements of virtualization project success. There is no doubt in my mind that HP provides the best-in-class virtualization technology to our clients hands down. But, our 30-plus years of experience in providing customer training has shown, time and time again, that technology investments by themselves don’t ensure success.

The business results that clients want in virtualization won’t be achieved until those three elements you just mentioned -- technology, process and people -- are all addressed and aligned.



That's really where training comes in. Our education team can help address both the people and process parts of the equation. Increasing the technical skills of our customers' people is often one of the most effective ways for them to grow, increase their productivity and boost the success rates of their virtualization initiatives.

In fact, an interesting study just last year from IDC found that 60 percent of the factors leading to general success in the IT function are attributed to the skills of the people involved. In that regard, in addition to a suite of technical training, we also offer training in service management, project management, and business analysis, all with an eye to helping customers tie their virtualization projects to better processes and better process management -- just as Dionne was describing a moment ago.

Of course, we have stable and experienced instructors, whose practical, hands-on expertise provides clients with valuable tips and tricks that they can immediately use when back on the job. So, Dana, you hit it right on the head. It's when all three of those components -- people, process, and technology -- are addressed, especially in virtualization situations, that customers will maximize the business results that they get back from their virtualization solutions.

Gardner: We’ve also seen in the field that, as people embark on virtualization and move from the tactical to the strategic, it forces a rethinking of what it is core and what might be tangential or commoditized.

Ryan Reed, are we seeing folks who, as they explore virtualization, start also to explore their sourcing options? What are some of the trends that you're seeing around that?

Seeing a shift

Reed: Thank you for asking that question. We do see a shift in the way that IT organizations have considered what they think would be strategic to their end business function. A lot of that is driven through the analysis that goes into planning for a virtual server environment.

When doing something like a virtual server environment, the IT organizations have to take a step back and analyze whether or not this is something that they’ve got the core competency to support. Often times, they come to the conclusion that they don’t have the right set of skills, resources, or locations to support those virtual servers in terms of their data-center location, as well as where those resources are sitting.

So, during the planning of virtual server environments, IT organizations will choose to outsource the planning, the implementation, and the ongoing management of that IT infrastructure to companies like HP.

They apply our best practices and our standard offerings that are available to IT organizations from HP data centers or from data centers that are owned by our clients, which would be considered an on-premise type of virtual server environment. Then, they're managed by the IT professionals that Ortega Pittman had mentioned earlier in either an on-shoring or off-shoring scenario, whichever is the best-case scenario for the IT organization that's looking for that skilled expertise.

It's definitely a good opportunity for IT organizations to take a step back and look at how they want to have that IT infrastructure managed, and often times outsourcing is a part of that conversation.

Gardner: It also sounds like that rethinking allows them to focus on the things that are most important to them, their applications, their business logic, and their business processes and look to someone else to handle the plumbing. In the analogy of a utility, somebody else provides electricity, while they build and manage the motors. Is that fair?

Reed: That's a very fair statement. By choosing a partner to team up with to manage that internal plumbing, as you referred to it, the IT organization can get back to basics and understand how to provide best-in-class, lowest-cost service to their end users -- increasing business productivity and helping them maximize the return on their IT investment. This powers the business outcomes that their end users are looking for.

Gardner: I'm intrigued by this notion that these organizations are going to be encountering virtualization sprawl and trying to expand their use of it, but they're going to be exercising different strengths and weaknesses along the way. What are some of the typical gaps? What do we usually see in the field that creates a stumbling block to the wider adoption of virtualization?

Pittman: One of the things we observe in the industry is that many customers will start with a kind of phase one of virtualization. They'll consolidate their servers and maybe stop just there. They get that front-end benefit, but that stresses the internal plumbing that you referred to in a lot of different ways, and can actually cause challenges and complexities that were not in their immediate expectation. So, it's shortsighted to think that you're going to start with virtualization and not go beyond the hypervisor.

The starting point

We'd like to work with our customers to understand that consolidation is the starting point, but there is a lot more in the broader ecosystem to consider, as they think about optimizing their environment.

One of HP’s philosophies is the whole concept of converged infrastructure. That's thinking about the infrastructure more holistically and addressing the applications, as you said, as well as your server environments and not doing one off, but looking more holistically to get the full benefit.

Moving forward, that's something that we certainly could help customers do from an outsourcing standpoint in enabling all of the parts, so there aren’t gaps that cause bigger problems than the one hiccup that started the whole notion of virtualization in the beginning.

Gardner: Does anyone else have observations from the field about what gaps these organizations are encountering as they try to expand virtualization use?

Clement: One of the good things for our clients is the fact that within HP we have a great deal of experience and knowledge regarding virtualization. Through no fault of their own, many clients don’t understand or don’t realize the breadth or depth of virtualization options and alternatives that are available for them.

The good news is that we at HP have a wide range of training services, ways that we can work with a client to help them figure out what the best implementation options are for them, and then for us to help them make sure that those options are implemented with excellence and truly do result in the business benefits that they desire.

Gardner: Now that you've mentioned some of the strengths that HP brings to the table, how do you get those to work in concert? It seems that it's a hurdle for these organizations themselves to look at things holistically. When they go out to a supplier that has so many different strengths and offerings, how do you customize those offerings to each organization? How do they get started?

Morgan: We think about this in terms of their life cycle. We like to start with a strategy discussion, where we have consultants sit down with the client to better understand what they’re trying to accomplish from a business objective perspective. We want to make sure that the customers are thinking about this first from the business perspective. What are their goals? What are they trying to accomplish? And, how can virtualization help them accomplish those goals?

Then, we also can help them with their actual return on investment (ROI) analysis and we have ROI tools that we can use to help them develop that analysis. We have experts to help them with the business justification. We try to take it from a business approach first and then design the right virtualization solution to help them accomplish those goals.

Gardner: It sounds like there's a management element here. As we pointed out a little earlier, IT departments themselves have been divvied up by the type of infrastructure that they were responsible for. That certainly makes a lot of sense, and it follows the development of these different technologies at different times in the past.

Now, we're asking them, as we virtualize, to take an entirely different look, which is more horizontal across this converged infrastructure. Is there a management gap that needs to be filled or at least acknowledged and adjusted to in terms of how IT departments run?

Blurring the connections

Meyer: What it calls into focus is that one thing virtualization does very nicely is blur the connections between the various pieces of infrastructure, and the technology has developed quite a bit to allow that to ebb and flow with the business needs.

And, you're right. The other side of that is getting the people to actually work and plan together. We always talk about virtualization not as an endpoint, but as an enabling technology to get you there.

If you put what we're talking about in context, the next thing that people want to do is maybe build a private-cloud service delivery model. Those types of things will depend on that cooperation. It's not just virtualization that's driving this; it's really the newest service delivery models. Where people are heading with their services absolutely requires management and a look at new processes as well.

Gardner: In many cases, that requires a third party of some sort to be involved, at least, to get that management shift or acknowledgment under way.

Which of you can offer an example of how an organization moved to a higher level of virtualization and got those payoffs that people are so enticed by -- a much lower number of servers, lower footprint, lower carbon and energy use, lower total cost, and so on? Can you provide an example of an organization that's done that and has also bitten the bullet on some of the management issues that allow that economic benefit?

Morgan: I can give one example. There's an organization called Intrum Justitia, a financial services organization in Europe. We worked with them as they were embarking on their virtualization journey. The challenge they had was that they had multiple organizations and multiple data centers across Europe, and they wanted to consolidate from 40 different locations around Europe into two data centers.

At the same time, they wanted to improve the service level they were providing back to their business. They decided to virtualize, because that would help, of course, with the ability to consolidate and to improve on those service levels.

The way we helped them was by first having that strategy discussion. Then, we helped them design the solution, which included the HP Blade System, VMware software, EVA Storage, as well as other hardware and software products. We went through the full lifecycle with them helping with the strategy and the design.

We helped them build the solution. We managed their project office. We managed the migration from the 40 locations. Then, once everything was transitioned, we were able to help them stay on the right path to further managing the environment. Some of the results were that they were able to complete that consolidation into the two data centers, and they're beginning to see some of the benefits now.

Gardner: Let me put you on the line. What do you think HP brought to the table in this example that Intrum wouldn't be able to find anywhere else?

For more information on HP's Virtual Services, please go to: www.hp.com/go/virtualization and www.hp.com/go/services.

Wide expertise

Morgan: There are a couple of things. One is that we actually have the expertise, not only in the HP products, but also in the software products. We have the expertise, of course, for the Blade Systems and the EVA Storage, but also the expertise around VMware.

So, they had hardware and software expertise from one vendor -- from HP. We also have the expertise across the lifecycle, so they could just come to one place for strategy, design, development, and the ultimate migration and implementation. It's expertise, as well as a comprehensive focus across the full lifecycle.

Gardner: Are there any other examples of a larger scale, top tier organization that has moved aggressively into virtualization and had a success?

Pittman: Yes, Dana, HP Enterprise Services worked with the Navy/Marine Corps Intranet (NMCI), which is the world’s largest private network, serving and supporting sailors, marines, and civilians in more than 620 locations worldwide.

They were experiencing business challenges in productivity and innovation and in the security areas. Our approach was to consolidate 2,700 physical servers down to 300, reducing outage minutes by almost half. This decreased NMCI’s IT footprint by almost 40 percent and cut carbon emissions by almost 7,000 tons.

Virtualizing the servers in this environment enabled them to eliminate carbon emissions equivalent to taking 3,600 cars off the road for one year. So, there were tremendous improvements in that area. We minimized their downtime, controlled costs, and delivered faster transfer times, greater transparency, and optimal performance.

All of this was done through the outsourcing virtualization support of HP Enterprise Services and we're really proud that that had a huge impact. They were recognized for an award, as a result of this virtualization improvement, which was pretty outstanding. We talked a little earlier about the broader benefits that customers can expect, the services that help make all of this happen.

Within HP's full IT services portfolio, that includes server management services, data center modernization, network application services, storage services, web hosting services, and network management services. All combined, they made this happen successfully. We're really proud of that, and it's an example of a very large-scale impact that's reaping a lot of benefit.

Gardner: We've talked about how this can scale up. I suppose it's also interesting that, in the future, as more companies look to virtualization and think about services and infrastructure as a service (IaaS), this could start moving down market as well. Does anyone have some thoughts about how a company like HP, perhaps through its outsourcing capabilities, could bring the same value to an organization smaller than the Navy and Marine Corps?

Mission-critical systems

Reed: What's interesting about the NMCI is that, as Ortega mentioned, this is a very large, complex, and mission-critical system. Thousands of servers were virtualized, having a major impact on how the service is being delivered. The missions performed on such an infrastructure are still mission critical. You can't really have higher stakes, because lives actually depend on the successful missions that are performed on this infrastructure.

Now, if you take that and scale it down to smaller implementations of virtual server environments, the lessons learned, the best practices, the technology, the people, the processes, and the skills are all absolutely relevant to small- and medium-sized businesses.

That's because the standardized procedures for managing this type of infrastructure are documented for our service delivery organizations around the world to take advantage of. They're repeatable, standardized, and consistently delivered.

Gardner: As we get into the future, and the use of virtualization becomes integral to more companies -- not as an island, but more of the ocean that they are sailing on -- this kind of changes the way the companies function. They'll become more IT services and service management oriented. Perhaps, they'll have more services orientation in terms of their architecture.

Does anyone have any thoughts about where this is going to lead next, if you bite the bullet, become holistically adept at virtualization partnering with companies like HP to use the skills and understanding they have and learn the lessons of the past? What are the next stages or steps? Bob, any thoughts?

Meyer: We mentioned this in the beginning. Virtualization becomes a foundational element for the next set of service delivery models that people are looking at. So, from an IT provider's perspective, if you get virtualization right, if you get the converged infrastructure that Ortega was talking about, you get the right mix and close the skill gaps. You get a strong foundation to move on to things like private cloud, and it really opens up your options for different service delivery models.

With this notion of pushing out virtualization more broadly, the next step leads you to a good place to build on top of those delivery models and ultimately lower the cost and increase the quality of the services you deliver to the business.

Pittman: You asked how it all fits in moving to the future. Recently, in a Gartner report, there were some key findings. One of the items reported was that mid-sized businesses are seeking a much more intimate relationship with IT providers. There is a perception out there that they can have a closer relationship with smaller vendors, as opposed to the large ones.

[Editor's Note: “The penetration of virtual machines in the market at year-end 2008 was 12%; by year-end 2012, it will be nearly 50%.” Source: Gartner, October 7, 2009. Research title: Virtual Machines and Market Share Through 2012. Research ID #G00170437.]

One thing I'd like to put out there for the IT community that may be thinking about virtualization is that HP offers solutions for small, medium, and large organizations. The way we are set up in terms of account support with our account leaders, we certainly can meet the needs of small, medium, and large organizations alike. We are set up to engage, support, and be that trusted advisor at all three of those levels.

Just to dispel any misconception that "They’re large, and I'm not sure if I'm going to get the attention," we're ready and have the products and services to deliver outcomes that they are looking for at all levels.

Gardner: Sort of "have it your way" opportunity.

Pittman: Exactly.

Expertise and flexibility

Clement: Just to follow on to that point, which I think is a great one. As we've been hearing here, it boils down to expertise and flexibility. Does HP have the expertise strategically to help clients of any size? Do we have the expertise from a service delivery perspective, from an instructor perspective, from a course development perspective? And the answer is, we do.

Do we provide these services, these products, these training classes in a variety of flexible ways, and are we willing to tailor them to our clients? The answer, again, is a resounding yes, we are.

Gardner: I wonder if we could offer some concrete ways to get started. Are there some places people can go, some Google searches they should do, as they are thinking about virtualization and their expansion and their way of managing the risk?

Morgan: There is definitely HP.com. We have many pages on HP.com that talk about virtualization and our virtualization offerings. So, that is one area. They could also contact their local HP representative. If they work with HP authorized channel partners, they can have discussions with those channel partners as well.

Meyer: There's a very simple way to find out more about virtualization solutions. You can just type in www.hp.com/go/virtualization, and it will take you to the virtualization home page. If you specifically want to find out more about services, it's www.hp.com/go/services. That shortcut will take you right to the relevant information.

Gardner: Well, very good. We've been here with a panel of HP executives examining the in-depth challenges of moving to large scale successful virtualization adoption. We looked at some of the ways that HP has worked with some customers to help them make that leap successfully. I want to thank our panel today. We've been talking with Tom Clement, worldwide portfolio manager in HP Education Services. Thank you, Tom.

Clement: You're most welcome, Dana. Again, thanks for having me.

Gardner: Bob Meyer, virtualization solutions lead, HP Enterprise Business. Thank you Bob.

Meyer: Thank you.

Gardner: Dionne Morgan, worldwide marketing manager, HP Technology Services. Thank you, Dionne.

Morgan: You're welcome.

Gardner: Ortega Pittman, worldwide product marketing, HP Enterprise Services. Thank you.

Pittman: Thank you for having me.

Gardner: And, Ryan Reed, worldwide marketing manager, HP Enterprise Services.

This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on how to take proper planning, training and management steps to avoid virtualization sprawl and achieve strategic-level benefits. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

For more information on virtualization and how it provides a foundation for private clouds, plan to attend the HP Cloud Virtual Conference in March. Register now for this event:
Asia, Pacific, Japan - March 2
Europe Middle East and Africa - March 3
Americas - March 4

You may also be interested in:

Saturday, January 30, 2010

Time to Give Server Virtualization's Twin, Storage Virtualization, a Top Place at IT Efficiency Table

Transcript of a BriefingsDirect podcast on the improved business metrics from adopting a virtualized storage architecture.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on storage virtualization. You've heard a lot about server virtualization over the past few years, and many enterprises have adopted it to improve their ability to manage runtime workloads at high utilization rates and to cut total costs.

But, as a sibling to server virtualization, storage virtualization has some strong benefits of its own, not the least of which is the ability to better support server virtualization and make it more successful.

We're here to discuss how storage virtualization works, where it fits in, and why it makes a lot of sense. The cost savings metrics alone caught me by surprise, making me question why we haven't been talking about storage and server virtualization efforts in the same breath over these past several years.

To help us explain how to better take advantage of storage virtualization, we're joined by Mike Koponen, HP's StorageWorks Worldwide Solutions marketing manager. Hello, Mike.

Mike Koponen: Hello, Dana. How are you doing today?

Gardner: Doing very well. Thanks for joining us.

Koponen: You bet.

Gardner: As I said, a lot of folks have been taking up more server virtualization and understanding its benefits. It's become quite popular, particularly in the down economy, where cost is so important. Storage virtualization offers a number of the same types of benefits. Tell us why storage virtualization makes so much sense.

Economic environment

Koponen: Dana, you mentioned that, particularly in today's economic environment, customers need to boost efficiencies from their existing assets, as well as from the future assets they're going to acquire, and to look for ways to cut capital and operating expenditures. That's really where storage virtualization fits in.

It's a way to increase asset utilization, save on administrative costs, and improve operational efficiencies as businesses deal with their increasing storage requirements. In fact, if businesses don't reevaluate their storage infrastructures at the same time as they're reevaluating their server infrastructures, they won't realize the full potential of server virtualization.

Gardner: A few years ago, people were putting in servers as fast as they could. Basically, their motivation was simply to keep up with demand. I have to believe that's the case with storage as well, and storage requirements are still growing rapidly. How do you keep up with the high demand for more storage and try to cut costs at the same time?

Koponen: It's an excellent question, and one that businesses deal with all the time. As you say, storage requirements aren't letting up, driven by regulatory requirements, expansion, 24x7 business environments, and the explosion of multimedia. Storage growth is certainly not stopping because of a slowed-down economy.

Storage virtualization and server virtualization are tools that businesses are using to deal with those pressures. In the past, as you said, customers would just continue to deploy servers with direct-attached storage (DAS). All of a sudden, they ended up with silos or islands of storage that were more complex to manage and didn't have the agility needed to shift storage resources from application to application.

Then, people moved into deploying network storage or shared storage, storage area networks (SANs) or network-attached storage (NAS) systems and realized a gain in efficiency from that. But, the same can happen. You can end up with islands of SAN systems or NAS systems. Then, to bump things up to the next level of asset utilization, network storage virtualization comes into play.

You can pool all those heterogeneous systems under one common management environment to make it easy to manage and provision these islands of storage that you wound up with.

Gardner: You mentioned this notion of silos of storage and I think I heard at least two or three different levels of silos of storage. Can you break that out for us? What are we really talking about, when we think about the various components at play here?

Three levels

Koponen: I break it down into three levels. One, I'd call basic virtualization. That's where you just have internal storage in your servers or direct attached storage to those servers. The next level would be what I'd call virtualized network storage. We've got SAN systems that have the ability to virtualize the arrays and the disk spindles within that SAN system.

The third level is what I call network-based storage virtualization. There, you have the ability for heterogeneous storage systems to all be managed under a common structure and virtualized as a single common pool of storage. Those would be the three levels that I break them down into.

Gardner: So, the goal with storage virtualization is not just to virtualize on each of those levels, but to virtualize them all together, so there is a single pool of storage. Is that correct, or that I am oversimplifying?

Koponen: No, that's basically it. In the second level I described, where you've got a SAN system, those can also come in two types. You can have a traditional one that's non-virtualized, and you can have a virtualized one, such as the HP Enterprise Virtual Array or the HP LeftHand SAN, where you have the ability to stripe data across disk spindles and multiple drive trays, with all of that abstracted from the system administrator.

The storage is virtualized, and then the level above that is where you have network-based storage virtualization, such as our SAN virtualization services platform, that can take heterogeneous storage systems, multiple SAN systems for multiple vendors, and present those as one common pool of storage. It's this concept of pooling storage, but at different levels.

Gardner: Of course, it's a big management task to be able to do that and then tier the storage the way you want -- prioritizing different storage requirements and response levels based on the application set, or on whether you're doing it for backup or archives. Is that right?

Koponen: That's true. There are different needs or requirements that drive the use of storage virtualization and also different benefits. You mentioned some of them. It may be flexible allocation of tiered storage, so you can move data to different tiers of storage based upon its importance and upon how fast you want to access it. You can take less business-critical information that you need to access less frequently and put it on lower cost storage.

The other might be that you just need more efficient snapshotting and replication to provide the right degree of data protection to your business. It's a function of understanding what the top business needs are and then finding the right type of storage virtualization that matches those.

Gardner: It also sounds like we're taking a complete look at storage. We're looking at it from all angles and, therefore, are able to architect in such a way that we can take advantage of all the capacity we have and do that intelligently. Is that a fair assumption?

Key driver

Koponen: That's true. One key driver is boosting asset utilization. We found that a lot of businesses may have as little as 20 percent utilization of their storage capacity. By going to storage virtualization, they can see a 300 percent increase in existing storage asset utilization, depending upon how it's implemented.
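To make that arithmetic concrete (the 100 TB figure is invented; only the 20 percent baseline and the 300 percent increase come from Koponen's statement): a 300 percent increase over 20 percent utilization takes you to 80 percent, quadrupling the data your existing capacity usefully holds.

```python
# Sketch of the utilization math: a 300% increase over a 20% baseline.
purchased_tb = 100        # hypothetical total purchased capacity
baseline_util = 0.20      # typical pre-virtualization utilization

improved_util = baseline_util * (1 + 3.00)   # +300% => 0.80
usable_before = purchased_tb * baseline_util
usable_after = purchased_tb * improved_util

print(f"usable before: {usable_before:.0f} TB")
print(f"usable after:  {usable_after:.0f} TB")
# usable before: 20 TB
# usable after:  80 TB
```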

Gardner: Mike, tell me how this relates to server virtualization. If I've got a server virtualization program underway and I've enjoyed some benefits from that, what is taking this added step to storage virtualization going to do for me?

Koponen: Well, a couple of things, Dana. First, in order to take advantage of the advanced capabilities of server virtualization, such as live migration of virtual machines and high-availability infrastructures, advanced server virtualization requires some form of shared storage.

So, in some sense, it's a base requirement that you need shared storage. But, what we've experienced is that, when you do server virtualization, it places some unique requirements on your storage infrastructure in terms of high availability and performance loads.

Server virtualization drives the creation of more data from the standpoint of more snapshots, more replicas, and things like that. So, you can quickly consume a lot of storage, if you don't have an efficient storage management scheme in place.

And, there's manageability too. Virtual server environments are extremely flexible. It's much easier to deploy new applications. You need a storage infrastructure that is equally as easy to manage, so that you can provision new storage just as quickly as you can provision new servers.

Gardner: So, is this a case of the whole being greater than the sum of the parts? If we do server virtualization well and then we do storage virtualization well, not only do we get the usual benefits in terms of capacity, cost, flexibility, and intelligence from each of those perspectives, but, by combining them, we get something additional.

Koponen: Yes, you certainly do. The way I would describe that X factor of what you're getting in addition is just the highest level of business agility and flexibility. The underpinning of that would be that you're making maximum use of your assets, both your server assets and your storage assets.

Gardner: Is there something else here in terms of security, compliance, complexity, or those other necessary things to deal with nowadays? Do we get anything else in combining these two?

Increased protection

Koponen: You certainly get an increased degree of data protection by being able to meet backup windows and not having to compromise the amount of information you back up, because you're trying to squeeze more backups through a limited number of physical servers. When you do server virtualization, you're reducing the number of physical servers and running more virtual ones on top of that reduced number.

Otherwise, you might be trying to move the same number of backups through fewer physical servers. With a virtualized server and storage environment, you end up with a higher degree of data protection, because you can still achieve the volume of backups you need within a shorter window.

Gardner: So, it's better control, better understanding, higher utilization, and lower cost. If someone is interested after hearing this, where do you start, how do you undertake a journey? I assume you don't do this all at once, but rather it's something you need to do on a rollout basis. Where do you start when it comes to storage virtualization?

Koponen: Step one is assessing your environment and understanding what your starting point is going to be. Is it a greenfield environment, where you've got a lot of departmental, work-group type servers that you don't have tied into shared storage or virtualized storage? It might be starting with putting in place virtualized storage to support those.

Or, do you have existing SAN systems in place that are just underutilized? Then, you might look at putting in place, say, the HP SAN Virtualization Services Platform (SVSP), to get a higher degree of asset utilization out of the existing systems.

It depends on where you're starting from. So, step one is to determine that, figure out where your most underutilized assets are, and what's causing you the most pain today from a management complexity standpoint. Or, it could be the case that you don't have an adequate business continuity plan in place. That's your key factor in where to start. So, it's assessing that starting point, Dana.

Gardner: Let's drill into that business continuity one for a second. That's pretty important. What does virtualizing your storage bring to the table, when it comes to data recovery, disaster recovery, backup, archiving, or continuity issues?

Koponen: Well, first, when you virtualize your servers, you're taking multiple applications and running them on a single physical server. You're more exposed now to that single physical server going down, because, if that single physical server goes down, you've lost multiple applications, and not just one. So, the need for high availability goes up.

Server virtualization suppliers like VMware, Microsoft, and Citrix all have capabilities to provide high availability on the application side. You need to match that with high availability on the storage infrastructure side, so that your storage has the same high-availability capabilities as your server infrastructure.

Gardner: Right, it doesn't make sense to have the applications humming along at whatever requirements are, if the storage and data can't keep up.

High application availability

Koponen: Exactly. From an HP portfolio standpoint, we have some innovative products like the HP LeftHand SAN system, which is based on a clustered storage architecture where data is striped across the arrays in the cluster. If a single array goes down, the volume is still online and available to your virtual server environment, so that high degree of application availability is maintained.
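A toy model of how that works, not LeftHand's actual on-disk layout: write each block of a volume to two different arrays in the cluster, so the failure of any single array still leaves every block readable.

```python
# Toy model of clustered striping with mirroring: each block of a
# volume gets a primary and a mirror copy on two different arrays,
# so the volume survives the loss of any single array.

def place_blocks(n_blocks, n_arrays):
    """Return {block: (primary_array, mirror_array)}, round-robin."""
    return {b: (b % n_arrays, (b + 1) % n_arrays)
            for b in range(n_blocks)}

def readable(layout, failed_array):
    """The volume stays online iff every block has a surviving copy."""
    return all(any(arr != failed_array for arr in copies)
               for copies in layout.values())

layout = place_blocks(n_blocks=8, n_arrays=4)
# Fail each array in turn: every block still has one surviving copy.
print(all(readable(layout, failed) for failed in range(4)))  # True
```

Real clustered arrays add rebuild, quorum, and consistency machinery on top of this, but the availability argument is the same: no block lives on only one array.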

Gardner: Mike, how about some examples? For folks that have done this already, what are the typical scenarios? What are some of the paybacks? What's the usual case scenario?

Koponen: Dana, there was a white paper recently done by IDC on the business value of storage virtualization. It looked at a number of factors -- reduced IT labor, reduced hardware and software cost, reduced infrastructure cost, and user productivity improvements. Virtualized storage had a range of payback anywhere from four to six months, based on the type of virtualized storage that was being deployed.

It found asset utilization increases of up to 300 percent, administrative cost savings of 2x to 3x, and backup times shrinking by up to 80 percent. The benefit and the payback were really compelling. That IDC paper is posted on the HP website.
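As an illustration of how a payback period like that is derived (the dollar figures below are invented; only the four-to-six-month range comes from the IDC paper): divide the upfront project cost by the combined monthly savings.

```python
# Hypothetical payback-period calculation: months until cumulative
# savings (labor, hardware, productivity) cover the upfront cost.
# Both dollar amounts are invented for illustration.
upfront_cost = 120_000     # project cost, USD
monthly_savings = 24_000   # combined monthly benefit, USD

payback_months = upfront_cost / monthly_savings
print(f"payback: {payback_months:.0f} months")  # payback: 5 months
```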

Gardner: What are some of the business returns? Clearly, we've got some cost benefits and technology benefits that folks in the IT department would enjoy, but what would we expect from storage virtualization for the larger business outcomes or goals?

Koponen: You have these benefits of reduced CapEx and OpEx that companies can take to the bottom line, particularly in these economic times, and you also have improved business agility. Say a company makes an acquisition and has to merge the acquired IT resources into its existing IT infrastructure. The ability to do that is going to be greater if you've got a virtualized storage infrastructure in place.

Gardner: That gives more agility for changing your organization from a merger and acquisition perspective. How about sourcing, when it comes to what we think about now as cloud computing? Is there some benefit in having virtualized storage that gives you more options for your sourcing?

Koponen: Well, it gives you more options in terms of the flexibility with which you manage your internal cloud, how you can meet your quality of service levels to your internal user community, and how you partition out storage to them, because you can make much more efficient use of those resources. Using your outsourced cloud providers, you can augment that existing server and storage capacity that you've got in place.

Gardner: Returning to the road map of how you would get started and involved with this, what else do you need to consider other than certain technologies? Is this something that's going to change the nature of your organization? Are we going to be asking different folks inside of IT, and perhaps outside of IT, to work together in ways they hadn't before? What are the cultural implications?

Combining resources

Koponen: That's a good question, Dana. In enterprise organizations, storage management may be done by one set of folks, and management of the server infrastructure by another. This will bring those two sets of resources together and provide them a more efficient platform on which to work together.

In medium-sized businesses, it's all about being able to manage storage assets without having to have expert storage administrators in place, so that server administrators can manage the storage assets as easily as they do their server assets.

Gardner: I wonder, if there's something we left out. Is there another item around this that folks should be aware of?

Koponen: I don't think so. However, for people who want to learn more about storage virtualization and what HP has to offer to improve their business returns, I suggest they go to www.hp.com/go/storagevirtualization. There they can learn about the different types of storage virtualization technologies available. There are also assets on that website to help them justify putting storage virtualization in place within their companies.

Gardner: Just to be clear. When HP approaches storage virtualization, you're working with a number of different vendors and suppliers and different technologies. This is really quite a heterogeneous landscape. Is that correct?

Koponen: That's correct. HP has very strong relationships with all of the server virtualization suppliers in the marketplace, so that we can bring complete solutions to bear on customers. We use best-of-breed technology from the entire ecosystem.

Gardner: Well, thanks. We've been hearing about server virtualization for the past few years, but today we've taken the time to look at a sibling, storage virtualization. It involves making sure server-side workloads get the storage and data they need, but there is also a lot of pure economic rationale for pursuing storage virtualization on its own.

Here to help us better understand the whys and hows of storage virtualization is Mike Koponen. He's the HP StorageWorks Worldwide Solutions Marketing Manager. Thanks for your time, Mike.

Koponen: Thank you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on the improved business metrics from adopting a virtualized storage architecture. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.