Tuesday, July 14, 2009

Rethinking Virtualization: Why Enterprises Need a Sustainable Virtualization Strategy Over Hodge-Podge Approaches

Transcript of a BriefingsDirect podcast on the key elements of successful and cost-effective virtualization across enterprise implementations.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Download a pdf of this transcript.

Attend a virtual web event from HP on July 28-30, "Technology You Need for Today's Economy." Register for the free event.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on rethinking virtualization. We’ll look at a series of three important considerations when moving to enterprise virtualization adoption.

First, we'll investigate the ability to manage and control how interconnections impact virtualization. Interconnections play a large role in allowing physical servers to support multiple virtual servers, which themselves need multiple network connections. The connections themselves can be virtualized, and we are going to learn how HP Virtual Connect is being used to solve these problems.

Second, we're going to examine the role and importance of configuration management databases (CMDBs) in deploying virtualized servers in production. When we scale virtualized instances of servers, we need to think about centralized configuration, which helps bring management to this crucial task of preventing server sprawl and the unwieldy complexity that can often inflate the cost of virtualization projects.

Last, we're going to dig into how outsourcing, in a variety of different forms, configurations, and values, can help organizations get the most bang for their virtualization buck. That is to say, how they think about virtualization not only in terms of placement, but also in terms of where the data center, or even hybrid data centers, will reside and be managed.

Here to help us dig into these essential ingredients of successful and cost-effective virtualization initiatives are three executives from Hewlett-Packard (HP).

We're going to be speaking with Michael Kendall, worldwide Virtual Connect marketing lead. We're also going to be joined by Shay Mowlem, strategic marketing lead for HP Software and Solutions. And last, we're going to discuss outsourcing with Ryan Reed, a product manager for EDS Server Management Services.

First, I want to talk a little bit about how organizations are moving to virtualization. We certainly have seen a lot of the "ready, set, go," but when organizations start looking at the complexity, when they think about scale, and when they think about the need to do virtualization for the economic payoff, rather than simply moving one shell from physical to virtual, or from on-premises to off-premises, the complexity of the issue starts to sink in.

Let me take our first question to Shay Mowlem. Shay, what is it that we're seeing in terms of how companies can make sure that they get a pay-off economically from this, and that it doesn’t become complexity-for-complexity's sake?

Shay Mowlem: The allure of virtualization is quite great. Certainly, many companies today have recognized that consolidating their infrastructure through virtualization can reduce power consumption and space utilization, and can really maximize the value of the infrastructure that they’ve already purchased.

Just about everybody has jumped on the virtualization bandwagon, and many companies have seen tremendous gains in their development and lab environments, and in managing what I would consider to be non-mission-critical production systems. But, as companies have tried to apply virtualization to their Tier 2 and Tier 1 mission-critical systems, they're discovering a whole new set of issues that, without effective management, really run counter to the cost benefits.

The fact that virtualized infrastructure has more interdependencies means there's a greater risk profile for the services being supported. The real challenge for those companies is putting in place the right management platform to be able to truly realize those gains in production environments.

Gardner: So, when we talk about rethinking virtualization, I suppose that it really means planning and anticipating how this is going to impact the organization and how they can scale this out?

Mowlem: Yeah. That’s exactly right.

Looking at connections

Gardner: First, we're going to look at the connections, some of the details of how physical servers become virtual servers, and how that works across the network. Mike Kendall is here to tell us about HP’s Virtual Connect technology.

It’s designed to help bridge the gap between the physical world and virtual world, when it comes to the actual nitty-gritty of making networks behave in conjunction with increased numbers of virtualized server instances. This is important when we start rethinking virtualization in terms of actually getting an economic payback from the investments and the expectations that enterprises are now supporting around virtualized activities.

So, let me take it to you, Mike. When we go to virtualized infrastructures from traditional physical ones, what’s different about migrating when it comes to these network connections?

Michael Kendall: There are a couple of things. When you consolidate a lot of different application instances that normally run on multiple servers, and each one of those servers has a certain number of I/O connections for data and storage, and you put them all on one server, that does consolidate the number of servers you have.

Interestingly, people have found that as you do that, it tends to expand the number of network interface controllers (NICs) you need, the number of connections you need, the number of cables you need, and the number of upstream switch ports you need to accommodate all the extra workload that’s going on on that server.

So, even though you can set up a new virtual machine or migrate virtual machines in a matter of minutes, it isn’t as easy in the connection space. Either you have to add additional capacity for networks and for storage, add additional host bus adapters (HBAs), or add additional NICs. And even when you move a virtual machine, you have to take down and re-set up those network connections. Being able to do that in a way that is harmonious is more challenging within a virtual machine environment.

Gardner: So, it’s not quite as easy as simply managing the hypervisor. We have to start thinking about managing the network. Perhaps you could tell us more about how the Virtual Connect product itself does that.

Basic rethinking

Kendall: Absolutely. Virtual Connect is a great example of how HP helps you achieve the full potential of setting up virtual machines on a server and consolidating all those workloads.

We did some basic rethinking around how to remove some of these interconnect bottlenecks. HP Virtual Connect virtualizes the physical connections between the server, the data network, and the storage network. Virtualizing these connections allows IT managers to set up, move, replace, or upgrade blade servers and the workloads on them, without having to involve the network or storage folks and without impacting the network or storage topologies.

Rather than taking hours, days, or even weeks to set up a move, whether setting up, adding to, or moving virtual machines or physical machines, we're able to take that down literally to minutes. The result is that most deployments or moves can be accomplished a whole lot faster.
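
To make the wire-once idea concrete, here is a minimal sketch of the server-profile concept Kendall is describing, written in Python. The class names, address values, and move operation are illustrative assumptions for this transcript, not HP's actual Virtual Connect API; the point is only that connection identity lives in a movable profile rather than in the physical blade.

```python
# Illustrative model of a Virtual Connect-style "server profile": the
# identity of a blade's connections (MACs, WWNs, VLANs) is held in a
# profile object rather than burned into hardware, so a workload can be
# moved between bays without touching LAN or SAN configuration.
# All names and address values here are hypothetical, not HP's API.

from dataclasses import dataclass

@dataclass
class ServerProfile:
    name: str
    nic_macs: list          # virtualized Ethernet MAC addresses
    hba_wwns: list          # virtualized Fibre Channel WWNs
    vlans: list             # data-network assignments
    bay: int = 0            # blade bay currently running this profile (0 = none)

class Enclosure:
    """A blade enclosure whose bays receive profiles, not hard-wired IDs."""

    def __init__(self, bays: int):
        self.bays = {b: None for b in range(1, bays + 1)}

    def assign(self, profile: ServerProfile, bay: int) -> None:
        if self.bays[bay] is not None:
            raise ValueError(f"bay {bay} is already occupied")
        self.bays[bay] = profile
        profile.bay = bay

    def move(self, profile: ServerProfile, new_bay: int) -> None:
        # The upstream LAN/SAN still sees the same MACs and WWNs afterward,
        # so no switch reconfiguration or storage rezoning is required.
        self.bays[profile.bay] = None
        self.assign(profile, new_bay)

enclosure = Enclosure(bays=16)
web01 = ServerProfile("web01",
                      nic_macs=["02:00:00:00:01:01", "02:00:00:00:01:02"],
                      hba_wwns=["50:00:00:00:00:00:01:01"],
                      vlans=[110, 120])
enclosure.assign(web01, bay=3)
enclosure.move(web01, new_bay=7)   # replace or upgrade the blade in minutes
print(web01.bay)                   # 7; connection identity is unchanged
```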

Another part of this is our new Flex-10 technology. That takes a 10-gigabit Ethernet connection and allocates that across four NIC connections. This eliminates the need for additional physical NICs in the forms of mezzanine cards or stand-up cards, additional cables, or additional switches, when setting up all of the extra connections required for virtual machines.

The average hypervisor is looking for anywhere from three to six NIC connections, and approximately two storage network connections.

If you add that all up, that can be up to a total of six to eight NICs, along with the associated cables and switch ports. The same thing is true with the two storage network connections as well.

With Flex-10, on an average two-port NIC, each one of those ports can present four NICs, for a total of eight, without having to add any additional stand-up cards, switches, or cables. As a result, from a cost standpoint, you can save up to 66 percent on additional network equipment costs over competing technology. So, with Virtual Connect you can wire everything once and then add, replace, or recover servers a whole lot faster.
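
The arithmetic behind those numbers is simple enough to sketch. In the Python snippet below, the connection counts follow the six-to-eight-NIC figure Kendall quotes; the port math is illustrative only, since the actual 66 percent figure depends on equipment pricing not given here.

```python
# Back-of-the-envelope connection counts per virtualization host, with and
# without Flex-10, using the figures quoted above: a hypervisor typically
# wants six to eight NIC connections in total, and each Flex-10 port can
# present four FlexNICs. Illustrative port math, not HP's pricing model.

def without_flex10(nics_needed: int = 8) -> dict:
    # One physical NIC port, one cable, one upstream switch port per NIC.
    return {"nic_ports": nics_needed,
            "cables": nics_needed,
            "switch_ports": nics_needed}

def with_flex10(nics_needed: int = 8, flexnics_per_port: int = 4) -> dict:
    physical_ports = -(-nics_needed // flexnics_per_port)  # ceiling division
    return {"nic_ports": physical_ports,
            "cables": physical_ports,
            "switch_ports": physical_ports}

before, after = without_flex10(), with_flex10()
print(before)   # {'nic_ports': 8, 'cables': 8, 'switch_ports': 8}
print(after)    # {'nic_ports': 2, 'cables': 2, 'switch_ports': 2}

# Eight physical connections collapse onto a standard two-port adapter;
# dollar savings depend on the gear displaced, hence "up to 66 percent."
```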

Gardner: And, of course, not doing this in advance would erode your ability to save when it comes to these more utilized server instances.

Kendall: That’s also correct. If you can put this technology in place ahead of time, then you can save not only the purchase cost of all this additional hardware, but the operational complexity that goes along with having a lot of extra equipment to have to set up, manage, and run.

Gardner: One of the things that folks like about virtualization is an automated approach to firing off instances of servers to support an application -- for example, a database. Does that automated elasticity of generating additional server instances follow through with the Virtual Connect technology so that it’s, in a sense, seamless?

Seamless technology

Kendall: I'm glad you added in the Virtual Connect part, because if you had said "using standard switch technology," the answer to that would be no.

With standard switch technology and standard NIC and storage area network (SAN) HBA technology, you generally have to set up all these connections individually, and then manage them individually. If you set up, add to, or migrate virtual machine instances from the virtual machine (VM) side, you can automate a lot of that through a hypervisor manager, but that does not extend to the attributes of the actual server connection, or the virtual machine connection.

Virtual Connect, because it virtualizes those connections and the way you manage them, makes it very straightforward to migrate the server connections and their profiles, not only with the movement of virtual machines, but also with the movement of whole hypervisors across physical machines. It extends across the physical and the virtual, and handles the automation and migration of all those connection profiles.

Gardner: So, we're gaining some speed here. We’re gaining mobility. We're able to maintain our cost efficiencies from the virtualization, because of our better management of these network issues, but don’t such technologies as soft switches pretty much accomplish the same thing?

Kendall: Soft switches can be an important part of the infrastructure you put together around virtual machines. One of the things about soft switches is that it’s really important how you use them. If you use soft switches combined with some of the upstream switches to do all this, then you can add latency to an already complex network. If you use Virtual Connect, which is based upon industry-standard protocols, together with a soft switch operating in a simple pass-through mode, then you don’t have the latency problem, and you maintain the flexibility of Virtual Connect.

The other thing you need to be careful of is that some of the new soft switches out there use proprietary protocol extensions to accomplish the ability to track the movement of the virtual machine, along with its associated connection protocol. These proprietary protocol extensions sometimes require upstream products that can accept them, and require new hardware, switches, and management tools. That can add a lot to the cost of upgrading an infrastructure.

Gardner: Thank you, Michael. We’re now going to look at another important issue around virtualization, and that is configuration and management. This has become quite an issue in terms of complexity. Managing physical servers, when we get into large numbers, is, in itself, complex. When we add virtualization and dynamic provisioning, and look to recover costs from energy and utilization, we add yet another dimension to the complexity.

We’re going back to Shay Mowlem to talk a little bit about this notion of data collection, management, configuration, and automation. We'll talk about how visibility into what’s going on in the virtualization instances, data centers, and across the infrastructure becomes critical. How are companies gaining better visibility across the virtualized data center, compared to what they were perhaps doing with purely physical ones?

Mowlem: IT infrastructures really are becoming more ambiguous. With the addition of virtual machines to data centers that are already leveraging other virtualization technologies in their storage area networks -- virtual LANs and so on -- all of that makes a problem much harder to identify and fix. That has an impact on management cost and service quality.

Proof for the business

For IT to realize the large-scale cost benefits of virtualization in their production environments, they need to prove to the business that service performance and quality are not going to be lost as they incorporate virtualized servers and storage to support the systems. We've seen that the ideal approach should include a central vantage point from which to detect, isolate, and prevent service problems across all infrastructure elements and heterogeneous servers, spanning physical and virtual networks and storage, and all the subcomponents of a service.

It needs to include the ability to monitor not only the health of the infrastructure, but also the health of the business service. In other words, be able to monitor and understand all of the infrastructure elements and how they relate to one another -- servers, networks, storage -- and then also be able to monitor the health and the performance of the service from the perspective of the business user.

It's sort of a bottom-up and top-down view, if you will, and this is an area in which HP Software has invested very heavily. We provide tools today that offer native discovery and dependency mapping of all infrastructure, physical and virtual, and then store that information in our central universal configuration management database (UCMDB), where we track the make-up of a business service, all of the infrastructure that supports that service, and the interdependencies that exist between the infrastructure elements, and then manage and monitor that on an ongoing basis.
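
As a rough illustration of why centrally stored dependencies matter, here is a minimal sketch of a CMDB as a dependency graph. It is a generic toy model, not UCMDB's actual schema or API; it just shows how impact analysis becomes a simple graph traversal once relationships are recorded in one place.

```python
# Minimal sketch of a configuration management database as a dependency
# graph: configuration items (CIs) plus "depends on" edges. This is a
# generic illustration, not UCMDB's actual data model or API.

from collections import defaultdict

class MiniCMDB:
    def __init__(self):
        self.dependents = defaultdict(set)   # CI -> CIs that depend on it

    def add_dependency(self, item: str, depends_on: str) -> None:
        self.dependents[depends_on].add(item)

    def impacted_by(self, failed_ci: str) -> set:
        """Everything reachable upward from a failed CI -- that is, every
        service at risk when this piece of infrastructure misbehaves."""
        impacted, stack = set(), [failed_ci]
        while stack:
            for dep in self.dependents[stack.pop()]:
                if dep not in impacted:
                    impacted.add(dep)
                    stack.append(dep)
        return impacted

cmdb = MiniCMDB()
cmdb.add_dependency("order-service", "vm-web-01")
cmdb.add_dependency("order-service", "vm-db-01")
cmdb.add_dependency("vm-web-01", "hyp-host-03")
cmdb.add_dependency("vm-db-01", "hyp-host-03")
cmdb.add_dependency("hyp-host-03", "san-array-02")

# A SAN array degrades: which business services should operations worry about?
print(cmdb.impacted_by("san-array-02"))
# {'hyp-host-03', 'vm-web-01', 'vm-db-01', 'order-service'}
```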

We also track what has changed over time, what was implemented, and who made those changes. Then, we can leverage that information to answer important questions about how a particular service has been behaving over time.

We can retrieve core metrics about performance and behavior on all layers of the virtualization stack, for example. Then, we can use this to provide very accurate and fast problem detection and isolation and deep application diagnostics.

This can be quite profound. We found, through a return on investment (ROI) model that we worked on based on data from IDC, that effective utilization of HP’s Discovery and Dependency Mapping technology, with that information stored in a central UCMDB, can on average help reduce the mean time to repair of outages by 76 percent, which is a massive benefit from effective consolidation of this important data.
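
To make that 76 percent figure concrete, a quick worked example helps; the baseline repair time and outage count below are hypothetical inputs, not data from IDC's study.

```python
# What a 76% mean-time-to-repair reduction means in practice.
# Baseline figures below are hypothetical illustrations, not IDC's data.

baseline_mttr_hours = 4.0        # assumed average repair time per outage
outages_per_year = 50            # assumed outage count
mttr_reduction = 0.76            # figure cited from the HP/IDC ROI model

improved_mttr = baseline_mttr_hours * (1 - mttr_reduction)
hours_saved = (baseline_mttr_hours - improved_mttr) * outages_per_year

print(f"MTTR drops from {baseline_mttr_hours:.1f} h to {improved_mttr:.2f} h")
print(f"~{hours_saved:.0f} outage-hours avoided per year")
# MTTR drops from 4.0 h to 0.96 h; ~152 outage-hours avoided per year
```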

Gardner: Maybe I made a mistake that other people commonly make, which is to think of managing virtualized instances as separate and different. But, I suppose virtualization nowadays is becoming like any other system across the IT infrastructure.

Mowlem: Absolutely. It’s part of a mix of tools and capabilities that IT has that, in production environments, are ultimately there to support the business. Having an understanding of and being able to monitor all these systems, understanding their interdependencies, and managing them in an integrated way with the understanding of that business outcome, is a key part of how companies will be able to truly recognize the value that virtualization has to offer.

Gardner: Okay, I think we understand the problems around this management issue in trying to scale it and bring it into line with the way the entire data center is managed. What about the solutions? What in particular should organizations consider when approaching this total configuration issue?

Business service management

Mowlem: We offer a host of solutions that help companies manage virtualized environments end to end. But as we look at monitoring -- and essentially a configuration database tracks all of the core interdependencies of infrastructure and their configuration settings over time -- we talk about the business service management portfolio of HP Software. This includes the Discovery and Dependency Mapping product that I talked about earlier, the UCMDB as a central repository, and a number of tools that allow our customers to monitor their infrastructure at the server level, at the network level, and also at the service level, to ensure the ongoing health and performance of their environment.

Gardner: You mentioned these ROI figures. Typically, is there any comparison of how organizations start down the virtualization path, and how they can then begin to recover more cost and cut their total cost by adopting some of these solutions?

Mowlem: We offer a very broad portfolio of solutions today that manage many different aspects of virtualization, from testing, to ensuring that the performance of a virtualized environment in fact meets the business service level agreements (SLAs). We talked about monitoring already. We have automation as part of our portfolio to achieve efficiency in provisioning and change execution. We have a solution to manage assets, so that software licenses are tracked carefully and properly.

We also have a market-leading solution in backup and recovery with our Data Protector offering, to help customers scale their backup and recovery capabilities across their virtualized servers. What we’ve found in the course of our discussions is that many customers recognize that all of these are critical and important areas for them to be able to effectively incorporate virtualization into their production environments.

But, generally, there are one or two very significant pain areas. It might be the inability to monitor all of their servers -- physical and virtual -- through one single pane of glass, or it may be related to compliance enforcement, because there are so many different elements out there. So, the answer isn’t always the same. We find that companies choose to start down the path of effective management through some of these initial product areas, and then expand from there.

Gardner: Well, I suppose it’s never too late to begin. If you’re even partially into a virtualization initiative, or maybe even deep in and you’re starting to have problems, there are ways in which you can bring in management features at any point in that maturity.

Mowlem: We definitely support a very modular offering that allows people to focus on where they’re feeling the biggest pain first, and then expand from there as it makes sense to them.

Gardner: Let’s now move over to Ryan Reed at EDS. As organizations get in deeper with virtualization, and as they consider on a larger scale their plans for modernization, consolidation, and the overall cost efficiency of their resources, how do they approach this problem of placement? It seems that moving toward virtualization almost forces you to think about your data center from a more holistic, long-term, and strategic perspective.

Raising questions

Ryan Reed: Right, Dana. For a lot of companies, when they consider large-scale virtualization and modernization projects, it often raises questions that help them devise a plan and a strategy around how they’re going to create a virtual infrastructure and where that infrastructure is going to be located.

Some of the questions I see are around the physical data center itself. Is the data center meeting the needs of the business? Is it designed and built for resiliency, and does it provide the greatest value to the business services?

You’ll also find that a lot of times that’s not the case for data centers that were built 10 or 15 years ago. Business services today demand higher levels of uptime and availability. Those data centers, if they were to fail due to a power outage or some other source of failure, can no longer meet the uptime requirements for those types of business services. So, it’s one of the first questions that a virtual infrastructure program raises for the program manager.
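
Those uptime demands are easy to quantify. As a quick illustration, the availability targets below are generic industry tiers, not figures from this discussion:

```python
# Allowed downtime per year at common availability targets -- a quick way
# to see why an older data center with, say, three-nines power resiliency
# can't host a service with a four-nines SLA. Targets are generic industry
# tiers, not figures quoted in the discussion.

HOURS_PER_YEAR = 24 * 365

for availability in (0.99, 0.999, 0.9999):
    downtime_minutes = HOURS_PER_YEAR * (1 - availability) * 60
    print(f"{availability:.2%} uptime -> {downtime_minutes:7.1f} min/year down")
# 99.00% uptime ->  5256.0 min/year down
# 99.90% uptime ->   525.6 min/year down
# 99.99% uptime ->    52.6 min/year down
```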

Another question that often comes up is around the storage network infrastructures. Where are they located physically? Are they in the right place? Are they available at the right times? A lot of organizations may be required by legislative or regulatory requirements to keep their data within a particular state, country, or region. A lot of the time, when people are planning for virtual server infrastructures, that becomes a pretty prominent discussion.

Another one would be around the internal skill sets of the enterprise. Does the company or the organization have the skill set necessary in-house to do large-scale virtualization and data-center modernization projects? Oftentimes, they don’t, and if they don’t, then what is their action? What is their remedy? How are they going to resolve that skill gap?

Lastly, a lot of companies, when they’re doing virtualization projects, start to question whether or not all of the activities around managing the infrastructure are actually core to their business. If they’re not core to the business, then maybe this is something they don’t have to be doing themselves anymore.

Taking all that into consideration helps drive a conversation around planning and being able to create the right type of process. Oftentimes, it leads to a discussion around outsourcing. EDS, which is an HP company, provides organizations and enterprises with full IT management and IT infrastructure management. That includes everything from implementation to the ongoing management of virtual, as well as non-virtual, infrastructure environments.

The client data center -- or on-premises, as you called it, Dana -- is an option available to a lot of enterprises out there that have already invested heavily in their current data-center facility, as well as the infrastructure. They don’t necessarily want to move it to an outsourcer-supplied data center. So, on-premises is a business model that’s available and becoming common for some of the larger virtualization projects.

The traditional outsourcing model is one where enterprises realize that the data center itself is no longer a strategic asset to the business. So, they move the infrastructure to an outsourcer's data center, where the services provider -- the outsourcing company -- can provide the best services for virtual infrastructures during the design and planning phase.

Making the most sense

This makes the most sense for these types of organizations, because you’re going to be doing a migration from physical to virtual anyway. So, you might as well take advantage of the skills that are available from the outsourcing services provider to move that to their data center, and have them apply best-in-breed practices and technology to manage that infrastructure.

Then you also mentioned what would be considered a hybrid model, in which virtual and non-virtual infrastructure can be managed from either a client-owned data center or the services provider's data center. There are various models to consider. A lot of the questions that lead into how to plan for this type of virtual infrastructure also lead into a conversation about where an outsourcer can add the most value.

Gardner: Is there anything about virtualizing your data center and more and more servers that makes outsourcing perhaps easier, or an option that some people hadn’t considered in the past and should?

Reed: Sure. Outsourcers nowadays are very skilled at providing infrastructure services to virtual server environments. That would include things like profiling, analysis, planning, mapping of source servers to targets, and creating a business case for understanding how it’s going to impact the business in terms of ROI and total cost of ownership (TCO).

Doing the actual implementation, the ongoing management of the operating systems, both virtual and non-virtual, for guests and hosts, patching of the systems, monitoring to make sure that the systems are up and running, responding to events, escalating events, and then doing things like backup and restore activities are really core to an outsourcing services provider’s business. That’s what they do.
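
As a rough sketch of the source-to-target mapping analysis Reed describes, the snippet below places physical servers onto virtualization hosts with a first-fit heuristic. The sizing rule, the 80 percent headroom factor, and all server figures are illustrative assumptions, not EDS's actual profiling methodology or tooling.

```python
# Illustrative first-fit consolidation plan: map physical source servers
# onto virtualization hosts by CPU and memory headroom. The heuristic and
# all figures are assumptions for the sake of example, not EDS's tooling.

from dataclasses import dataclass, field

@dataclass
class SourceServer:
    name: str
    avg_cpu_ghz: float      # measured average utilization, not capacity
    avg_mem_gb: float

@dataclass
class TargetHost:
    name: str
    cpu_ghz: float
    mem_gb: float
    guests: list = field(default_factory=list)

    def fits(self, s: SourceServer, headroom: float = 0.8) -> bool:
        cpu = sum(g.avg_cpu_ghz for g in self.guests) + s.avg_cpu_ghz
        mem = sum(g.avg_mem_gb for g in self.guests) + s.avg_mem_gb
        return cpu <= self.cpu_ghz * headroom and mem <= self.mem_gb * headroom

def plan(sources, hosts):
    """Place the largest candidates first; return whatever doesn't fit."""
    unplaced = []
    for s in sorted(sources, key=lambda x: -x.avg_mem_gb):
        host = next((h for h in hosts if h.fits(s)), None)
        if host:
            host.guests.append(s)
        else:
            unplaced.append(s)
    return unplaced

sources = [SourceServer(f"phys-{i:02d}", 1.2, 6.0) for i in range(12)]
hosts = [TargetHost(f"vmhost-{i}", cpu_ghz=16.0, mem_gb=64.0) for i in range(2)]
leftover = plan(sources, hosts)
for h in hosts:
    print(h.name, "->", len(h.guests), "guests")   # vmhost-0 -> 8, vmhost-1 -> 4
print("needs another host:", [s.name for s in leftover])   # []
```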

We don’t expect our clients to have the same level of expertise as EDS does. We’ve been doing this for 45 years, and it’s really the critical piece of what we do. So, there are many things to consider when choosing an outsourcing provider, if that’s the way to go. Benefits can range dramatically, from reducing your TCO, to increasing levels of availability within the infrastructure, to being able to expand and use the services provider's global delivery service centers that are available around the world.

Choose the right partner, and they can grow with you. As your business grows and as you expand your market presence, choosing the services provider that has the capability and capacity to deliver in the areas that you want to grow makes the most sense.

Additionally, you can take advantage of things like low-cost delivery centers that the services provider has built up over the years -- service centers in low-cost regions. EDS considers this to be the best strategy. Having resources available in low-cost countries to provide the greatest value to clients is important when it comes to understanding the best approach to selecting a good services provider.

Gardner: So, for those organizations that are looking at these various options for sourcing, how do they get started? What’s a good way to begin that cost benefit analysis?

Reed: Well, there’s information available through the eds.com website. Go there and search on "virtualization," and the first search result that comes back has lots of information about what to expect in terms of an engagement, as well as examples of where virtualization has been done with other organizations, similar to what a lot of industries are facing out there.

You can see a comparison of like-for-like scenarios to determine whether or not a client engagement would make sense, based on the case studies and success stories that are available out there as well. There are also industry tools available from our partner organizations. HP has tools available. VMware has tools available to help our clients understand where savings can come from. And, of course, EDS is also available to provide those types of services for our clients too.

Gardner: Okay. We’ve been looking at three important angles to consider when moving to virtualization: being aware, at a detailed level, of how the network interfaces and interconnects work, and moving toward a more virtualized approach to interconnects. We also looked at the management issues -- configuring virtualized servers not only as they stand alone, but managing them in total, as part of the larger IT mix. And we looked at how to weigh different sourcing options in terms of cost, skills, availability of resources, energy costs, and a general track record of being competent and proven with virtualization.

I want to thank our three guests today. We’ve been joined by Michael Kendall, worldwide Virtual Connect marketing lead at HP. We've been joined by Shay Mowlem, strategic marketing lead for HP Software and Solutions, and Ryan Reed, product manager for EDS Server Management Services.

This is Dana Gardner, principal analyst at Interarbor Solutions. We also want to thank the sponsor of our podcast discussion today, Hewlett-Packard, for underwriting its production. Thanks for listening, and come back next time.

Attend a virtual web event from HP on July 28-30, "Technology You Need for Today's Economy." Register for the free event.

Download a pdf of this transcript.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on the key elements of successful and cost-effective virtualization across enterprise implementations. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.
