Wednesday, July 15, 2009

Panda's SaaS-Based PC Security Manages Client Risks, Adds Efficiency for SMBs and Providers

Transcript of a BriefingsDirect podcast on security as a service and cloud-based anti-virus protection and business models.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com.

Download the transcript. Learn more. Sponsor: Panda Security.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on automating and improving how PC security can be delivered as a service. We'll discuss how the use of cloud-based anti-virus and security protection services is on the rise, and how small to medium-size businesses (SMBs) can find great value in the software-as-a-service (SaaS) approach to managing PC support.

We'll also examine how the use of Internet-delivered security provides a strong business opportunity for resellers and channel providers to the businesses trying to protect all of their PCs, regardless of location.

Recent announcements by Panda Security for cloud-based PC anti-virus tools, as well as a Managed Office Protection solution, highlight how "security as a service" is growing in importance and efficiency.

Here to help us better understand how cloud-delivered security tools can improve how PCs are protected across the spectrum of end users, businesses, resellers, and managed-service providers, we're joined by Phil Wainewright, independent analyst, director of Procullux Ventures, and a ZDNet SaaS blogger. Welcome back to the show, Phil.

Phil Wainewright: It's great to be here, Dana.

Gardner: We're also joined by Josu Franco, director of the Business Customer Unit at Panda Security. Welcome to the show, Josu.

Josu Franco: Hello, Dana. Nice to be here.

Gardner: Let's start, Josu, with looking at the big picture. The general state of PC security, the SaaS model, and the dire economy are, for many organizations, conspiring to make a cloud-based solution more appropriate, perhaps now more than ever. Tell us why a cloud-based solution approach to PC security is a timely approach to this problem.

Franco: There are two basic problems that we're trying to solve here, problems which have increased lately. One is the level of cyber crime. There are lots and lots of new attacks coming out every day. We're seeing more and more malware come into our labs. On any given day, we're seeing approximately 30,000 new malware samples that we didn't know about the day before. That's one of the problems.

The second problem that we're trying to solve for companies is the complexity of managing security. You have systems with more mobility. You have more vectors for attack -- in other words, more ways in which a system can be infected. If you combine that with the use of more and more devices on the network, that combination makes it very difficult for administrators to really be on top of the security mechanisms they need to watch.

In order to address the first problem, the levels of cyber crime, we believe that the best approach that we, as an industry, need to take is an approach that is sustainable over time. We need to be able to address these rising levels of malware in the future. We found the best approach is to move processing power into the cloud. In other words, we need to be able to process more and more malware automatically in our labs. That's the part of cloud computing that we're doing.

In order to address the second problem, we believe that the best approach for most companies is via management solutions that are easier to administer, more convenient, and less costly for the administrators and for the companies.

Centralized approach

Gardner: Now, Phil, we've seen this approach of moving out toward the Web for services -- the more centralized approach to a single instance of an application, the ability to manage complexity better through a centralized cloud-based approach across other applications. It seems like a natural evolution to have PC security now move to a SaaS model. Does that make sense from your observations?

Wainewright: It certainly does. To be honest, I've never really understood why people wanted to tackle Web-based malware in an on-premise model, because it just doesn't make any sense at all.

The attacks are coming from the Web. The intelligence about the attacks obviously needs to be centralized in the Web. It needs to be gathering information about what's happening to clients and to instances all around the Web, and across the globe these days. To have some kind of batch process, whereby your malware protection on your PC is something that gets updated every week or even every day, is just not fast enough, because the malware attacks are going to take advantage of those times when your protection is not up-to-date.

Really making sure that the protection is up-to-date with the latest intelligence and is able to react quickly to new threats as they appear means that you've got to have that managed in the center, and the central management has got to be able to update the PCs and other devices around the edge, as soon as they've got new information.

Gardner: So, the architectural approach of moving more back to the cloud, where it probably belongs, at least from an architectural and a timeliness or real-time reaction perspective, makes great sense. But, in doing this, we're also offloading a tremendous burden from the client: large agents, tremendous demand on the client's processing, the need to move large files around, drag on the networks, and the labor of moving around the organization and physically getting to these machines. It seems almost blatantly obvious that we need to change this model. Do you agree, Josu?

Franco: I do. One point that I want to make, though, is that when we refer to SaaS, we use the term to refer to the management console of the security solutions. So, SaaS for us is an interface for the administrator -- an interface obviously based on the Web.

When we refer to cloud computing, it refers to our capacity to process larger and larger volumes of malware automatically, so that our users are going to be better protected. Ideally, cloud computing and SaaS should go together, but that's going to take a little bit of time, although, in our case at least, all of our solutions align with those two concepts. We've been moving towards that. The latest announcements that we've made about this product for consumers certainly go in that direction.

I just want to make clear that SaaS for me is one thing. Cloud computing is a different thing. They need to work together, but as a concept we should not confuse the terms.

Wainewright: That's very important, Dana. One of the key things that people misunderstand about notions of cloud computing and SaaS is this idea that everything gets sucked up into the network and you don't do anything on the client anymore.

That's actually a rather primitive way of looking at the SaaS and cloud spectrum, because the client itself is part of the cloud. It's a device that interacts with other peers in the Web environment, and it's got processing power and local resources that you need to take advantage of.

The key thing is striking the right balance between what you do on the client and what you do in the cloud, and also being cognizant of where people are at in terms of their overall installed infrastructure and what works best in terms of what they've got at the moment and what their roadmap is for future migration.

Separating SaaS and cloud

Gardner: I see. So, we do need to separate SaaS and cloud. We need to recognize that this is a balance and not necessarily an all-or-nothing approach -- neither all-cloud nor all-client. This seems to fit particularly well into the demands of an SMB, a distributed business, or perhaps even a multi-level marketing (MLM) company, where there are people working at home, on the road, in remote offices, and it's very difficult for the administrators or the managed providers or resellers to get at these machines. Moving more of that balance towards the cloud is our architectural goal.

Let's move to the actual technical solution here. Josu, you described some new products. Clearly, there's still an agent involved, coming down to the PC. I wonder if you could describe the two big announcements you've had, one around this consumer security cloud service, and the second around your Managed Office Protection solution.

Franco: The announcement that we've made about Cloud Antivirus is a very important one for us, because we've been working on this for a couple of years now, and it involves rebuilding the endpoint agent from scratch.

We saw the opportunity, or, I would say, the necessity of building a much lighter agent, much faster than previous agents, and, very importantly, an agent that is able to leverage the cloud computing capacity that we have, which we call "Collective Intelligence," to process malware automatically.

As I said before, this aligns with our technology vision, which is basically these three ideas: cloud computing, or Collective Intelligence, as we call it, regarding the capacity to process malware; SaaS as the approach that we want to take for managing our security solutions; and third, nano-architecture as the new endpoint architecture, on which we want to base all of our endpoint-based solutions.

So, Cloud Antivirus is a very tiny, very fast agent that sits on the endpoint and is going to protect you with some level of local intelligence. I want to stress the fact that we don't see agents for protecting the endpoint disappearing anytime soon. We believe that the more intelligence we can pack into the agent, the better, but always respecting the needs of consumers -- that is, to be very fast, very light, and very transparent to them.

This works by connecting with our infrastructure and asking for file determinations when the local agent doesn't know about a particular file that it needs to inspect.
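
To make that flow concrete, here is a minimal sketch of how such a cloud file-determination lookup might work: the endpoint hashes the unknown file, asks a cloud service for a verdict, caches the answer, and falls back to local heuristics when offline. The URL, API shape, and function names are hypothetical illustrations, not Panda's actual protocol.

```python
import hashlib
import json
import urllib.request

# Hypothetical cloud endpoint; the real Panda protocol is not public here.
CLOUD_LOOKUP_URL = "https://cloud.example-av.com/v1/file-determination"

def sha256_of(path):
    """Hash the file so only a small digest, not the file itself, is sent."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def classify(path, local_cache):
    """Return 'clean', 'malware', or 'unknown' for a file on the endpoint."""
    digest = sha256_of(path)
    # 1. Local intelligence first: known verdicts never leave the machine.
    if digest in local_cache:
        return local_cache[digest]
    # 2. Otherwise, ask the cloud for a determination.
    try:
        req = urllib.request.Request(
            CLOUD_LOOKUP_URL,
            data=json.dumps({"sha256": digest}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            verdict = json.load(resp)["verdict"]
        local_cache[digest] = verdict  # cache so the next scan stays local
        return verdict
    except OSError:
        # Offline or slow network: fall back to local heuristics only.
        return "unknown"
```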

The second announcement is more than an announcement. Panda Managed Office Protection is a solution that we've been selling for some time now, and it's working very well. It works by having this endpoint agent installed locally on every desktop or laptop PC. Once you've downloaded this agent, which works transparently for the end user, all the management takes place via SaaS.

It's a management console that's hosted in our infrastructure, in which any admin, regardless of where they are, can manage any number of computers, regardless of where those are located. This works by having every agent talk to this infrastructure via the Internet, and also talk to other agents that might be installed on the same network, distributing updates or other types of policies.
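
As a rough illustration of that peer-distribution idea, an agent might first ask its LAN neighbors for a signature update before downloading it over the Internet, so one download can serve a whole office. The port number and message format below are invented purely for this sketch; they are not Panda's wire protocol.

```python
import socket

PEER_PORT = 39400             # invented port, purely for the sketch
UPDATE_QUERY = b"HAVE-UPDATE?"

def fetch_update_from_peers(peers, version):
    """Ask agents on the same LAN for an update before hitting the Internet."""
    for host in peers:
        try:
            with socket.create_connection((host, PEER_PORT), timeout=0.5) as s:
                s.sendall(UPDATE_QUERY + version.encode())
                reply = s.recv(1 << 20)
                if reply:         # a peer already has it; reuse its copy
                    return reply
        except OSError:
            continue              # peer offline or not listening; try the next
    return None                   # nobody local has it; fall back to the cloud
```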

Gardner: Now, an interesting and innovative approach here is that you've made the Cloud Antivirus agent free to consumers, which should allow them to get protection for virtually nothing. But, in doing so, you've also increased the population of agents from which you can gather instances of problems. The agent immediately sends those back to your central cloud processing, which can then create the fix and deliver it back out. Is that oversimplifying it?

Staying better protected

Franco: That's a very true statement. We're not the first ones giving away a security agent for free. There are some other companies that I think are using the Freemium model. We've just released this very first version of Cloud Antivirus. We're distributing it for free with the idea that, first, we want people to know about it. We want people to use it, but, very importantly, the more people that are using it, the better protected they're all going to be. As you say, we're going to be gathering intelligence about the malware that's hitting the streets, and we're going to be able to process that faster and to protect all those users in real-time.

Gardner: Phil, this strikes me as Pandora opening the box. I can't imagine the marketplace going back in any meaningful way to the older methods and architectures for security. Do you agree with me that this is a compelling shift in the market?

Wainewright: It is, obviously. We're talking about network scale here. The malware providers are already using network scale to great effect, particularly in the use of these zombie elements of malware that effectively lurk on devices around the Web, and are called into action to coordinate attacks.

You've got these malware providers using the collective intelligence of the Web, and if the good guys don't use the same arsenal, then they're just going to be left behind.

I think the other thing that’s great about this Freemium model is that, even though the users aren't paying anything for the software, in effect they're giving something back, because the intelligence that's being collected is making the potential protection stronger. So, it's a great demonstration of how you can derive value even from something that is actually distributed for free.

Gardner: Sort of all for one, one for all?

Wainewright: Yes, that's right.

Gardner: So, if this works well for security, it strikes me that this also makes a great deal of sense for remediation, general support, patches, upgrades, or managing custom applications. It certainly seems to me that crossing the Rubicon, if you will, into security from a cloud perspective will open up an opportunity for doing much, much more across the general total cost of ownership equation for PCs. Is that in your future? Do you subscribe to that vision, Josu?

Franco: Yes, I do. First, we've been a specialized player in the anti-malware business, but I certainly do see the opportunity to do more things once you have an agent installed on the endpoint -- to use the same management approach to configure the PC, or to do a remote session on it from the same console. For now, we're just doing the full anti-malware and personal firewall in this way, but we do see the opportunity of adding more PC lifecycle management functionality within it.

Gardner: That brings us back to the economy. Phil, I've heard grousing from CEOs, administrators, and just about anybody in the IT department for years about how expensive it is, from the total cost perspective, to maintain a rich PC-client experience. Nowadays, of course, we don't have the luxury of, "It would be nice to cut cost." We really have to cut cost. Do you see a significant move towards more cloud-based services as an economic imperative?

Increasing the SaaS model

Wainewright: Oh yes, and one of the interesting phenomena has been that things like help desk, security, and remote support have increasingly been delivered using the SaaS model, even in large enterprises.

If you are the chief security officer for a large enterprise that's very dependent on the Web for elements of its operations, then you've got a tremendously complex task. There's an increasing recognition that it's much better to access pools of expertise to get those jobs done than for everyone to try to become a jack of all trades and inevitably fall behind the state of the art in the technology.

More and more, in large enterprises, but also in smaller businesses, we're seeing people turning to outside providers for expertise and remote management, because that's the most cost effective way to get at the most up-to-date and the most proficient knowledge and capabilities that are out there. So yes, we're going to see more and more of that, spot on.

Gardner: I understand how this is a benefit to end users -- a simple download and you're protected. I understand how this makes sense for SMBs who are trying to manage PCs across a distributed environment, but without perhaps having an IT department or security expertise on staff. But, I'm not quite sure I understand how this relates to an additional business-model benefit for a reseller or a value-added provider of some kind, perhaps a managed service provider.

Josu, help me understand a little bit better how this technology shift and some of these new products benefit the channel.

Franco: In the current economic times, more and more resellers are looking to add more value to what they are offering. For them, margins on selling hardware or software licenses are getting tougher to come by and are being reduced. So, the way for them to really see the opportunity in this is that they can now offer remote management services without having to invest in infrastructure or in any other type of license that they may need.

It's really all based on the SaaS concept. They can now say to the customers, "Okay, from now on, you'll forget about having to install all this management infrastructure in-house. I'm going to remotely manage all the endpoint security for you. I'm going to give you this service-level agreement (SLA), whereby I'm going to check the status of your network two or three times a week, or once a day, and if there is any problem, I can configure it remotely, or I can just spot where the problems are and fix them remotely."

This means that for the end user it's going to reduce the operating cost, and for the reseller it's going to increase the margins for the services they're offering. We believe that there is a clear alignment among the interests of end users and partners, and, most importantly, also from our side with the partners. We don't want to replace the channel here. What we want is to become the platform of choice for these resellers to provide these value-added services.

Gardner: Does Panda then lurk behind the scenes, providing the picks and shovels for the solution? Do you allow them to brand around it? Are you an OEM player? How does that work?

Franco: We can certainly play with a certain level of branding. We've been doing so with some large sales that we've made, for example, here in Spain. But, most of them want to start by touching and kicking the tires and seeing if it works. They don't need the re-branding in the first instance, but, yes, we've seen some large providers who do want some customization of the interface for their logos, and that's certainly a possibility.

Gardner: We've also seen in the market more diversity of endpoints. We've seen, for cost and convenience, reason to move towards netbooks. Smartphones have certainly been a fast-growing part of the mix, despite the tough economy. This model of combining the best of SaaS, the best of cloud, and a small agent coordinating and managing them strikes me as something that will move beyond the PC into a host of different devices. Am I wrong on that, Phil?

Attacking the smartphones

Wainewright: No, you're absolutely right. One of the scary things is that many of us are carrying around smartphones now. It's only a matter of time before these very capable, intelligent platforms also become vulnerable to the kind of attacks that we've seen on PCs.

On top of that, there is a great deal more support required to make sure that users get the best out of those devices. Therefore, we're going to see much more of this kind of remote support being provided.

For example, the expertise to support half a dozen different types of mobile devices within an organization is something that the typical small business can't really keep up with. If they're able to access a third-party provider that has the infrastructure and the experts to do that, then it becomes a manageable issue again. So, yes, we're going to see a lot more of this.

Ultimately, it's going to give us a lot more freedom just to be able to get on with our jobs, without having to worry about understanding how the device works, or, even worse, working out how to fix it when something goes wrong. Hopefully, there will be far fewer instances of that downtime happening.

Gardner: Well, let's hope that we nip this malware in the bud, across multiple devices, in the cloud, before it ever gets to the device, removing the whole incentive or rationale for trying to create these problems in the first place. So, maybe moving more into the cloud actually starts stanching the problem at its root.

Let's move forward now to some of the proof points. We've talked about this in theory. It certainly makes sense to me from an architectural and vision perspective, but what does it mean in dollars and cents? Josu, do you have any examples of organizations that have started down this path -- SMBs perhaps, and/or resellers? How has this affected their bottom line?

Franco: Yes, we do have very good examples of people who have moved along this path. Our largest installation with the Managed Office Protection product is over 23,000 seats in Europe. It's a very large educational institution, and they're managing their entire network with just a few people. This has considerably reduced their operating cost. They don't need to travel that much to see what's happening with their systems.

We also have many other examples of our resellers that are actually using this product, not only to manage business spaces, but also to manage even consumer spaces. I think that we're going to see a convergence between the world of the consumer and the world of what we call a business.

Moving to the consumer space

Some analyst friends are talking a lot about the consumerization of IT. I think that we'll also see consumers start using technologies that perhaps we thought belonged in the business space. I'm talking, for example, about the ability for a reseller to centrally manage the PCs of consumers. This is an interesting business model, and we have some examples of this emerging trend. In the US, we have some resellers who are managing thousands of computers from their basement.

So, even though our intention was to position this product for SMBs, we do see that there are some verticalized niches in the market into which this model fits really well. Talking about highly distributed environments, what's more highly distributed than a network of consumers, everyone in their own home, right? So, I think this is definitely something that we're going to see happening more and more in the future.

Gardner: Without going down this very interesting track too much, we're starting to see some CIOs cotton to the notion of letting people pick their end device, but then accessing services back in the enterprise, and with some modest governance and security. It sounds as if a service like this might fill that role.

Then, in addition to the consumer or end user's choice of device, it seems to me that we're also in a position now for the providers of the bit pipes -- the Internet, telephony, communications, and collaboration -- to start offering the whole package: a PC with security, remediation, and protection, for a flat fee per month. Do you think these two things are around the corner, Phil, or maybe three or four years out?

Wainewright: To the previous point, people often think of the consumer Web as completely separate from the business Web. In fact, the reality today is that individual users at home are just as likely to be doing business or work things on their home PCs as they are to be actually doing home things, or even running side businesses, on their work PCs.

If someone is auctioning off their collection of plastic toys on eBay, are they an individual consumer or are they a business? The lines are blurring. I think what you need to look at is the opportunity cost. If it's going to cost me time that I can't afford, or if it means I'm not going to be able to earn money that I could otherwise be earning, then it's going to be worth my while to pay that monthly subscription.

One of the key things that people forget, when they're comparing the cost of a SaaS solution or a Web-provided solution to a conventional installed piece of packaged software, is that they never look at the resource and time that the user actually spends to get things set up with the packaged software, to fix things when they go wrong, or to do upgrades.

The value that's being created and is being shared out by the vendors and the providers in the SaaS model is that time saving and opportunity cost saving.

Gardner: Now, we have to assume that the security is going to be good, because if it doesn't protect, that's going to become quite evident. But what we're also talking about, now that I understand it better, Josu, is simplicity and convenience -- vis-à-vis these devices and security, but also in the larger context of comfort and trust: that the device will work, that the network will be supported, and that I'm not going to run into trouble. Is that what we're really talking about here as a value proposition -- simplicity and convenience?

Franco: As you said, it needs to protect. It needs to be very effective at a time when we're seeing really huge amounts of malware coming out every day. So, that's a precondition. It needs to protect.

But security is something that's always there in the background, and many users see it as something they have to live with rather than as a positive application; sometimes it even annoys people. Well, let's make it as simple, as transparent, as fast, and as imperceptible as possible. That's what this is all about.

Gardner: Very good. We've been learning a lot today about PC security and how it can be delivered as a service in conjunction with cloud-based central management and processing. This architectural approach is now quite prominent for security, and perhaps will become more prominent across other aspects of client device support, bringing convenience, lower cost, and higher trust. So, a lot of goodness. I certainly hope it works out that way.

Cost and protection benefits, along with productivity benefits and, as a result, less downtime, are all good things. We've looked at this across the spectrum of end users, businesses, resellers, and managed service providers. Helping us understand it, we've been joined by our panel, and I want to thank them: Phil Wainewright, independent analyst, director of Procullux Ventures, and a ZDNet SaaS blogger. I appreciate your time, Phil.

Wainewright: It's been great to be with you today, Dana.

Gardner: We've also heard from Josu Franco, director of the Business Customer Unit at Panda Security. Thank you, Josu.

Franco: It's been my pleasure, thanks.

Gardner: I also want to thank the sponsor of this discussion, Panda Security, for underwriting its production.

This is Dana Gardner, principal analyst at Interarbor Solutions, thanks for listening, and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com.

Download the transcript. Learn more. Sponsor: Panda Security.

Transcript of a BriefingsDirect podcast on security as a service and cloud-based anti-virus protection and business models. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Tuesday, July 14, 2009

Rethinking Virtualization: Why Enterprises Need a Sustainable Virtualization Strategy Over Hodge-Podge Approaches

Transcript of a BriefingsDirect podcast on the key elements of successful and cost-effective virtualization that spans general implementations.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Download a pdf of this transcript.

Attend a virtual web event from HP on July 28-30, "Technology You Need for Today's Economy." Register for the free event.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on rethinking virtualization. We’ll look at a series of three important considerations when moving to enterprise virtualization adoption.

First, we'll investigate the ability to manage and control how interconnections impact virtualization. Interconnections play a large role in allowing physical servers to support multiple virtual servers, which themselves need multiple network connections. The connections themselves can be virtualized, and we are going to learn how HP Virtual Connect is being used to solve these problems.

Second, we're going to examine the role and importance of configuration management databases (CMDBs) in deploying virtualized servers in production. When we scale virtualized instances of servers, we need to think about centralized configuration; it really helps bring management to this crucial task of preventing server sprawl and the unwieldy complexity that can often inflate the cost of virtualization projects.

Last, we're going to dig into how outsourcing, in a variety of different forms, configurations, and values, can help organizations get the most bang for their virtualization buck. That is to say, how they think about virtualization not only in terms of placement, but also in terms of where the data center, or even hybrid data centers, will reside and be managed.

Here to help us dig into these essential ingredients of successful and cost-effective virtualization initiatives are three executives from Hewlett-Packard (HP).

We're going to be speaking with Michael Kendall, worldwide Virtual Connect marketing lead. We're also going to be joined by Shay Mowlem, strategic marketing lead for HP Software and Solutions. And last, we're going to discuss outsourcing with Ryan Reed, a product manager for EDS Server Management Services.

First, I want to talk a little bit about how organizations are moving to virtualization. We certainly have seen a lot of the "ready, set, go," but when organizations start looking at the complexity, when they think about scale, when they think about the need to do virtualization for the economic pay-off, rather than simply moving one shell around from physical to virtual, or from on-premises to off-premises, the complexity in the issue starts to sink in.

Let me take our first question to Shay Mowlem. Shay, what is it that we're seeing in terms of how companies can make sure that they get a pay-off economically from this, and that it doesn’t become complexity-for-complexity's sake?

Shay Mowlem: The allure of virtualization is quite great. Certainly, many companies today have recognized that consolidating their infrastructure through virtualization can reduce power consumption and space utilization, and can really maximize the value of the infrastructure that they’ve already purchased.

Just about everybody has jumped on the virtualization bandwagon, and many companies have seen tremendous gains in their development in lab environments, in managing what I would consider to be non-mission-critical production systems. But, as companies have tried to apply virtualization to their Tier 2 and Tier 1 mission-critical systems, they're discovering a whole new set of issues that, without effective management, really run counter to the cost benefits.

The fact that virtualized infrastructure has more interdependencies means there's a higher risk profile for the services being supported. The real challenge for those companies is putting in place the right management platform to be able to truly realize those gains in production environments.

Gardner: So, when we talk about rethinking virtualization, I suppose that it really means planning and anticipating how this is going to impact the organization and how they can scale this out?

Mowlem: Yeah. That’s exactly right.

Looking at connections

Gardner: First, we're going to look at the connections, some of the details in making physical servers become virtual servers, and how that works across the network. Mike Kendall is here to tell us about HP’s Virtual Connect technology.

It’s designed to help bridge the gap between the physical world and virtual world, when it comes to the actual nitty-gritty of making networks behave in conjunction with increased numbers of virtualized server instances. This is important when we start rethinking virtualization in terms of actually getting an economic payback from the investments and the expectations that enterprises are now supporting around virtualized activities.

So, let me take it to you Mike. When we go to virtualized infrastructures from traditional physical ones, what’s different about migrating when it comes to these network connections?

Michael Kendall: There are a couple of things. When you consolidate a lot of different application instances that are normally on multiple servers, and each one of those servers has a certain number of I/O connections for data and storage, and you put them all on one server, that does consolidate the number of servers you have.

Interestingly, people have found that as you do that, it has the tendency to expand the number of network interface controllers (NICs) that you need, the number of connections you need, the number of cables you need, and the number of upstream switch ports that you need to accommodate all that extra workload going on in that server.

So, even though you can set up a new virtual machine or migrate virtual machines in a matter of minutes, it isn't as easy in the connection space. Either you have to add additional capacity for networks and for storage, add additional host bus adapters (HBAs), or add additional NICs. And, even when you move a virtual machine, you have to take down and re-set up those particular network connections. Being able to do that in a way that is harmonious is more challenging within a virtual machine environment.

Gardner: So, it’s not quite as easy as simply managing the hypervisor. We have to start thinking about managing the network. Perhaps you could tell us more about how the Virtual Connect product itself does that.

Basic rethinking

Kendall: Absolutely. Virtual Connect is a great example of how HP helps you achieve the full potential of setting up virtual machines on a server and consolidating all those workloads.

We did some basic rethinking around how to remove some of these interconnect bottlenecks. HP Virtual Connect can actually virtualize the physical connections between the server, the data network, and the storage network. Virtualizing these connections allows IT managers to set up, move, replace, or upgrade blade servers and the workloads that are on them, without having to involve the network or storage folks, and without impacting the network or storage topologies.

Rather than taking hours, days, or even weeks to get a move set up, by either setting up, adding to or moving virtual machines or physical machines, we're able to take that down literally to minutes. The result is that most deployments or moves can be accomplished a whole lot faster.

Another part of this is our new Flex-10 technology. That takes a 10-gigabit Ethernet connection and allocates that across four NIC connections. This eliminates the need for additional physical NICs in the forms of mezzanine cards or stand-up cards, additional cables, or additional switches, when setting up all of the extra connections required for virtual machines.

The average hypervisor is looking for anywhere from three to six NIC connections, and approximately two storage network connections.

If you add that all up, that can be up to a total of six to eight NICs, along with the associated cables and switch ports. The same thing is true with the two storage network connections as well.

With Flex-10, each port of an average two-port NIC can act as four NICs, for a total of eight, without having to add any additional stand-up cards, switches, or cables. As a result, from a cost standpoint, you can save up to 66 percent in additional network equipment cost over competing technology. So, with Virtual Connect you can wire everything once and then add, replace, or recover servers a whole lot faster.
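
To put rough numbers on that arithmetic: a hypervisor host that wants six data NICs plus two storage connections needs eight physical ports, each with its own cable and switch port, while two Flex-10 ports can present the same eight NICs. The unit costs in this sketch are made-up placeholders to show the shape of the comparison, not HP or market pricing.

```python
# Placeholder unit costs, purely illustrative -- not HP or market pricing.
NIC_PORT_COST = 300      # per physical NIC port (card share, cable, switch port)
FLEX10_PORT_COST = 800   # per 10GbE Flex-10 port

nics_needed = 8                  # ~6 data NICs + 2 storage connections
traditional = nics_needed * NIC_PORT_COST

flex10_ports = 2                 # each Flex-10 port presents 4 FlexNICs
assert flex10_ports * 4 >= nics_needed
flex10 = flex10_ports * FLEX10_PORT_COST

saving = 1 - flex10 / traditional
print(f"traditional: ${traditional}, Flex-10: ${flex10}, saving: {saving:.0%}")
# With these placeholder numbers the saving is ~33 percent; the 66 percent
# figure quoted above depends on actual equipment costs versus competing gear.
```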

Gardner: And, of course, not doing this in advance would erode your ability to save when it comes to these more utilized server instances.

Kendall: That’s also correct. If you can put this technology in place ahead of time, then you can save not only the purchase cost of all this additional hardware, but the operational complexity that goes along with having a lot of extra equipment to have to set up, manage, and run.

Gardner: One of the things that folks like about virtualization is an automated approach to firing off instances of servers to support an application -- for example, a database. Does that automated elasticity of generating additional server instances follow through with the Virtual Connect technology, so that it's, in a sense, seamless?

Seamless technology

Kendall: I'm glad you added in the Virtual Connect part, because if you had said "using standard switch technology," the answer to that would be no.

With standardized switch technology and standardized NIC and storage area network (SAN) HBA technology, you generally have to set up all these connections individually. Then, you have to manage them individually. Then, if you set up, add to, or migrate virtual machine instances from the virtual machine (VM) side of it, you can automate a lot of that through a hypervisor manager, but that does not extend to the attributes of the actual server connection, or the virtual machine connection.

Virtual Connect, because it does virtualize those instances in a way that you manage them, makes it very straightforward to migrate the server connections and their profiles, not only with the movement of virtual machines, but also the movement of whole hypervisors across physical machines as well. It extends the physical and the virtual, and handles the automation and the migration of all those connection profiles.
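
One way to picture what Virtual Connect virtualizes: the identity of a server's connections (Ethernet MACs, Fibre Channel WWNs, network assignments) lives in a profile object rather than in the hardware, so moving a workload means reassigning the profile, not rewiring the LAN or SAN. The sketch below is a generic model of that idea, not HP's actual API or data model.

```python
from dataclasses import dataclass, field

@dataclass
class ConnectionProfile:
    """A server's connection identity; it travels with the workload, not the slot."""
    name: str
    macs: list = field(default_factory=list)   # virtualized Ethernet MACs
    wwns: list = field(default_factory=list)   # virtualized Fibre Channel WWNs
    vlans: list = field(default_factory=list)  # network assignments

class Enclosure:
    """Toy model of a blade enclosure mapping bays to connection profiles."""
    def __init__(self):
        self.assignments = {}  # bay number -> ConnectionProfile

    def assign(self, bay, profile):
        self.assignments[bay] = profile

    def migrate(self, src_bay, dst_bay):
        # The LAN and SAN keep seeing the same MACs/WWNs after the move,
        # so no switch or storage reconfiguration is needed.
        self.assignments[dst_bay] = self.assignments.pop(src_bay)

# Usage: move a workload's identity from the blade in bay 3 to a spare in bay 7.
enc = Enclosure()
enc.assign(3, ConnectionProfile("web-01",
                                macs=["02:16:0a:00:00:01"],
                                wwns=["50:06:0b:00:00:c2:62:00"],
                                vlans=[10, 20]))
enc.migrate(3, 7)
print(enc.assignments[7].macs)  # identical identity, now served from bay 7
```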

Gardner: So, we're gaining some speed here. We’re gaining mobility. We're able to maintain our cost efficiencies from the virtualization, because of our better management of these network issues, but don’t such technologies as soft switches pretty much accomplish the same thing?

Kendall: Soft switches can be an important part of the infrastructure you put together around virtual machines. One of the things about soft switches is that it’s really important how you use them. If you use soft switches combined with some of the upstream switches to do all this right here, then you can also add latency to an already complex network. If you use Virtual Connect, which is based upon industry-standard protocols together with a soft switch operating in a simple pass-through type of mode, then you don’t have the latency problem. You maintain the flexibility of Virtual Connect.

The other thing you need to be careful of is that some of the new soft switches out there use proprietary protocol extensions to track the movement of the virtual machine, along with its associated connection profile. These proprietary protocol extensions sometimes require upstream products that can accept them, which means new hardware, switches, and management tools. That can add a lot to the cost of upgrading an infrastructure.

Gardner: Thank you, Michael. We're now going to look at another important issue around virtualization, and that is configuration and management. This has become quite an issue in terms of complexity. Managing physical servers, when we get into large numbers, is, in itself, complex. When we add virtualization and dynamic provisioning, and look to recover cost from energy and utilization, we add yet another dimension to the complexity.

We’re going back to Shay Mowlem. We’re going to talk a little bit about this notion of data collection, management, configuration, and automation along this line. So, we'll talk about how visibility into the requirements of what’s going on in the virtualization instances, data centers, and across the infrastructure becomes critical. How are companies gaining better visibility across the virtualized data center, compared to what they were perhaps doing to the purely physical ones?

Mowlem: IT infrastructures really are becoming more complex. With the addition of virtual machines to data centers that are already leveraging other virtualization technologies in their storage area networks -- virtual LANs and so on -- all of that makes it much harder to identify where a problem exists and to fix it. That has an impact on management cost and service quality.

Proof for the business

For IT to realize the large-scale cost benefits of virtualization in their production environments, they need to prove to the business that service performance and quality are not going to be lost as they incorporate virtualized servers and storage to support the systems. We've seen that the ideal approach should include a central vantage point from which to detect, isolate, and prevent service problems across all infrastructure elements -- heterogeneous servers, both physical and virtual, network, storage, and all the subcomponents of a service.

It needs to include the ability to monitor the health of the infrastructure, but also from the perspective of the business service. In other words, be able to monitor and understand all of the infrastructure elements and how they relate to one another -- servers, network, storage -- and then also be able to monitor the health and performance of the service from the perspective of the business user.

It's sort of a bottom-up and top-down view, if you will, and this is an area in which HP Software has invested very heavily. We provide tools today that offer native discovery and dependency mapping of all infrastructure, physical and virtual, and then store that information in our central universal configuration management database (UCMDB), where we track the make-up of a business service, all of the infrastructure that supports that service, and the interdependencies that exist between the infrastructure elements, and then manage and monitor that on an ongoing basis.

We also track what has changed over time, what was implemented, and who made those changes. Then, we can leverage that information very carefully to answer important questions with regard to how a particular service has been behaving over time.

We can retrieve core metrics about performance and behavior on all layers of the virtualization stack, for example. Then, we can use this to provide very accurate and fast problem detection and isolation and deep application diagnostics.

This can be quite profound. We found, through a return on investment (ROI) model that we worked on based on data from IDC, that effective utilization of HP's Discovery and Dependency Mapping technology, with that information stored in a central UCMDB, can on average help reduce the mean time to repair outages by 76 percent, which is a massive benefit from effective consolidation of this important data.
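
At its core, the dependency mapping just described is a graph of configuration items and their relationships that can be queried for impact analysis, which is what shortens problem isolation and repair. Here is a toy illustration of that idea as a generic graph walk; it is not the UCMDB schema or API.

```python
from collections import defaultdict

# Toy CMDB: configuration items (CIs) and "depends on" edges.
depends_on = defaultdict(set)

def add_dependency(service, component):
    depends_on[service].add(component)

def impacted_by(failed_ci):
    """Walk the graph upward: which CIs depend, directly or transitively,
    on the failed infrastructure element?"""
    hit = set()
    frontier = [failed_ci]
    while frontier:
        ci = frontier.pop()
        for upstream, deps in depends_on.items():
            if ci in deps and upstream not in hit:
                hit.add(upstream)
                frontier.append(upstream)
    return hit

# Usage: a banking service runs on a VM, on a host, on a SAN volume.
add_dependency("online-banking", "vm-42")
add_dependency("vm-42", "esx-host-07")
add_dependency("esx-host-07", "san-lun-3")
print(impacted_by("esx-host-07"))  # {'vm-42', 'online-banking'}
```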

Gardner: Maybe I made a mistake that other people commonly make, which is to think of managing virtualized instances as separate and different. But, I suppose virtualization nowadays is becoming like any other system across the IT infrastructure.

Mowlem: Absolutely. It’s part of a mix of tools and capabilities that IT has that, in production environments, are ultimately there to support the business. Having an understanding of and being able to monitor all these systems, understanding their interdependencies, and managing them in an integrated way with the understanding of that business outcome, is a key part of how companies will be able to truly recognize the value that virtualization has to offer.

Gardner: Okay, I think we understand the problems around this management issue: trying to scale, and bringing virtualization into line with the way the entire data center is managed. What about the solutions? What in particular should organizations consider when approaching this total configuration issue?

Business service management

Mowlem: We offer a host of solutions that help companies manage virtualized environments end to end, but as we look at monitoring -- and essentially a configuration database tracks all of the core interdependencies of the infrastructure and its configuration settings over time -- we talk about the business service management portfolio of HP Software. This includes the Discovery and Dependency Mapping product that I talked about earlier. UCMDB is a central repository, and a number of tools allow our customers to monitor their infrastructure at the server level and the network level, but also at the service level, to ensure ongoing health and performance of their environment.

Gardner: You mentioned these ROI figures. Typically, how do organizations start down the virtualization path, and how can they then begin to recover more cost and cut their total cost by adopting some of these solutions?

Mowlem: We offer a very broad portfolio of solutions today that manage many different aspects of virtualization, from testing to ensuring that the performance of a virtualized environment in fact meets the business service level agreements (SLAs). We talked about monitoring already. We have automation as part of our portfolio to achieve efficiency in provisioning and change execution. We have a solution to manage assets, so that software licenses are tracked carefully and properly.

We also have a market-leading solution in backup and recovery with our Data Protector offering, to help customers scale their backup and recovery capabilities across their virtualized servers. What we've found in the course of our discussions is that many customers recognize that all of these are critical and important areas for them to be able to effectively incorporate virtualization into their production environments.

But, generally, there are one or two very significant pain areas. It might be the inability to monitor all of their servers -- physical and virtual -- through one single pane of glass, or it may be related to compliance enforcement, because there are so many different elements out there. So, the answer isn't always the same. We find that companies choose to start down the path of effective management through some of these initial product areas, and then expand from there.

Gardner: Well, I suppose it's never too late to begin. If you're even partially into a virtualization initiative, or maybe even deep in and starting to have problems, there are ways in which you can bring in management features at any particular point in that maturity.

Mowlem: We definitely support a very modular offering that allows people to focus on where they’re feeling the biggest pain first, and then expand from there as it makes sense to them.

Gardner: Let’s now move over to Ryan Reed at EDS. As organizations get in deeper with virtualization and as they consider on a larger scale their plans for their modernization and consolidation and overall cost efficiency of their resources, how do they approach this problem of placement? It seems that when you move towards virtualization it almost forces you to think about your data center in a more holistic and long-term and strategic perspective.

Raising questions

Ryan Reed: Right, Dana. For a lot of companies, considering large-scale virtualization and modernization projects often raises questions that help them devise a plan and a strategy around how they're going to create a virtual infrastructure and where that infrastructure is going to be located.

Some of the questions that I see are around the physical data center itself. Is the data center meeting the needs of the business? Is it designed and built for resiliency, and does it provide the greatest value to the business services?

You’ll also find that a lot of times that’s not the case nowadays for the data centers that were built 10 or 15 years ago. Business services today demand higher levels of uptime and availability. Those data centers, if they were to fail due to a power outage or some other source of failure, are no longer able to provide the uptime requirements for those types of business services. So, it’s one of the first questions that a virtual infrastructure program raises to the program manager.

Another question that often comes up is around the storage network infrastructures. Where are they located physically? Are they in the right place? Are they available at the right times? A lot of organizations may be required by legislative or regulatory requirements to keep their data within a particular state, country, national boundary, or region. A lot of the time, when people are planning for virtual server infrastructures, that comes to be a pretty prominent discussion.

Another one would be around the internal skill sets of the enterprise. Does the company or the organization have the skill set necessary in-house to do large-scale virtualization and data center modernization projects? Oftentimes, they don't, and if they don't, then what is their action? What is their remedy? How are they going to close that skills gap?

Lastly, a lot of companies, when they're doing virtualization projects, start to question whether or not all of the activities around managing the infrastructure are actually core to their business. If they're not, then maybe this is something that they don't have to be doing themselves anymore.

Taking all that into consideration helps to drive a conversation around planning and being able to create the right type of process. Oftentimes, it leads to a discussion around outsourcing. EDS, which is an HP company, does provide organizations and enterprises with full IT management and IT infrastructure management. That includes everything from implementation to ongoing management of virtual, as well as non-virtual, infrastructure environments.

The client data center -- or on-premises, as you called it, Dana -- is an option available for a lot of enterprises that have already invested heavily in their current data-center facility, as well as the infrastructure. They don't necessarily want to move it to an outsourcer-supplied data center. So, on-premises is a business model that's available and becoming common for some of the larger virtualization projects.

The traditional outsourcing model is one where enterprises realize that the data center itself is not a strategic asset to the business anymore. So, they move the infrastructure to an outsourcer data center where the services provider, the outsourcing company, can provide the best services with virtual infrastructures during the design and plan phase.

Making the most sense

This makes the most sense for these types of organizations, because you’re going to be doing a migration from physical to virtual anyway. So, you might as well take advantage of the skills that are available from the outsourcing services provider to move that to their data center, and have them apply best-in-breed practices and technology to manage that infrastructure.

Then, you also mentioned what would be considered a hybrid model, in which virtual and non-virtual infrastructure can be managed from either the client's own data center or the services provider's data center. There are various models to consider. A lot of the questions that lead into how to plan for this type of virtual infrastructure also lead into a conversation about where an outsourcer can add the most value.

Gardner: Is there anything about virtualizing your data center and more and more servers that makes outsourcing easier, or an option that some people hadn't considered in the past and perhaps should?

Reed: Sure. Outsourcers nowadays are very skilled at providing infrastructure services for virtual server environments. That would include things like profiling, analysis and planning, mapping of source servers to targets, and building a business case for understanding how it's going to impact the business in terms of ROI and total cost of ownership (TCO).

Doing the actual implementation; the ongoing management of the operating systems, both virtual and non-virtual, for guests and hosts; patching of the systems; monitoring to make sure that the systems are up and running; responding to and escalating events; and then doing things like backup and restore activities for those systems -- these are really core to an outsourcing services provider's business. That's what they do.

We don’t expect our clients to have the same level of expertise as EDS does. We’ve been doing this for 45 years, and it’s really the critical piece of what we do. So, there are many things to consider when choosing an outsourcing provider, if that’s the way to go. Benefits can range dramatically from reducing your TCO to increasing levels of availability within the infrastructure, and then also being able to expand and use the services provider, global delivery service centers that are available around the world.

Choose the right partner, and they can grow with you. As your business grows and as you expand your market presence, choosing the services provider that has the capability and capacity to deliver in the areas that you want to grow makes the most sense.

Additionally, you can take advantage of the low-cost delivery centers that the services provider has built up over the years -- service centers located in low-cost regions. EDS considers this to be the best strategy. Having resources available in low-cost countries to provide the greatest value to clients is important when it comes to selecting a good services provider.

Gardner: So, for those organizations that are looking at these various options for sourcing, how do they get started? What’s a good way to begin that cost benefit analysis?

Reed: Well, there’s information available through the eds.com website. Go there and search on "virtualization" and you’ll find the first search result that comes back that has lots of information around what to expect in terms of an engagement, as well as examples of where virtualization has been done with other organizations similar to what a lot of industries are facing out there.

You can see a comparison of like-for-like scenarios to determine whether or not an engagement would make sense, based on the case studies and success stories that are available out there as well. There are also industry tools available from our partner organizations. HP has tools available. VMware has tools available to help our clients understand where savings can come from. And, of course, EDS is also available to provide those types of services for our clients too.

Gardner: Okay. We’ve been looking at three important angles to consider when moving to virtualization: being aware at a detailed level of how the network interfaces and connections work, and moving toward a more virtualized approach to interconnects. We also looked at the management issues -- configuration not only in terms of how virtualized servers stand alone, but how they need to be managed in total, as part of the larger IT mix. And we looked at how to weigh various sourcing options in terms of cost, skills, availability of resources, energy costs, and a general track record of being competent and proven with virtualization.

I want to thank our three guests today. We’ve been joined by Michael Kendall, worldwide Virtual Connect marketing lead at HP; Shay Mowlem, strategic marketing lead for HP Software and Solutions; and Ryan Reed, product manager for EDS Server Management Services.

This is Dana Gardner, principal analyst at Interarbor Solutions. I also want to thank the sponsor of our podcast discussion today, Hewlett-Packard, for underwriting its production. Thanks for listening, and come back next time.

Attend a virtual web event from HP on July 28-30, "Technology You Need for Today's Economy." Register for the free event.

Download a pdf of this transcript.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on the key elements of successful and cost-effective virtualization that spans general implementations. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Monday, July 06, 2009

Consolidation, Modernization, and Virtualization: A Triple-Play for Long-Term Enterprise IT Cost Reduction

Transcript of a BriefingsDirect podcast on how IT departments can provide better services with greater efficiency.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on combining some major efforts in IT administration and deployment, in order to cut costs in the near term and also to put in place greater efficiencies, agility, enterprise business benefits, and long-term cost benefits.

We’re going to be talking about how consolidation, modernization, and virtualization play mutually supporting roles, alone and in combination, for enterprises looking to improve how they deliver services to their businesses. They also play a role in reducing labor and maintenance costs, and can have much larger benefits -- including producing far better server utilization rates -- that ultimately cut IT costs in total.

Here to help us dig into the relationship between a modern and consolidated approach to IT data centers and total cost, we welcome John Bennett. He’s the worldwide solution manager for Data Center Transformation Solutions at Hewlett-Packard (HP). Welcome to the show, John.

John Bennett: Thank you, very much. It's nice to be with you today.

Gardner: As I mentioned, cost is always an issue for organizations, and IT departments are among those facing a lot of pressure nowadays to justify their expenses, show improvements in cost cutting, and, at the same time, improve productivity. John, I wonder if you could help us understand this. We know well enough the cost pressures and the economic environment that we’re in, but what has changed in terms of what can be brought to this problem set from the perspective of technology and process?

Bennett: Cost, itself, is both easy and complex to deal with. It’s easy to say, "reduce costs." It’s very difficult to understand what types of costs I can reduce and what kind of savings I get from them.

When we look at reducing cost, one of the keys is to get a handle on what costs you're really looking to address and how you can address them. It turns out that many of the cost dimensions can be addressed through a common and integrated approach, building on recent advances in technology and in management and automation tools, on virtualization, and on the investments that companies like HP have been making in enhancing the energy efficiency and manageability of the servers and infrastructure that we provide to customers.

This is why, in my mind, the themes of consolidation, which people have been doing forever; modernization, very consciously making decisions to replace existing infrastructure with newer infrastructure for gains other than performance; and virtualization, which has a lot of promise in terms of driving cost out of the organization, can increase aspects like the flexibility and agility that you mentioned earlier on. It's the ability to respond to growth quickly, to respond to a competitive opportunity or threat very quickly, and the ability for IT to enable the business to be more aggressive, rather than becoming a limiting factor in the roll-out of new products or services.

Gardner: We’re certainly well aware of what’s changed in the macroeconomic climate over the last year or so, but what’s different from two or three years ago in terms of what we can bring to the table to address these general issues about cost? In particular, how can we modernize, consolidate, and get those energy benefits?

Other issues pop up

Bennett: Besides the macro factors around economics that have come into play, we’ve seen some other issues pop up in the last several years as well. One of them is an increasing focus on green -- a business decision to be green as an organization. For many IT organizations, it means really looking to reduce energy consumption and energy-related costs.

We’ve also seen in many organizations, as they move to a bladed infrastructure and to denser environments, that data-center capacity and energy constraints -- the amount of energy available to a data center -- are also inhibiting factors. It’s one of the reasons that we really advise customers to take a look at doing consolidation, modernization, and virtualization together.

As I briefly touched on earlier, this has been enhanced by a lot of the improvements in the products themselves. They are now instrumented for increasing manageability and automation. The products are integrated to provide management support not just for availability and performance, but also for energy. They're instrumented to support the automation of the environment, including the ability to turn off servers that you don’t know or care about. All of this is further helped by the advances in virtualization. A lot of people are doing virtualization.

What we’re doing as a company is focusing on the management and the automation of that environment, because we see virtualization really stressing data center and infrastructure management environments pretty substantively. In many cases, it's impacting governance of the data center.

This is why we look at them together. By combining them and taking an integrated approach, you not only avoid the issues that some other people may be experiencing, but you can also use them to address a broad set of issues and realize aspects of a data center transformation, by approaching these things in an orderly and planned way.

Gardner: We’ve talked about how energy issues are now coming to be much more prominent, cost being a critical issue. Is there anything different about the load, about the characteristic of what we’re asking data centers to do now, than perhaps 5, 10, or 15 years ago that plays into why I would want to modernize and not just look to cut cost?

Bennett: The increasing density of devices in the data-center environment -- racks and racks of servers, for example -- has both increased the demand for power to run them and, in many cases, created issues related to cooling from the heat in the environment. That’s a trend that has exposed people to risk factors related to energy that they hadn’t experienced before, when they had standalone servers or mainframes in the environment.

With virtualization, we also see increasing density and concentration of devices, because you're really separating the assets -- servers, storage, and the networking environment -- from the implications in the business services they are providing. It becomes a shared environment, and your shared environment is just more productive and more flexible if it’s one shared environment instead of 3, 4, 5, or 10 shared environments. That increases the density, and it goes back to these other factors that we talked about. That’s clearly one of the more recent trends of the last few years in many data centers.

Gardner: I see. So, where we may have had standalone hardware, siloed software applications, or mainframes, when you virtualize, you’re able to distribute the load, and therefore have a much greater ability to increase your utilization generally, rather than on a hit-or-miss basis.

Bennett: Absolutely. I don’t think I could have said it better myself.

Gardner: Tell us a little bit more about green. If we can increase utilization rates via what we’re doing with consolidation and virtualization, we also have to look at what we’re doing in total electricity consumption and what that means in terms of carbon footprint. Isn’t it possible that we could be looking at ceilings, or even regulations, on what we can do there?

Capacity is an issue

Bennett: You run into both aspects. Capacity is clearly an issue that has to be addressed, and increasing regulation and governance is as well. In the last few months, we saw the Data Center Code of Conduct emerge in Europe as a standard recommending best practices for data centers.

We see an increasing focus in countries like the UK on regulation around energy. There are predictions that that’s going to accelerate in a number of places around the world. So those become part of the environment that data center managers have to deal with, and they can have severe implications for organizations, if they are not compliant.

Gardner: Those have really gone beyond "nice to have" or a way to reduce cost to a "must have."

Bennett: In many cases, that’s very true. Also, there are organizations that had made decisions to be green, where the senior executives and board of directors have made that decision. It’s a management directive and one you have to comply with, independent of government regulations. So, they're coming at you from all sides.

Gardner: I suppose another aspect of this is when you’ve modernized, consolidated, and virtualized your data centers over time, you're further able to automate. You're reducing the amount of labor and manual processes. This strikes me as something that provides an opportunity to manage change better.

Bennett: Yes. When you move to a shared infrastructure environment, the value of that environment is enhanced the more you have standardized it. That makes it much easier not only to manage the environment with a smaller number of sysadmins, but gives you a much greater opportunity to automate the processes and procedures.

What we see is the infrastructure enabling this. As I mentioned earlier, we're making significant investments in management, business service management, and automation tools to not only integrate infrastructure management with business service management, but also to have an integrated view of physical and virtual resources with line of sight from the infrastructure and the devices all the way up into the business services being provided.

So, you really have full control, insight, and governance over everything taking place in the data center. Many of those are very new capabilities in the HP product suite, announced within the last 12 months.

Gardner: Then, being able to get better automation and standardization across my data center, I should be able to react to the business requirements more quickly. You scale up, scale down, or even shift course better than we would have done in the past.

Bennett: Yes, we use the marketing phrases "flexibility and agility" for that, but what it means is that I no longer have the infrastructure and the assets tied to specific business services and applications. If I have unexpected growth, I can support it by using resources that are not being used quite as much in the environment. It’s like having a reserve line of troops that you can throw into the fray.

If you have an opportunity and you can deploy servers and assets in the matter of hours instead of a matter of days or months, IT becomes an enabler for the business to be more responsive. You can respond to competitive threats, respond to competitive opportunities, roll out new business services much more quickly, because the processes are much quicker and much more efficient. Now, IT becomes a partner in helping the business take advantage of opportunities, rather than delaying the availability of new products and services.

Gardner: These are all very important -- the energy issue, the cost reduction upfront, the ability to be more fleet and agile, and improving the role and responsibility that IT can provide. You won’t have trouble getting people interested in solving these problems, but that brings us to the question of how to get to a solution, where we can bring these new technological innovations to bear. How do you get started? Where do you focus?

Experience is important

Bennett: How you get started and where you focus really depend on the individual customer and their organization -- their capabilities, their staff resources, and their experience. Many people are well experienced at doing consolidation projects, and they've been doing virtualization. They have staff very experienced in looking at things from a business-service perspective. For many of them, modernizing the infrastructure more aggressively than they have in the past, on top of what they've already been doing, may be the step to take.

There are certainly tools and capabilities, like the Discovery and Dependency Mapping software, to help keep an eye on assets and asset configurations, but we are seeing value in being more aggressive in modernizing infrastructure. Typically, people replace servers on a four- to five-year cycle -- some as aggressively as three years, but typically four to five.

In some of the generations of servers that we’ve released, we see 15 to 25 percent improvements from a cost perspective and an energy consumption perspective, just based on modernizing the infrastructure. So, there are cost savings that can be had by replacing older devices with newer ones.
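To make that figure concrete, here is a minimal back-of-the-envelope sketch. The 20 percent gain sits inside Bennett's 15-to-25-percent range, but the fleet size, wattage, and electricity rate are illustrative assumptions, not HP figures:

```python
# Back-of-the-envelope estimate of energy savings from a server refresh.
# The 20% efficiency gain reflects Bennett's 15-25% range; all other
# numbers are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(servers: int, watts_each: float, dollars_per_kwh: float) -> float:
    """Yearly electricity cost for a fleet of always-on servers."""
    kilowatt_hours = servers * watts_each / 1000.0 * HOURS_PER_YEAR
    return kilowatt_hours * dollars_per_kwh

old_fleet = annual_energy_cost(servers=200, watts_each=450, dollars_per_kwh=0.10)
new_fleet = old_fleet * (1 - 0.20)  # 20% improvement from the newer generation

print(f"Old fleet: ${old_fleet:,.0f} per year")
print(f"New fleet: ${new_fleet:,.0f} per year")
print(f"Savings:   ${old_fleet - new_fleet:,.0f} per year")
```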

People who have been growing through acquisitions or mergers, or for whom individual lines of business control assets on their own, may need to be a little more methodical in building up the picture of just what they have and whether they have any of what some in the industry refer to as ghost or zombie servers.

Ken Brill of the Uptime Institute, for example, figures that most people have about 15 percent of their servers not doing anything. The question is how you find out what they are.

If you're going to do consolidation, how do you find out which things are connected to which? For people in that kind of situation, the Discovery and Dependency Mapping software is a wonderful way to go. That’s available for purchase, of course, or it can be delivered from HP services.

They identify all of the assets in the environment, the applications, software they're running, and the interdependencies between them. In effect, you build up a map of the infrastructure and know what everything is doing. You can very quickly see if there are servers, for example, not doing anything.
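Discovery and Dependency Mapping is a commercial product, and nothing below describes how it actually works. But the triage idea Bennett outlines -- flag the boxes whose measured activity never rises above the noise floor -- can be sketched with a few hypothetical utilization samples and an arbitrary threshold:

```python
# Hypothetical ghost-server triage. Assumes per-server CPU-utilization
# samples (percent) gathered over some observation window; the 5% ceiling
# and the sample data are arbitrary choices for this sketch.

IDLE_CEILING_PCT = 5.0  # a server never busier than this is a candidate

def find_ghost_candidates(samples_by_server):
    """Return servers whose peak utilization stayed under the ceiling."""
    return [
        name
        for name, samples in samples_by_server.items()
        if samples and max(samples) < IDLE_CEILING_PCT
    ]

observed = {
    "web-01":   [12.0, 34.5, 28.1, 40.2],  # clearly in use
    "app-07":   [1.2, 0.8, 2.4, 1.9],      # never wakes up -- a candidate
    "batch-03": [0.0, 0.0, 88.0, 0.0],     # mostly idle but spikes -- keep it
}

for server in find_ghost_candidates(observed):
    print(f"Candidate for decommission: {server}")
```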

Gardner: I suppose from that perspective, you can say, "We're going to take a couple of spot projects where we know we are going to get a big hit in terms of our return and savings," or "Because of our medium-level solution approach, we're going to start taking out full application sets or sets of services, based on some line-of-business or geographic definition." Or, we might even go whole hog, if that’s what we're looking at -- more of a data-center modernization to the next generation. All of those seem possible.

Bennett: Our recommendations to many customers would be, first of all, if you identify assets that aren’t being used at all, just get rid of them. The cost savings are immediate. You reduce software license cost, maintenance cost, energy consumption, etc. After that, there are several approaches you can take. You can do a peer consolidation.

If I've got 10 servers doing a particular application and I can support the environment by using 3 of those servers, I get rid of 7. I can also modernize the environment, so that if I had 10 servers doing this work before, and consolidation gives me the opportunity to go to only 6 or 7, by modernizing I might be able to reduce it to 2 or 3.

On top of that, I can explore virtualization. Typically, in environments not using virtualization, server utilization rates, especially for industry standard servers, are under 10 percent. That can be driven up to 70 or 80 percent or even higher by virtualizing the workloads. Now, you can go from 10 to 3 to perhaps just 1 server doing the work. Ten to 3 to 1 is an example. In many environments, you may have hundreds of servers supporting web-based applications or email. The number of servers that can be reduced out from that can be pretty phenomenal.
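The 10-to-3-to-1 progression falls straight out of that utilization arithmetic. Here is a minimal sketch of it, assuming every server contributes equal capacity and ignoring the headroom a real capacity plan would add:

```python
# The consolidation arithmetic behind "10 to 3 to 1," as a sketch.
# Assumes all servers have equal capacity, so total useful work is
# (server count) x (average utilization); real plans add headroom.

import math

def servers_needed(current_count: int, current_util: float, target_util: float) -> int:
    """Servers required to carry the same workload at a higher utilization."""
    workload = current_count * current_util  # total work, in server-equivalents
    return math.ceil(workload / target_util)

print(servers_needed(10, 0.10, 0.70))  # -> 2: ten barely-used boxes collapse to two
print(servers_needed(10, 0.05, 0.70))  # -> 1: at 5% starting utilization, one suffices
```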

Gardner: And, all the while, we're reducing physical footprint or the amount of labor required, and we're cutting the energy footprint.

Laying the groundwork

Bennett: All of the above -- and also laying the groundwork for a next-generation data center. We call it an adaptive infrastructure, but the idea is to have a shared-resource environment that is virtualized, automated, and capable of shifting assets and putting them where they’re needed, when they’re needed, and being able to support growth dynamically and pretty seamlessly.

If you take an integrated approach to this by looking at consolidation, modernization, and virtualization together, you actually lay the foundation for that adaptive infrastructure. That’s a real long-term benefit that can come on top of all of the short- and near-term benefits that come with cost reductions and energy savings.

Gardner: We’ve certainly heard about the energy and utilization benefits of moving to virtualization -- reducing the number of actual servers and, therefore, the number of people. Do you have examples of organizations that have gone after these benefits, and what sort of experience have they had?

Bennett: We have a lot of examples with people looking to save money. What’s more interesting is to look at a couple of examples of people who have had other objectives, and how they realized those objectives through consolidation, modernization, and virtualization.

An example is a company called MICROS-Fidelio. They provide integrated IT solutions for the hotel industry. They were looking to improve their competitive advantage, and very specifically to accommodate business growth, even though they had severe limitations in terms of data-center space and power capacity. They really didn't want to be investing money in either of those two areas.

They standardized and virtualized their environment using HP blade systems and HP Insight Dynamics. In terms of business benefits, they saw a 45 percent reduction in missed service-level agreement (SLA) objectives, which meant they reduced the penalties they were paying to their customers by being more predictive in providing better quality of service.

Gardner: In fact, immediate payback.

Bennett: Immediate payback, and not just in terms of cost savings, but in terms of brand reputation. They also had a 50 percent annual growth rate in the data center, which was supported with just a 25 percent increase in IT staff.

They didn’t provide us an absolute dollar figure, but they saved “six figures a year” in personnel cost. That cost was avoided by being able to do rolling updates to the environment, instead of static updates. Then, they had a threefold faster time in deploying new servers. Again, it was a pretty comprehensive set of benefits -- not just in cost savings, but in terms of agility and flexibility, and in dealing with space and energy constraints -- from taking a systematic and integrated approach to consolidation, modernization, and virtualization.

Gardner: Before we wrap up, John, I’m really fascinated by this notion of additional automation -- the more modern and virtualized the systems are, the more ability you have to bring in management capabilities that allow that automation to almost take off on a hockey-stick kind of curve. Not that we want to take people out of the equation, but we want those people to be well utilized themselves. So, what does the future have in store for us in terms of moving the needle even further?

Tight control

Bennett: You’ll see improvements in a number of areas. Clearly, at the infrastructure level, we continue to make sure we’re doing everything possible to ensure that the assets themselves are instrumented to be controlled as tightly or as loosely as an organization would like to.

We’re making a lot of investments in ensuring that the physical and virtual assets are managed in a consistent and integrated way, because from a business service’s perspective, the business service doesn’t care where it’s running. But, if you have issues in terms of quality of service, you need to make sure you can track it down through the environment and for that, an integrated view of both is necessary.

Third, we see an increasing focus on automating standard procedures in business processes and in business service management and automation. That has to stretch from the business service down to infrastructure management, down into the virtual resources, and down into the physical resources. So, it's an ongoing investment in integrating those capabilities, extending the capabilities of the software portfolios, and making sure that control extends down into the depths of the hardware.

We also continue to make ongoing investments in improving the energy efficiency of the servers, the storage, and the networking devices in the data center. Our first patents in this go back 11 or 12 years now, and with each new generation of blade system, for example, we continue to see pretty substantive improvements in energy consumption and energy demands.

Gardner: Well, great. We’ve been discussing how organizations should consider consolidation, modernization, and virtualization as a tag team, or a combo team. The payoffs -- short-, medium-, and long-term -- from these different approaches are rather substantial. They're both immediate and have those longer-term strategic benefits baked in.

We’ve been discussing this with John Bennett. He is a worldwide solution manager for Data Center Transformation Solutions at Hewlett-Packard. I truly appreciate your insights, John.

Bennett: Well, thank you very much. I encourage all of those listening to this to take a look at what they can do in their own environments. The potential is pretty significant.

Gardner: Well, great. I also want to thank the sponsor of this podcast, Hewlett-Packard, for underwriting its production. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks for listening, and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on how IT departments can provide better services with greater efficiency. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.