Tuesday, July 21, 2009

Seeking to Master Information Explosion: Enterprises Gain Better Ways to Discover and Manage their Information

Transcript of a BriefingsDirect podcast on new strategies and tools for dealing with the burgeoning problem of information overload.

Listen to the podcast. Download the podcast. Download the transcript. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Join a free HP Solutions Virtual Event on July 28 on four main IT themes. Learn more. Register.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how enterprises can better manage the explosion of information around them. Businesses of all stripes need better means of access, governance, and data lifecycle best practices, given the vast ocean of new information coming from many different directions. By getting a better handle on the information explosion, analysts and users gain clarity about what is really going on within their businesses and, especially these days, across the dynamic market environment.

The immediate solution approach requires capturing, storing, managing, finding, and using information better. We’ve all seen a precipitous drop in the cost of storage and a dramatic rise in the amount of data coming from all kinds of devices and across more kinds of business processes, from sensors to social media.

To help us better understand how to best manage and leverage information, even as it’s exploding around us, we’re joined by Suzanne Prince, worldwide director of information solutions marketing at Hewlett-Packard (HP). Welcome, Suzanne.

Suzanne Prince: Thanks, Dana.

Gardner: As I mentioned, things have changed rather dramatically in the past several years, in terms of the amount of information, the complexity, and the sources of that information. From your perspective, how has the world changed for the worse when it comes to managing information?

Prince: Well, it’s certainly a change for the worse. The flood is getting bigger and bigger. You’ve touched on a couple of things already about the volume and the complexity, and it’s not getting any better. It’s getting worse by the minute, in terms of new types of information. But, more importantly, we’re noticing major shifts going on in the business environment, which are partially driven by the economy, but they were already happening anyway.

We’re moving more into the collaboration age, with flatter organizations. And the way information is consumed is changing rapidly. We live in the always-on age, and we all expect and want instant access, instant gratification for whatever we want. It’s just compounding the problems.

Gardner: I’m afraid there's a price to be paid if one loses control over this burgeoning level and complexity of information.

Prince: Absolutely. There are horror stories that we all regularly read in the press, ranging from massive compliance and eDiscovery fines to major losses of revenue.

I’ll give you an example of an oil company that was hit by Hurricane Katrina in the Gulf of Mexico. Their drilling rigs were hit and damaged severely. They had to rebuild them and they were ready to start pumping, but they had to regenerate the paperwork, because the environmental compliance documentation was actually on paper.

Guess what happened in the storm -- it got lost. It took them two weeks to regenerate that documentation and, in that time, they lost $200 million worth of revenue. So, there are massive numbers associated with this risk around information.

Gardner: We’re talking about not just information that’s originating in a digital format, but information that originates in a number of different formats across a number of different modalities, from rich media to just plain text. That has to be brought into a manageable digital environment.

Information is life

Prince: Absolutely. You often hear people saying that information is life -- it’s the lifeblood of an organization. But, in reality, that analogy breaks down pretty quickly, because it does not run smoothly through veins. It’s sitting in little pockets everywhere, whether it’s the paper files I just talked about that get lost, on your or my memory sticks, on our laptops, or in the data center.

Gardner: We’ve heard a lot about data management and data mining. That tends to focus on structured data, but I suppose we need to include other sorts and types of information.

Prince: Yes. The latest analyst tracker reports -- showing what type of storage is being bought and used -- reveal that the growth in unstructured content is double the growth that’s going on in the structured world. It makes sense, if you think about it, because for the longest time now, IT has really focused on the structured side of data, stuff that’s in databases. But, with the growth of content that was just mentioned -- whether it's videos, tweets, or whatever -- we’re seeing a massive uptick in the problems around content storage.

Gardner: While we’re dealing with a hockey-stick curve on volume, I suppose that the amount of time that we have to react to markets is shrinking rapidly. We’ve had an economic storm, and folks have had to adjust, perhaps cutting 30-40 percent of their businesses as quickly as possible. So, in order to react to environments that are themselves changing, we can’t wait for a batch report based on information from 3-10 weeks ago.

Prince: No. That comes back to what I said previously about instant gratification. In reality, it’s a necessity. Where do I shed costs? Where can I cut and still not cut into the meat of my company? More importantly, it’s all about where are my best customers? How do I focus my sales energy on my best customers? As we all know, it costs more to get a new customer than it does to retain an old one.

Gardner: Also compounding the complexity nowadays, we’re hearing quite a bit about cloud computing. One of the promises of the vision around cloud computing is being able to share certain data with certain applications, certain people, and certain processes, but not others. So, we need to start managing how we allow access to data at a much more granular level.

Prince: The whole category of information governance really comes into play when you start talking about cloud computing, because we’ve already talked about the fact that we’ve got disparate sources, pockets of information throughout an organization. That’s already there now. Now, you open it up with cloud and you’ve got even more. There are quality issues, security issues, and data integration issues, because you most likely want to pull information from your cloud applications or services and integrate that within something like a customer relationship management (CRM) system to be able to pull business intelligence (BI) out.
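To make that integration point concrete, here is a minimal sketch of pulling records from a cloud service into a local store where BI tools can reach them. The endpoint, schema, and field names are all hypothetical, not any particular vendor's API.

```python
import json
import sqlite3
import urllib.request

# Hypothetical cloud endpoint; a real integration would use the vendor's API.
CLOUD_CONTACTS_URL = "https://crm.example.com/api/contacts"

def sync_cloud_to_warehouse(db_path):
    """Pull records from a cloud service and upsert them into a local store,
    so BI tools can query cloud and on-premises data side by side."""
    with urllib.request.urlopen(CLOUD_CONTACTS_URL) as resp:
        contacts = json.load(resp)  # e.g. [{"id": ..., "name": ..., "revenue": ...}]
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS contacts
                   (id TEXT PRIMARY KEY, name TEXT, revenue REAL)""")
    con.executemany(
        "INSERT OR REPLACE INTO contacts VALUES (:id, :name, :revenue)",
        contacts)
    con.commit()
    con.close()
```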

Gardner: I just spoke with a number of CIOs last week at an HP conference, and their modus operandi these days is that they need to show a return on whatever new investments they make in a matter of one or two months. They don’t have a 12- or 18-month window for return on their activities. What are the business paybacks, when one starts to do data mining, management, cleansing, storing, the whole process? When they do it right, what do they get?

Prince: We’ve seen very good returns on investment (ROIs) ranging from 230 percent to 350 percent. We’ve seen major net benefits in the millions. And, in today’s world, the most important thing is to get the cost out and use that cost to invest for growth. There are places you can look, where you can get cost out quite quickly.

I already mentioned one of them, which is around the costs of eDiscovery. It may not be provisioned yet in the IT budget, but may be in your legal department’s budget. They are spending millions in responding to court cases. If you put an eDiscovery solution in, you could get that cost back and then reallocate that to other projects. This is one example. Storage virtualization is another one. Also outsourcing -- look into what you could outsource and turn capital expenditure into operating expenditure.

Gardner: I suppose too that productivity, when lost, comes with a high penalty. So, getting accurate timely information in the hands of your decision makers perhaps has a rapid ROI as well, but it’s not quite as easy to measure.

Right information at the right time

Prince: No, it’s not as easy to measure, but here’s something quite interesting. We did a survey in February of this year in several countries around the world. It covered both IT and line-of-business decision makers. The top business priority for the people we talked to, way above everything else, was having the right information at the right time, when needed. It was above reducing operating costs, and even above reducing IT costs. So what that tells us is how business managers see this need for information as business critical.

Gardner: I suppose another rationale for making investments, even in a tough budgetary environment, is regulatory compliance. One really doesn’t have a choice.

Prince: You don’t have a choice. You have to do it. The main thing is how can you do it for least cost and also make sure that you’re covering your risk.

Gardner: Well, we’ve had an opportunity to look at the problem set. What sorts of solutions can organizations begin to anticipate and put into place?

Prince: I touched on a few, when I was talking about some of the areas to look for cost savings. At the infrastructure layer, we’ve talked about storage. You can definitely optimize your storage -- virtualization, deduplication. You really need to look at deleting what I would call "nuisance information," so that you’re not storing things you don’t need to. In other words, if I’m emailing you to see if you’d like to come have a cup of coffee, that doesn’t need to be stored. So, optimizing storage and optimizing your data center infrastructure.
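To make the deduplication idea concrete, here is a minimal sketch of content-addressed storage, in which identical chunks are kept only once and retrieved by hash. It is purely illustrative, not any particular HP storage product's implementation.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical chunks are stored only once."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}  # SHA-256 digest -> chunk bytes

    def write(self, data):
        """Store data; return the list of chunk digests (the 'recipe')."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # duplicates cost nothing
            recipe.append(digest)
        return recipe

    def read(self, recipe):
        """Reassemble the original data from its chunk digests."""
        return b"".join(self.chunks[d] for d in recipe)

store = DedupStore()
r1 = store.write(b"same attachment" * 1000)
r2 = store.write(b"same attachment" * 1000)  # second copy adds no new chunks
assert store.read(r1) == store.read(r2)
```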

Also, we talked about the pockets of information everywhere.

Another area to look at is content repository consolidation, or data mart consolidation. I’m talking about consolidating the content and data stores.

As an example, a pharmaceutical company that we know of has over 39 different content management solutions. In this situation, a) how do you get an enterprise view of what’s going on, and b) what is the cost? So, at the infrastructure layer, it's definitely around consolidating, standardizing, and automating.

Then, at the governance layer, you need to look at data integration. You need to have a quality plan. You need to have a governance plan that brings together business and IT. This is not just an IT problem, it’s a business problem and all parties need to be at the table. You’re going to need to have your compliance officers, your legal people, and your records manager involved.

One of the most important things we believe is that IT needs to deliver information as a business-ready service. You need to be able to hide the complexity of all of that plumbing that I was talking about with those 39 different applications. You need to be able to hide that from your end users. They don’t care where information came from. They just want what they want in the format that they want it in, which is usually an Office application, because that’s what they’re most used to. You’ve got to hide the complexity underneath by delivering that information as a service.
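As a rough illustration of delivering information as a service, the sketch below puts a single query facade in front of several dissimilar repositories, so callers never see the plumbing. All class names, repositories, and items here are hypothetical.

```python
class ContentRepo:
    """Stand-in for one of many back-end content stores (hypothetical)."""
    def __init__(self, name, items):
        self.name, self.items = name, items
    def search(self, term):
        return [i for i in self.items if term.lower() in i["title"].lower()]

class InformationService:
    """Facade that federates queries across disparate repositories,
    hiding the underlying plumbing from end users."""
    def __init__(self, repositories):
        self.repositories = repositories
    def find(self, term):
        results = []
        for repo in self.repositories:
            for item in repo.search(term):
                # Normalize to one business-ready shape regardless of source
                results.append({"title": item["title"], "source": repo.name,
                                "uri": item["uri"]})
        return results

docs = ContentRepo("dms", [{"title": "Drilling permit", "uri": "dms://42"}])
wiki = ContentRepo("wiki", [{"title": "Permit process", "uri": "wiki://7"}])
svc = InformationService([docs, wiki])
print(svc.find("permit"))  # one result set, two very different sources
```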

Gardner: It sounds like an integration problem as well, given that we’re not going to take all these different types of information and data and put them into a single repository. It sounds as if we’re going to leave it where it is natively, but extract some values and some indexing and gain the ability to access it rather rapidly.

Prince: Yes, because business users, when they want things, want them quickly or they do it themselves. We all do it. Each one of us does it. "Oh, let’s get some spreadsheet going" or whatever. We will never be in a place where we have everything in one bucket. So, it’s always going to be federated. It’s always going to be a data integration issue. As I said, we really need to shield the end users from all of that and give them an easy-to-use interface at the top end.

Gardner: Are there any standards that have jumped out in recent years that seem more valuable in solving this problem than others?

No single standard

Prince: No, not really. There are a lot of people who keep taking runs at it. There are the groups looking at it. There are industry groups like ARMA looking at the records management. AIIM is looking at the information content management. But, there is not any one particular standard that’s coming out above the others. I would recommend, because of the complexity underneath and the fact that you will always have a heterogeneous environment, open standards are important, so that you can do more of a plug-and-play game.

Gardner: It seems that what we were doing with information in some ways is mimicking what we have done with applications around integration and consolidation. Are there means that we have already employed in IT that can be now reused or applied to this information explosion in terms of infrastructure, service orientation, enterprise service buses, or policy engines? How does this information chore align with some of the other IT activity?

Prince: It sort of lines up. You touched on something there about the applications. What you said is exactly true. People are now looking at information as the issue. Before they would look at the applications as the issue. Now, there's the realization that, when we talk about IT, there is an "I" there that says "Information." In reality, the work product of IT is information. It’s not applications. Applications are what move it around, but, at the end of the day, information is what is produced for the business by IT.

Gardner: Years ago, when we had one mainframe that had several applications, all drawing on the same data, it wasn’t the same issue it is today, where the data is essentially divorced from the application.

Prince: Yes, and you mentioned it before. It’s going to get even more so with cloud. It’s going to get even more divorced.

Gardner: From HP’s perspective, what do you have to bring to the table from a methods, product, process, and people perspective? I'm getting the impression that this has to be done in totality. How do you get started? What do you do?

Prince: There are two questions there. From an HP perspective, as you said, we bring the total package from our expertise and experience, which is vital in all of this. One of the main things is that you need people who have done it before. They know the tricks, have maturity models and best practices in their back pockets, and bring those out.

We've definitely got the expertise and the flexible sourcing, so that we can help reduce the total cost of ownership and move expenditure around. We've got that side of the fence and we've obviously got the adaptive infrastructure. We already talked about the data warehouse consolidation. We've got services around governance. So, we've got the whole stack. But, you also asked where to start, and the answer is wherever the customer needs to start.

Gardner: It's that big of a problem?

Increasing lawsuits

Prince: Yes, it is that big, and it’s going to depend. If I'm a manufacturing company, I might be getting a lot of lawsuits, because the number of lawsuits has gone sky high, since people are trying to get money out of enterprises any way they can. So, look for where your cost is, get that cost out, and then, as I said before, use that to fund innovation, which is where growth comes from. It's all about how you transform your company by using information.

Gardner: So, you identify the tactical cost centers, and that gives you the business rationale and opportunity to invest perhaps at a strategic level along the way, employing governance as well?

Prince: It’s like any other large project. You need to get senior executive commitment and sponsorship -- and I mean real commitment. I mean that they are involved. It’s also the old adage of "how do you eat an elephant?" You eat an elephant in small chunks. In other words, you have a strategic plan and you know where you are going, but you tackle it in tactical projects that return business benefits. And then, IT needs to be very visible in communicating the benefits they are making in each of those steps, so that it reinforces the re-investment cycle.

Gardner: Something you mentioned earlier that caught my attention was the new options around sourcing. Whether it's on-premises, modernized data center, on-premises cloud-like or grid-like or utility types of resource pools, moving towards colocation, outsourcing and even a third-party cloud provider, how does that spectrum of sourcing come into play on a solutions level for information explosion?

Prince: Again, it goes back to the strategies that we were talking about. There needs to be an underpinning strategy, and people need to look at the business value of information.

There is some information that you will never want outsourced. You will always want it close at hand -- the CEO’s numbers that he is monitoring the business with. They're under lock and key in his office. It’s core business-value information. There are others that you can move out. So, it’s going to involve the spectrum of looking at the business value, the security, and the data integration needs, assessing all of that, and then making your decisions.

Gardner: Are there some examples we can look to and get a track record, an approach, and learn some lessons along the way? After we have a sense of what people have done, what kind of success rates do they tend to enjoy?

Prince: Because it’s such a broad topic, it’s hard to hone in on any one thing, but I will give you an example of document processing outsourcing. It’s just an example. With the acquisition of EDS, we offer a service where we will automate the mailroom. So, when the mail comes into the mailroom, it gets digitized and then sent to the appropriate application or user. If it’s a customer complaint, it will go to the complaints department. If it’s a sales request, it will get sent to the call center.

That’s a totally outsourced environment. What all of our customers are seeing is a) reduction in cost, and b) an increase in efficiency, because that paper comes in and, once digitized, moves around as a digital item.

Gardner: We perhaps wouldn’t name names, but have you encountered situations where certain companies, in fact, found themselves at a significant competitive deficit as a result of not doing the right thing around information?

Lack of information

Prince: Well, I can give you one. Actually, it’s in the public domain. So, I can name names. New Century. They were the first sub-prime mortgage company to go under in the US, and it’s publicly documented.

The bankruptcy examiner actually wrote in his report that one of the major reasons they crashed was the lack of information at the management level. In fact, they were running their business for the longest time on Excel spreadsheets, which were not being transmitted to management. So, they were not aware of the risks that they were actually exposed to.

Gardner: We’ve certainly seen quite clear indicators that risk wasn’t always being measured properly across a number of different industries over the past several years. I suppose we would have to attribute that not only to a process, but to simply not knowing what’s going on within their systems.

Prince: Yes. I'll give you another public domain example of something from a completely different angle -- a European police database. They have just admitted -- in fact, I think it went public in February -- that they had 83 percent errors in their database. As a result of that, over a million people either lost their jobs or were fired because they were wrongly categorized as being criminals.

You have absolutely catastrophic events, if you don’t look after your quality and if you don’t have governance programs in place.

Gardner: I want to hear more about how we get started in terms of approaching a problem, but I also understand that we should have some hope that new technologies, approaches, and processes are coming out. Has there been anything at the labs level or the R&D level, where investments are being made that offer some new opportunities in terms of some of the problems and solution tension that we have been discussing?

Prince: In HP Labs, we have eight major focus areas, and I would categorize six of them as being focused on information -- the next set of technology challenges. It ranges all the way from content transformation, which is the complete convergence of the physical and digital information, to having intelligent information infrastructure. So, it’s the whole gamut. But, six out of eight of our key projects are all based on information, information processing, and information management.

I'll give you an example of one that’s in beta at the moment. It’s Taxonom, which is an information-as-a-service (IaaS) taxonomy builder. One thing that is really important, especially in the content world, is the classification of content. If you don’t classify it, you can’t find it. We are in beta at the moment, but you are going to see a lot more energy around these types of solutions.
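To illustrate why classification matters for findability, here is a toy keyword-driven classifier. It is a hypothetical sketch only; the actual Taxonom service and its API are not described in this discussion.

```python
# Hypothetical taxonomy-based content classification -- not the actual
# Taxonom service, whose interface is not documented here.
TAXONOMY = {
    "finance":    {"invoice", "payment", "budget"},
    "compliance": {"audit", "regulation", "retention"},
    "hr":         {"hiring", "payroll", "benefits"},
}

def classify(text):
    """Tag a document with every taxonomy node whose keywords it mentions."""
    words = set(text.lower().split())
    return [node for node, keywords in TAXONOMY.items() if words & keywords]

print(classify("Q3 budget and invoice approval audit"))
# ['finance', 'compliance'] -- classified content can now be found by node
```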

Gardner: So the majority of R&D money, at least at HP, is now focused on this information explosion problem set.

Prince: Yes, yes, absolutely.

Gardner: Interesting. Well, some folks may be interested in getting some more detailed information. They perhaps have some easily identified pain points and they want to drill down on that tactical level, consider some of the other strategic approaches, and look to some of those benefits and risk reduction. Where can they go to get started?

Prince: The first one to call is your HP account representative. So, talk to them and start exploring how we can help you solve the issues in your company. If you want to just generally browse, go to hp.com. I'd also strongly recommend a sub page -- hp.com/go/imhub.

Gardner: Very good. Well, we were discussing this burgeoning problem around information explosion, along with some of the risks and penalties that unfortunately many folks suffer and some of the paybacks for those who start to get a handle on this problem.

We've also looked at some examples of winners and, unfortunately, losers and we have found some early ways to start in on this solutions road map. I want to thank our guest today. We have been talking with Suzanne Prince, worldwide director of information solutions marketing at HP. Thank you, Suzanne.

Prince: Thanks, Dana. It was a pleasure.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to BriefingsDirect. Thanks and come back next time.

Listen to the podcast. Download the podcast. Download the transcript. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Join a free HP Solutions Virtual Event on July 28 on four main IT themes. Learn more. Register.

Transcript of a BriefingsDirect podcast on new strategies and tools for dealing with the burgeoning problem of information overload. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Wednesday, July 15, 2009

Panda's SaaS-Based PC Security Manages Client Risks, Adds Efficiency for SMBs and Providers

Transcript of a BriefingsDirect podcast on security as a service and cloud-based anti-virus protection and business models.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com.

Download the transcript. Learn more. Sponsor: Panda Security.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on automating and improving how PC security can be delivered as a service. We'll discuss how cloud-based anti-virus and security protection services are on the rise, and how small to medium-sized businesses (SMBs) can find great value in the software-as-a-service (SaaS) approach to managing PC support.

We'll also examine how the use of Internet-delivered security provides a strong business opportunity for resellers and channel providers to the businesses trying to protect all of their PCs, regardless of location.

Recent announcements by Panda Security for cloud-based PC anti-virus tools, as well as a Managed Office Protection solution, highlight how "security as a service" is growing in importance and efficiency.

Here to help us better understand how cloud-delivered security tools can improve how PCs are protected across the spectrum of end users, businesses, resellers, and managed-service providers, we're joined by Phil Wainewright, independent analyst, director of Procullux Ventures, and a ZDNet SaaS blogger. Welcome back to the show, Phil.

Phil Wainewright: It's great to be here, Dana.

Gardner: We're also joined by Josu Franco, director of the Business Customer Unit at Panda Security. Welcome to the show, Josu.

Josu Franco: Hello, Dana. Nice to be here.

Gardner: Let's start, Josu, with looking at the big picture. The general state of PC security, the SaaS model, and the dire economy are, for many organizations, conspiring to make a cloud-based solution more appropriate, perhaps now more than ever. Tell us why a cloud-based solution approach to PC security is a timely approach to this problem.

Franco: There are two basic problems that we're trying to solve here, problems which have increased lately. One is the level of cyber crime. There are lots and lots of new attacks coming out every day. We're seeing more and more malware come into our labs. On any given day, we're seeing approximately 30,000 new malware samples that we didn't know about the day before. That's one of the problems.

The second problem that we're trying to solve for companies is the complexity of managing security. You have systems with more mobility. You have more vectors for attack -- in other words, ways in which a system can be infected. If you combine that with the use of more and more devices on the network, it becomes very difficult for administrators to really be on top of the security mechanisms they need to watch.

In order to address the first problem, the levels of cyber crime, we believe that the best approach that we, as an industry, need to take is an approach that is sustainable over time. We need to be able to address these rising levels of malware in the future. We found the best approach is to move processing power into the cloud. In other words, we need to be able to process more and more malware automatically in our labs. That's the part of cloud computing that we're doing.

In order to address the second problem, we believe that the best approach for most companies is via management solutions that are easier to administer, more convenient, and less costly for the administrators and for the companies.

Centralized approach

Gardner: Now, Phil, we've seen this approach of moving out toward the Web for services -- the more centralized approach to a single instance of an application, the ability to manage complexity better through a centralized cloud-based approach across other applications. It seems like a natural evolution to have PC security now move to a SaaS model. Does that make sense from your observations?

Wainewright: It certainly does. To be honest, I've never really understood why people wanted to tackle Web-based malware with an on-premises model, because it just doesn't make any sense at all.

The attacks are coming from the Web. The intelligence about the attacks obviously needs to be centralized in the Web. It needs to be gathering information about what's happening to clients and to instances all around the Web, and across the globe these days. To have some kind of batch process, whereby the malware protection on your PC is something that gets updated every week or even every day, is just not fast enough, because the malware attacks are going to take advantage of those times when your protection is not up to date.

Really making sure that the protection is up to date with the latest intelligence and is able to react quickly to new threats as they appear means that you've got to have that managed in the center, and the central management has got to be able to update the PCs and other devices around the edge, as soon as they've got new information.

Gardner: So, the architectural approach of moving more back to the cloud, where it probably belongs, at least from an architectural and a timeliness or real-time reaction perspective, makes great sense. But, in doing this, we're also offloading a tremendous burden from the client: large agents, heavy demands on the client's processing, the need to move large files around, drag on the networks, and the labor of moving around the organization and physically getting to these machines. It seems almost blatantly obvious that we need to change this model. Do you agree, Josu?

Franco: I do. One point that I want to make, though, is that when we refer to SaaS, we use the term to refer to the management console of the security solutions. So, SaaS for us is an interface for the administrator -- an interface obviously based on the Web.

When we refer to cloud computing, it refers to our capacity to process larger and larger volumes of malware automatically, so that our users are going to be better protected. Ideally, cloud computing and SaaS should go together, but that's going to take a little bit of time, although, in our case at least, all of our solutions align with those two concepts. We've been moving toward that. The latest announcements that we've made about this product for consumers certainly go in that direction.

I just want to make clear that SaaS for me is one thing. Cloud computing is a different thing. They need to work together, but as a concept we should not confuse the terms.

Wainewright: That's very important, Dana. One of the key things that people misunderstand about notions of cloud computing and SaaS is this idea that everything gets sucked up into the network and you don't do anything on the client anymore.

That's actually a rather primitive way of looking at the SaaS and cloud spectrum, because the client itself is part of the cloud. It's a device that interacts with other peers in the Web environment, and it's got processing power and local resources that you need to take advantage of.

The key thing is striking the right balance between what you do on the client and what you do in the cloud, and also being cognizant of where people are at in terms of their overall installed infrastructure and what works best in terms of what they've got at the moment and what their roadmap is for future migration.

Separating SaaS and cloud

Gardner: I see. So, we do need to separate SaaS and cloud. We need to recognize that this is a balance and not necessarily an all-or-nothing approach -- neither all-cloud nor all-client. This seems to fit particularly well into the demands of an SMB, a distributed business, or perhaps even a multi-level marketing (MLM) company, where there are people working at home, on the road, in remote offices, and it's very difficult for the administrators or the managed providers or resellers to get at these machines. Moving more of that balance towards the cloud is our architectural goal.

Let's move to the actual technical solution here. Josu, you described some new products. Clearly, there's still an agent involved, coming down to the PC. I wonder if you could describe the two big announcements you've had, one around this consumer security cloud service, and the second around your Managed Office Protection solution.

Franco: The announcement that we've made about Cloud Antivirus is a very important announcement for us, because we've been working on this for a couple of years now, and it involves rebuilding the endpoint agent from scratch.

We saw the opportunity, or, I would say, the necessity of building a much lighter agent, much faster than previous agents, and, very importantly, an agent that is able to leverage the cloud computing capacity that we have, which we call "Collective Intelligence," to process malware automatically.

As I said before, this aligns with our technology vision, which is basically these three ideas: cloud computing, or collective intelligence as we call it, regarding the capacity to process malware; SaaS as the approach that we want to take for managing our security solutions; and third, nano-architecture as the new endpoint architecture, on which we want to base all of our endpoint-based solutions.

So, Cloud Antivirus is a very tiny, very fast agent that sits on the endpoint and protects you with some level of local intelligence. I want to stress the fact that we don't see the agents disappearing anytime soon to protect the endpoints. We believe that the more intelligence that we can pack into the agent, the better, but always respecting the needs of consumers -- that is, to be very fast, to be very light, to be very transparent to them.

This works by connecting with our infrastructure and asking for file determinations, when the local agent doesn't know about a particular file that it needs to inspect.
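A hedged sketch of the flow Franco describes: the agent consults its local intelligence first and falls back to a cloud lookup keyed on a file hash. The endpoint URL and JSON shape are assumptions for illustration, not the real Collective Intelligence protocol.

```python
import hashlib
import json
import urllib.request

# Hypothetical endpoint -- the real Collective Intelligence protocol is not
# documented in this discussion.
CLOUD_LOOKUP_URL = "https://cloud.example.com/determinations"

LOCAL_KNOWN = {}  # sha256 digest -> "clean" | "malware" (the agent's cache)

def determine(path):
    """Return a verdict for a file: local knowledge first, cloud second."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest in LOCAL_KNOWN:           # fast path: local intelligence
        return LOCAL_KNOWN[digest]
    req = urllib.request.Request(       # slow path: ask the cloud
        CLOUD_LOOKUP_URL,
        data=json.dumps({"sha256": digest}).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        verdict = json.load(resp)["verdict"]
    LOCAL_KNOWN[digest] = verdict       # cache so repeat scans stay local
    return verdict
```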

The second announcement is more than an announcement. Panda Managed Office Protection is a solution that we've been selling for some time now, and it's working very well. It works by having this endpoint agent installed locally on every desktop or laptop PC. Once you've downloaded this agent, which works transparently for the end user, all the management takes place via SaaS.

It's a management console that's hosted on our infrastructure, in which any admin, regardless of where they are, can manage any number of computers, regardless of where they are located. This works by having every agent talk to this infrastructure via the Internet, and to other agents that might be installed on the same network, distributing updates or other types of policies.
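The peer-distribution idea might look something like the sketch below: an agent tries neighbors on the same network for a cached update before paying the Internet round trip to the vendor's infrastructure. The URLs, port, and file name are invented for illustration.

```python
import urllib.request

# Hypothetical URLs; the actual agent protocol is not described here.
CLOUD_UPDATE_URL = "https://updates.example.com/signatures.db"

def fetch_update(lan_peers):
    """Prefer a copy already held by a peer agent on the same network,
    so only one machine per site pays the Internet round trip."""
    for peer in lan_peers:
        try:
            url = "http://%s:8530/signatures.db" % peer
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.read()
        except OSError:
            continue  # peer offline or has no cached copy; try the next one
    with urllib.request.urlopen(CLOUD_UPDATE_URL) as resp:
        return resp.read()
```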

Gardner: Now, an interesting and innovative approach here is that you've made the Cloud Antivirus agent free to consumers, which should allow them to get protection for virtually nothing. But in doing so, you've also grown the population of agents from which you can gather instances of problems. The agent immediately sends those back to your central cloud processing, which can then create the fix and deliver it back out. Is that oversimplifying it?

Staying better protected

Franco: That's a very true statement. We're not the first ones giving away a security agent for free. There are some other companies that I think are using the Freemium model. We've just released this very first version of Cloud Antivirus. We're distributing it for free with the idea that first we want people to know about it. We want people to use it, but very importantly, the more people that are using it, the better protected they're all going to be. As you say, we're going to be gathering intelligence about the malware that's hitting the streets and we're going to able to process that faster and to protect all those users in real-time.

Gardner: Phil, this strikes me as Pandora opening the box. I can't imagine us going back meaningfully in the marketplace to the older methods in architecture for security. Do you agree with me that this is a compelling shift in the market?

Wainewright: It is, obviously. We're talking about network scale here. The malware providers are already using network scale to great effect, particularly in the use of these zombie elements of malware that effectively lurk on devices around the Web, and are called into action to coordinate attacks.

You've got these malware providers using the collective intelligence of the Web, and if the good guys don't use the same arsenal, then they're just going to be left behind.

I think the other thing that’s great about this Freemium model is that, even though the users aren't paying anything for the software, in effect they're giving something back, because the intelligence that's being collected is making the potential protection stronger. So, it's a great demonstration of how you can derive value even from something that is actually distributed for free.

Gardner: Sort of all for one, one for all?

Wainewright: Yes, that's right.

Gardner: So, if this works well for security, it strikes me that this also makes a great deal of sense for remediation, general support, patches, upgrades, or managing custom applications. It certainly seems to me that crossing the Rubicon, if you will, into security from a cloud perspective will open up an opportunity for doing much, much more across the general total cost of ownership equation for PCs. Is that in your future? Do you subscribe to that vision, Josu?

Franco: Yes, I do. First, we've been a specialized player in the anti-malware business, but I certainly do see the opportunity to do more things. Once you have an agent installed on the endpoint, you can use the same management approach to configure the PC, or to do a remote session on it, from the same console. For now, we're just doing the full anti-malware and personal firewall in this way, but we do see the opportunity of doing more PC lifecycle management functionality within it.

Gardner: That brings us back to the economy. Phil, I've heard grousing from CEOs, administrators, and just about anybody in the IT department for years about how expensive it is, from the total cost perspective, to maintain a rich PC-client experience. Nowadays, of course, we don't have a luxury of, "It would be nice to cut cost." We really have to cut cost. Do you see a significant move towards more cloud-based services as an economic imperative?

Increasing the SaaS model

Wainewright: Oh yes, and one of the interesting phenomena has been that things like help desk, security, and remote support have increasingly been delivered using the SaaS model, even in large enterprises.

If you are the chief security officer for a large enterprise that's very dependent on the Web for elements of its operations, then you've got a tremendously complex task. There's an increasing recognition that it's much better to access pools of expertise to get those jobs done than to try to become a jack of all trades and inevitably fall behind the state of the art in the technology.

More and more, in large enterprises, but also in smaller businesses, we're seeing people turning to outside providers for expertise and remote management, because that's the most cost effective way to get at the most up-to-date and the most proficient knowledge and capabilities that are out there. So yes, we're going to see more and more of that, spot on.

Gardner: I understand how this is a benefit to end-users -- a simple download and you're protected. I understand how this makes sense for SMBs who are trying to manage PCs across distributed environment, but without perhaps having an IT department or a security expertise on staff. But, I'm not quite sure I understand how this relates now to an additional business model benefit to a reseller or a value-added provider of some kind, perhaps a managed service provider.

Josu, help me understand a little bit better how this technology shift and some of these new products benefit the channel.

Franco: In the current economic times, more and more resellers are looking to add more value to what they are offering. For them, margins, if they're selling hardware or software licenses, are getting tougher to get and are being reduced. So, the way for them to really see the opportunity in this is thinking that they can now offer remote management services without having to invest in infrastructure or in any other type of license that they may need.

It's really all based on the SaaS concept. They can now say to the customers, "Okay, from now on, you'll forget about having to install all this management infrastructure in-house. I'm going to remotely manage all the endpoint security for you. I'm going to give you this service-level agreement (SLA), whereby I'm going to check the status of your network two or three times a week, or once a day, and if there is any problem, I can configure it remotely, or I can just spot where the problems are and fix them remotely."

This means that for the end user it's going to reduce the operating cost, and for the reseller it's going to increase the margins for the services they're offering. We believe that there is a clear alignment among the interests of end users and partners, and, most importantly, also from our side with the partners. We don't want to replace the channel here. What we want is to become the platform of choice for these resellers to provide these value-added services.

Gardner: Does Panda then lurk behind the scenes, as the picks and shovels for the solution? Do you allow them to brand around it? Are you an OEM player? How does that work?

Franco: We can certainly play with a certain level of branding. We've been doing so with some large sales that we've made, for example, here in Spain. But, most of them want to start by touching and kicking the tires to see if it works. They don't need the re-branding in the first instance, but yes, we've seen some large providers who do want some customization of the interface for their logos, and that's certainly a possibility.

Gardner: We've also seen more diversity of endpoints in the market. We've seen, for cost and convenience reasons, a move toward netbooks. Smartphones have certainly been a fast-growing part of the mix, despite the tough economy. This model of combining the best of SaaS, the best of cloud, and a small agent coordinating and managing them strikes me as something that will move beyond the PC into a host of different devices. Am I wrong on that, Phil?

Attacking the smartphones

Wainewright: No, you're absolutely right. One of the scary things is that many of us are carrying around smartphones now. It's only a matter of time before these very capable, intelligent platforms also become vulnerable to the kind of attacks that we've seen on PCs.

On top of that, there is a great deal more support required to make sure that users get the best out of those devices. Therefore, we're going to see much more of this kind of remote support being provided.

For example, the expertise to support half a dozen different types of mobile devices within an organization is something that the typical small business can't really keep up with. If they're able to access a third-party provider that has the infrastructure and the experts on how to do that, then it becomes a manageable issue again. So, yes, we're going to see a lot more of this.

Ultimately, it's going to give us a lot more freedom just to be able to get on with our jobs, without having to worry about understanding how the device works or, even worse, working out how to fix it when something goes wrong. Hopefully, there will be far fewer instances of that downtime.

Gardner: Well, let's hope that we nip in the bud this malware on multiple devices in the cloud, before it ever gets to the device, and that removes the whole incentive or rationale for trying to create these problems in the first place. So, maybe moving more into the cloud actually starts stanching the problem from its root and core.

Let's move forward now to some of the proof points. We've talked about this in theory. It certainly makes sense to me from an architectural and vision perspective, but what does it mean in dollars and cents? Josu, do you have any examples of organizations that have started down this path -- SMBs perhaps, and/or resellers? How has this affected their bottom line?

Franco: Yes, we do have very good examples of people who have moved along this path. Our largest installation with the Managed Office Protection product is over 23,000 seats in Europe. It's a very large school or education institution, and they're managing their entire network with just a very few people. This has considerably reduced their operating cost. They don't need to travel that much to see what's happening with their systems.

We also have many other examples of our resellers that are actually using this product, not only to manage business spaces, but also managing even consumer spaces. I think that we're going to see a convergence between the world of the consumer and the world of what we call a business.

Moving to the consumer space

Some analyst friends are talking a lot about the consumerization of IT. I think that we'll also see that consumers are going to start using technologies that perhaps we thought belonged in the business space. I'm talking, for example, about the ability for a reseller to centrally manage the PCs of consumers. This is an interesting business model, and we have some examples of this emerging trend. In the US, we have some resellers who are managing thousands of computers from their basement.

So, even though our intention was to position this product for SMBs, we do see that there are some verticalized niches in the market into which this model fits really well. Talking about highly distributed environments, what's more highly distributed than a network of consumers, everyone in their own home, right? So, I think this is definitely something that we're going to see happening more and more in the future.

Gardner: Without going down this very interesting track too much, we're starting to see some CIOs cotton to the notion of letting people pick their end device, but then accessing services back in the enterprise, and with some modest governance and security. It sounds as if a service like this might fill that role.

Then, in addition to the choice of the consumer or end user on device, it seems to me that we're also in a position now for the providers of the bit pipes -- the Internet, telephony, communications, and collaboration -- to start offering the whole package: a PC with security, remediation, and protection, and you pay a flat fee per month. Do you think these two things are around the corner, Phil, or maybe three or four years out?

Wainewright: To the previous point, people often think of the consumer Web as completely separate from the business Web. In fact, the reality today is that individual users at home are just as likely to be doing business or work things on their home PCs as they are to be doing home things, or even running side businesses, on their work PCs.

If someone is auctioning off their collection of plastic toys on eBay, then are they an individual consumer or are they a business? The lines are blurring. I think what you need to look at is: what is the opportunity cost? If it's going to cost me time that I can't afford, or if it's going to mean that I'm not going to be able to earn money that I could otherwise be earning, then it's going to be worth my while to pay that monthly subscription.

One of the key things that people forget, when they're comparing the cost of a SaaS solution or a Web-provided solution to a conventional installed piece of packaged software, is that they never look at the resource and time that the user actually spends to get things set up with the packaged software, to fix things when they go wrong, or to do upgrades.

The value that's being created and is being shared out by the vendors and the providers in the SaaS model is that time saving and opportunity cost saving.

Gardner: Now, we have to assume that the security is going to be good, because if it doesn't protect, then that's going to become quite evident. But what we're also talking about, now that I understand it better, Josu, is really we're focusing on simplicity and convenience vis-à-vis these devices, vis-à-vis security, but also in the larger context of the level of comfort, of trust that the device will work, that the network will be supported, and that I'm not going to run into trouble. Is that what we're really talking about here as a value proposition -- simplicity and convenience?

Franco: As you said, it needs to protect. It needs to be very effective at a time when we're seeing really huge amounts of malware coming out every day. So, that's a precondition. It needs to protect.

But security is going to be there protecting users, and many users see it as something they have to live with. It's not truly something they see as a positive application; it's something that sometimes annoys people. Well, let's make it as simple, as transparent, as fast, and as imperceptible as possible. That's what this is all about.

Gardner: Very good. We've been learning a lot today about PC security and how it can be delivered as a service, in conjunction with cloud-based central management and processing. This architectural approach is now quite prominent for security, and perhaps will become more prominent across other aspects of client device support, bringing convenience, lower cost, and higher trust. So, a lot of goodness. I certainly hope it works out that way.

Cost and protection benefits, along with productivity benefits and, as a result, less downtime, are a good thing. We've looked at it across the spectrum of end users, businesses, resellers, and managed service providers. Helping us understand this, we've been joined by our panel. I want to thank them. Phil Wainewright, independent analyst, director of Procullux Ventures, and a ZDNet SaaS blogger. I appreciate your time, Phil.

Wainewright: It's been great to be with you today, Dana.

Gardner: We've also heard from Josu Franco, director of the Business Customer Unit at Panda Security. Thank you Josu.

Franco: It's been my pleasure, thanks.

Gardner: I also want to thank the sponsor of this discussion, Panda Security, for underwriting its production.

This is Dana Gardner, principal analyst at Interarbor Solutions, thanks for listening, and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com.

Download the transcript. Learn more. Sponsor: Panda Security.

Transcript of a BriefingsDirect podcast on security as a service and cloud-based anti-virus protection and business models. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Tuesday, July 14, 2009

Rethinking Virtualization: Why Enterprises Need a Sustainable Virtualization Strategy Over Hodge-Podge Approaches

Transcript of a BriefingsDirect podcast on the key elements of successful and cost-effective virtualization that spans general implementations.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Download a pdf of this transcript.

Attend a virtual web event from HP on July 28-30, "Technology You Need for Today's Economy." Register for the free event.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on rethinking virtualization. We’ll look at a series of three important considerations when moving to enterprise virtualization adoption.

First, we'll investigate the ability to manage and control how interconnections impact virtualization. Interconnections play a large role in allowing physical servers to support multiple virtual servers, which themselves need multiple network connections. The connections themselves can be virtualized, and we are going to learn how HP Virtual Connect is being used to solve these problems.

Second, we're going to examine the role and importance of configuration management databases (CMDBs) in deploying virtualized servers in production. When we scale virtualized instances of servers, we need to think about centralized configuration; it really helps in bringing management to this crucial part of preventing server sprawl and the unwieldy complexity that can often impact the cost of virtualization projects.
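For a sense of why a CMDB matters here, consider this minimal sketch: each virtual machine is a configuration item whose runs-on relationship to a physical host is recorded, so sprawl and impact questions become simple queries. The schema is illustrative, not any product's.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ConfigurationItem:
    """One CMDB record; the schema here is illustrative only."""
    ci_id: str
    ci_type: str                   # "physical_server" or "virtual_machine"
    runs_on: Optional[str] = None  # VM -> host relationship
    attrs: dict = field(default_factory=dict)

cmdb = {}

def register(ci):
    cmdb[ci.ci_id] = ci

register(ConfigurationItem("host-01", "physical_server"))
register(ConfigurationItem("vm-101", "virtual_machine", runs_on="host-01"))
register(ConfigurationItem("vm-102", "virtual_machine", runs_on="host-01"))

# Sprawl and impact questions become simple queries:
affected = [c.ci_id for c in cmdb.values() if c.runs_on == "host-01"]
print(affected)  # ['vm-101', 'vm-102'] -- what breaks if host-01 goes down
```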

Last, we're going to dig into how outsourcing in a variety of different forms, configurations, and values could help organizations get the most bang for their virtualization buck. That is to say, how they think about virtualization not only in terms of placement, but also in where that data center and even hybrid data centers will be residing and managed.

Here to help us to dig into these essential ingredients of successful and cost-effective virtualization initiatives, are three executives from Hewlett-Packard (HP).

We're going to be speaking with Michael Kendall, worldwide Virtual Connect marketing lead. We're also going to be joined by Shay Mowlem, strategic marketing lead for HP Software and Solutions. And last, we're going to discuss outsourcing with Ryan Reed, a product manager for EDS Server Management Services.

First, I want to talk a little bit about how organizations are moving to virtualization. We certainly have seen a lot of the "ready, set, go," but when organizations start looking at the complexity, when they think about scale, and when they think about the need to do virtualization for the economic payoff -- rather than simply moving one shell around from physical to virtual, or from on-premises to off-premises -- the complexity of the issue starts to sink in.

Let me take our first question to Shay Mowlem. Shay, what is it that we're seeing in terms of how companies can make sure that they get a pay-off economically from this, and that it doesn’t become complexity-for-complexity's sake?

Shay Mowlem: The allure of virtualization is quite great. Certainly, many companies today have recognized that consolidating their infrastructure through virtualization can reduce power consumption and space utilization, and can really maximize the value of the infrastructure that they’ve already purchased.

Just about everybody has jumped on the virtualization bandwagon, and many companies have seen tremendous gains in their development and lab environments, and in managing what I would consider to be non-mission-critical production systems. But, as companies have tried to apply virtualization to their Tier 2 and Tier 1 mission-critical systems, they're discovering a whole new set of issues that, without effective management, run counter to the cost benefits.

The fact that virtualized infrastructure has more interdependencies means a bigger risk profile for the services being supported. The real challenge for those companies is putting in place the right management platform, in order to truly realize those gains in production environments.

Gardner: So, when we talk about rethinking virtualization, I suppose that it really means planning and anticipating how this is going to impact the organization and how they can scale this out?

Mowlem: Yeah. That’s exactly right.

Looking at connections

Gardner: First, we're going to look at the connections, some of the details in making physical servers become virtual servers, and how that works across the network. Mike Kendall is here to tell us about HP’s Virtual Connect technology.

It’s designed to help bridge the gap between the physical world and virtual world, when it comes to the actual nitty-gritty of making networks behave in conjunction with increased numbers of virtualized server instances. This is important when we start rethinking virtualization in terms of actually getting an economic payback from the investments and the expectations that enterprises are now supporting around virtualized activities.

So, let me take it to you, Mike. When we go to virtualized infrastructures from traditional physical ones, what's different about migrating when it comes to these network connections?

Michael Kendall: There are a couple of things. When you consolidate a lot of different application instances that normally run on multiple servers -- each with a certain number of I/O connections for data and storage -- and you put them all on one server, that does consolidate the number of servers you have.

Interestingly, people have found that as you do that, it tends to expand the number of network interface controllers (NICs) you need, the number of connections you need, the number of cables you need, and the number of upstream switch ports you need to accommodate all that extra workload running on that server.

So, even though you can set up a new virtual machine or migrate virtual machines in a matter of minutes, it isn't as easy on the connection side. Either you have to add additional network and storage capacity -- additional host bus adapters (HBAs) or additional NICs -- or, when you move a machine, you have to tear down and re-establish those network connections. Doing that in a harmonious way is more challenging in a virtual machine environment.

Gardner: So, it’s not quite as easy as simply managing the hypervisor. We have to start thinking about managing the network. Perhaps you could tell us more about how the Virtual Connect product itself does that.

Basic rethinking


Kendall: Absolutely. Virtual Connect is a great example of how HP helps you achieve the full potential of setting up virtual machines on a server and consolidating all of those workloads.

We did some basic rethinking around how to remove some of these interconnect bottlenecks. HP Virtual Connect can actually virtualize the physical connections between the server, the data network, and the storage network. Virtualizing these connections allows IT managers to set up, move, replace, or upgrade blade servers and the workloads on them, without having to involve the network or storage folks, and without impacting the network or storage topologies.

Rather than taking hours, days, or even weeks to get a move set up -- whether setting up, adding to, or moving virtual machines or physical machines -- we're able to take that down literally to minutes. The result is that most deployments or moves can be accomplished a whole lot faster.
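To make the idea concrete, here is a minimal sketch -- in Python, and purely illustrative rather than HP's actual Virtual Connect interface -- of treating a server's connection profile as data that moves between enclosure bays, so its LAN and SAN identities follow the workload instead of the hardware. All names and addresses here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ConnectionProfile:
    """A server's network and storage identity, decoupled from hardware.

    The MAC/WWN values are hypothetical placeholders. In a Virtual
    Connect-style design they come from a managed pool, so the upstream
    LAN and SAN never see a change when the profile moves.
    """
    name: str
    macs: list       # virtualized NIC addresses
    wwns: list       # virtualized HBA addresses
    vlan_ids: list   # data-network assignments

class Enclosure:
    """Tracks which connection profile is active in which server bay."""
    def __init__(self):
        self.bays = {}   # bay number -> ConnectionProfile

    def assign(self, bay, profile):
        self.bays[bay] = profile

    def move(self, src_bay, dst_bay):
        # The profile -- and every LAN/SAN identity with it -- follows
        # the workload; no switch or storage rezoning is required.
        self.bays[dst_bay] = self.bays.pop(src_bay)

enc = Enclosure()
enc.assign(3, ConnectionProfile("erp-db", ["02:00:00:00:00:01"],
                                ["50:06:0b:00:00:c2:62:00"], [101, 102]))
enc.move(3, 7)            # e.g., replace a failed blade in minutes
print(enc.bays[7].name)   # -> erp-db
```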

Another part of this is our new Flex-10 technology. That takes a 10-gigabit Ethernet connection and allocates that across four NIC connections. This eliminates the need for additional physical NICs in the forms of mezzanine cards or stand-up cards, additional cables, or additional switches, when setting up all of the extra connections required for virtual machines.

The average hypervisor is looking for anywhere from three to six NIC connections, and approximately two storage network connections.

If you add that all up, that can be up to a total of six to eight NICs, along with the associated cables and switch ports. The same thing is true with the two storage network connections as well.

With Flex-10, on a standard two-port NIC, each port can present four NICs, for a total of eight, without adding any stand-up cards, additional switches, or the cables that go with them. As a result, from a cost standpoint, you can save up to 66 percent in network equipment costs over competing technology. So, with Virtual Connect you can wire everything once and then add, replace, or recover servers a whole lot faster.
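The arithmetic here is easy to check. The following back-of-the-envelope sketch uses the connection counts quoted above (up to eight NICs plus two storage connections per host, and four FlexNICs per 10 GbE port); the per-part prices are invented for illustration, so the computed percentage will differ from the quoted 66 percent, which depends on actual list prices.

```python
# Back-of-the-envelope comparison of connection hardware per hypervisor
# host. Counts come from the discussion above; prices are hypothetical.

NICS_NEEDED = 8          # "up to a total of six to eight NICs"
SAN_CONNECTIONS = 2      # typical storage network connections

# Traditional build-out: one physical port per connection.
traditional_parts = {
    "nic_ports": NICS_NEEDED,
    "hba_ports": SAN_CONNECTIONS,
    "cables": NICS_NEEDED + SAN_CONNECTIONS,
    "switch_ports": NICS_NEEDED + SAN_CONNECTIONS,
}

# Flex-10: each 10 GbE port presents four FlexNICs, so a standard
# two-port adapter yields eight NICs with no extra mezzanine cards.
FLEXNICS_PER_PORT = 4
flex10_physical_ports = -(-NICS_NEEDED // FLEXNICS_PER_PORT)  # ceil division
flex10_parts = {
    "nic_ports": flex10_physical_ports,
    "hba_ports": SAN_CONNECTIONS,
    "cables": flex10_physical_ports + SAN_CONNECTIONS,
    "switch_ports": flex10_physical_ports + SAN_CONNECTIONS,
}

unit_cost = {"nic_ports": 150, "hba_ports": 400,   # hypothetical prices
             "cables": 30, "switch_ports": 250}

def total(parts):
    return sum(count * unit_cost[part] for part, count in parts.items())

saving = 1 - total(flex10_parts) / total(traditional_parts)
print(f"Hypothetical network-equipment saving: {saving:.0%}")
```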

Gardner: And, of course, not doing this in advance would erode your ability to save when it comes to these more utilized server instances.

Kendall: That's also correct. If you can put this technology in place ahead of time, then you save not only the purchase cost of all this additional hardware, but also the operational complexity that goes along with having a lot of extra equipment to set up, manage, and run.

Gardner: One of the things that folks like about virtualization is an automated approach to firing off instances of servers to support an application -- for example, a database. Does that automated elasticity of generating additional server instances follow through with the Virtual Connect technology, so that it's, in a sense, seamless?

Seamless technology

Kendall: I'm glad you added in the Virtual Connect part, because if you had said "using standard switch technology," the answer to that would be no.

With standardized switch technology and standardized NIC and storage area network (SAN) HBA technology, you generally have to set up all these connections individually. Then, you have to manage them individually. Then, if you set up, add to, or migrate virtual machine instances from the virtual machine (VM) side of it, you can automate a lot of that through a hypervisor manager, but that does not extend to the attributes of the actual server connection, or the virtual machine connection.

Virtual Connect, because it virtualizes those connections and the way you manage them, makes it very straightforward to migrate server connections and their profiles -- not only with the movement of virtual machines, but also with the movement of whole hypervisors across physical machines. It spans the physical and the virtual, and handles the automation and migration of all those connection profiles.

Gardner: So, we're gaining some speed here. We’re gaining mobility. We're able to maintain our cost efficiencies from the virtualization, because of our better management of these network issues, but don’t such technologies as soft switches pretty much accomplish the same thing?

Kendall: Soft switches can be an important part of the infrastructure you put together around virtual machines, but it's really important how you use them. If you use soft switches combined with upstream switches to do all of this switching, you can add latency to an already complex network. If you use Virtual Connect, which is based on industry-standard protocols, together with a soft switch operating in a simple pass-through mode, then you don't have the latency problem, and you maintain the flexibility of Virtual Connect.

The other thing you need to be careful of is that some of the new soft switches out there use proprietary protocol extensions to track the movement of the virtual machine, along with its associated connection profile. These proprietary extensions sometimes require upstream products that can accept them -- new hardware, switches, and management tools. That can add a lot to the cost of upgrading an infrastructure.

Gardner: Thank you, Michael. We're now going to look at another important issue around virtualization: configuration and management. This has become quite an issue in terms of complexity. Managing physical servers, when we get into large numbers, is complex in itself. When we add virtualization and dynamic provisioning, and look to recover costs from energy and utilization, we add yet another dimension to the complexity.

We're going back to Shay Mowlem to talk a little bit about data collection, management, configuration, and automation. Visibility into what's going on across virtualized instances, data centers, and the broader infrastructure becomes critical. How are companies gaining better visibility across the virtualized data center, compared to what they were doing with purely physical ones?

Mowlem: IT infrastructures really are becoming more opaque. With the addition of virtual machines to data centers that are already leveraging other virtualization technologies -- storage area networks, virtual LANs, and so on -- knowing where a problem exists becomes much harder, both to identify and to fix. That has an impact on management cost and service quality.

Proof for the business

For IT to realize the large-scale cost benefits of virtualization in production environments, they need to prove to the business that service performance and quality are not going to be lost as virtualized servers and storage are incorporated to support the systems. We've seen that the ideal approach should include a central vantage point from which to detect, isolate, and prevent service problems across all infrastructure elements -- heterogeneous servers, spanning physical and virtual, network, storage, and all the subcomponents of a service.

It also needs to include the ability to monitor the health of the infrastructure not only at the component level, but also from the perspective of the business service. In other words, be able to monitor and understand all of the infrastructure elements and how they relate to one another -- servers, networked storage -- and then also be able to monitor the health and performance of the service from the perspective of the business user.

It's sort of a bottom-up and top-down view if you will, and this is an area that HP Software has invested in very heavily. We provide tools today that offer native discovery and dependency mapping of all infrastructure, physical and virtual, and then store that information in our central universal configuration management database (UCMDB), where we then track the make-up of a business service, all of the infrastructure that supports that service, the interdependencies that exists between the infrastructure elements, and then manage that and monitor that on an ongoing basis.
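As a rough illustration of what dependency mapping buys you -- a toy model, not the UCMDB schema -- consider a graph of configuration items, where isolating a failure means walking the graph upward to find every business service that rests on the failed component. All item names here are invented.

```python
# Toy model of a configuration management database as a dependency
# graph: each configuration item (CI) lists the CIs it depends on.

cmdb = {
    "order-entry-service": ["app-vm-1", "app-vm-2", "oracle-db"],
    "app-vm-1": ["esx-host-a", "vlan-101"],
    "app-vm-2": ["esx-host-b", "vlan-101"],
    "oracle-db": ["esx-host-b", "san-lun-7"],
    "esx-host-a": [], "esx-host-b": [],
    "vlan-101": [], "san-lun-7": [],
}

def impacted_items(failed_ci):
    """Walk the graph upward: which CIs rest on this component?"""
    hit = set()
    for ci, deps in cmdb.items():
        if failed_ci in deps:
            hit.add(ci)
            hit |= impacted_items(ci)   # transitive dependents
    return hit

# A physical host fails: which services and VMs should be flagged?
print(impacted_items("esx-host-b"))
# -> {'app-vm-2', 'oracle-db', 'order-entry-service'}
```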

We also track what has changed over time, what was implemented, and who made those changes. Then, we can leverage that information to answer important questions about how a particular service has been behaving over time.

We can retrieve core metrics about performance and behavior on all layers of the virtualization stack, for example. Then, we can use this to provide very accurate and fast problem detection and isolation and deep application diagnostics.

This can be quite profound. Through a return on investment (ROI) model we worked on, based on data from IDC, we found that effective use of HP's Discovery and Dependency Mapping technology, with the results stored in a central UCMDB, can on average help reduce the mean time to repair outages by 76 percent -- a massive benefit from effective consolidation of this important data.
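Taken at face value, that 76 percent figure translates directly into hours and dollars. Here is a quick sanity check, where only the 76 percent reduction comes from the model cited above and every baseline number is a hypothetical assumption:

```python
# What a 76 percent reduction in mean time to repair (MTTR) means in
# practice. Baseline numbers below are hypothetical assumptions; only
# the 76 percent reduction figure comes from the ROI model cited above.

baseline_mttr_hours = 4.0    # assumed average outage length
outages_per_year = 24        # assumed incident count
cost_per_hour = 50_000       # assumed cost of downtime (USD)

reduced_mttr = baseline_mttr_hours * (1 - 0.76)
hours_saved = (baseline_mttr_hours - reduced_mttr) * outages_per_year

print(f"MTTR drops from {baseline_mttr_hours:.1f} h to {reduced_mttr:.2f} h")
print(f"~{hours_saved:.0f} downtime hours avoided per year, "
      f"worth about ${hours_saved * cost_per_hour:,.0f}")
```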

Gardner: Maybe I made a mistake that other people commonly make, which is to think of managing virtualized instances as separate and different. But, I suppose virtualization nowadays is becoming like any other system across the IT infrastructure.

Mowlem: Absolutely. It’s part of a mix of tools and capabilities that IT has that, in production environments, are ultimately there to support the business. Having an understanding of and being able to monitor all these systems, understanding their interdependencies, and managing them in an integrated way with the understanding of that business outcome, is a key part of how companies will be able to truly recognize the value that virtualization has to offer.

Gardner: Okay, I think we understand the problems around this management issue -- trying to scale it and bring it in line with how the entire data center is managed. What about the solutions? What, in particular, should organizations consider when approaching this total configuration issue?

Business service management

Mowlem: We offer a host of solutions that help companies manage virtualized environments end to end. When we look at monitoring -- and essentially a configuration database tracks all of the core interdependencies of the infrastructure and its configuration settings over time -- we talk about the business service management portfolio of HP Software. This includes the Discovery and Dependency Mapping product I talked about earlier. UCMDB is the central repository, and a number of tools allow our customers to monitor their infrastructure at the server level and the network level, but also at the service level, to ensure the ongoing health and performance of their environment.

Gardner: You mentioned those ROI figures. Typically, how do organizations start down the virtualization path, and how can they then begin to recover more cost and cut their total cost by adopting some of these solutions?

Mowlem: We offer a very broad portfolio of solutions today that manage many different aspects of virtualization, from testing to ensuring that the performance of a virtualized environment in fact meets the business service level agreements (SLAs). We talked about monitoring already. We have automation as part of our portfolio to achieve efficiency in provisioning and change execution. We have a solution to manage assets, so that software licenses are tracked carefully and properly.

We also have a market-leading solution in backup and recovery with our Data Protector offering, to help customers scale their backup and recovery capabilities across their virtualized servers. What we've found in the course of our discussions is that many customers recognize that all of these are critical and important areas for effectively incorporating virtualization into their production environments.

But, generally there are one or two very significant pain areas. It might be the inability to monitor all of their servers -- physical and virtual -- through one single pane of glass, or it may be related to compliance enforcement, because there are so many different elements out there. So, the answer isn't always the same. We find that companies choose to start down the path of effective management through some of these initial product areas, and then expand from there.

Gardner: Well, I suppose it's never too late to begin. If you're even partially into a virtualization initiative -- or maybe deep in and starting to have problems -- there are ways to bring in management features at any point in that maturity.

Mowlem: We definitely support a very modular offering that allows people to focus on where they’re feeling the biggest pain first, and then expand from there as it makes sense to them.

Gardner: Let's now move over to Ryan Reed at EDS. As organizations get deeper into virtualization, and as they consider on a larger scale their plans for modernization, consolidation, and the overall cost efficiency of their resources, how do they approach this problem of placement? It seems that moving toward virtualization almost forces you to think about your data center from a more holistic, long-term, and strategic perspective.

Raising questions

Ryan Reed: Right, Dana. For a lot of companies, considering large-scale virtualization and modernization projects often raises questions that help them devise a plan and a strategy for how they're going to create a virtual infrastructure and where that infrastructure is going to be located.

Some of the questions I see are around the physical data center itself. Is the data center meeting the needs of the business? Is it designed and built for resiliency, and does it provide the greatest value to the business services it supports?

You'll also find that, a lot of the time, that's not the case for data centers built 10 or 15 years ago. Business services today demand higher levels of uptime and availability. If those data centers were to fail, due to a power outage or some other source of failure, they would no longer be able to meet the uptime requirements of those business services. So, that's one of the first questions a virtual infrastructure program raises for the program manager.

Another question that often comes up is around the storage and network infrastructures. Where are they located physically? Are they in the right place? Are they available at the right times? A lot of organizations may be required by legislative or regulatory requirements to keep their data within a particular state, country, or region. When people are planning virtual server infrastructures, that becomes a pretty prominent discussion.

Another one is around the internal skill sets of the enterprise. Does the organization have the skills in-house to do large-scale virtualization and data-center modernization projects? Oftentimes, they don't. And if they don't, what is their remedy? How are they going to close that skill gap?

Lastly, a lot of companies doing virtualization projects start to question whether all of the activities around managing the infrastructure are actually core to their business. If they're not, then maybe this is something they don't have to do themselves anymore.

Taking all of that into consideration helps drive a conversation around planning and creating the right type of process. Oftentimes, it leads to a discussion around outsourcing. EDS, which is an HP company, provides organizations and enterprises with full IT management and IT infrastructure management. That includes everything from implementation to ongoing management of virtual, as well as non-virtual, infrastructure environments.

The client data center -- or on-premises, as you called it, Dana -- is an option for a lot of enterprises that have already invested heavily in their current data-center facility and infrastructure. They don't necessarily want to move it to an outsourcer-supplied data center. So, on-premises is a business model that's available and becoming common for some of the larger virtualization projects.

The traditional outsourcing model is one where enterprises realize that the data center itself is no longer a strategic asset to the business. So, they move the infrastructure to an outsourcer's data center, where the services provider -- the outsourcing company -- can deliver the best services for virtual infrastructures, starting with the design and planning phase.

Making the most sense

This makes the most sense for these types of organizations, because you're going to be doing a migration from physical to virtual anyway. So, you might as well take advantage of the skills available from the outsourcing services provider, move the infrastructure to their data center, and have them apply best-in-breed practices and technology to manage it.

Then, you also mentioned what would be considered a hybrid model, where virtual and non-virtual infrastructure can be managed from either the client's own data center or the services provider's data center. There are various models to consider. A lot of the questions that go into planning this type of virtual infrastructure also lead into a conversation about where an outsourcer can add the most value.

Gardner: Is there anything about virtualizing your data center, and more and more servers, that makes outsourcing easier, or an option that some people hadn't considered in the past and should?

Reed: Sure. Outsourcers nowadays are very skilled at providing infrastructure services for virtual server environments. That includes things like profiling, analysis and planning, mapping source servers to targets, and building a business case for how it's going to impact the business in terms of ROI and total cost of ownership (TCO).
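One of those services -- mapping source servers onto virtualization targets -- is at heart a packing problem. This simplified sketch uses invented workload sizes and a plain first-fit-decreasing heuristic; real capacity-planning tools also model RAM, I/O, and peak demand, but the shape of the exercise is the same.

```python
# Simplified consolidation planning: fit source workloads (sized here
# by average CPU demand, in GHz) onto virtualization hosts using a
# first-fit-decreasing heuristic. All numbers are invented.

workloads = {"mail": 1.2, "web-1": 0.8, "web-2": 0.7,
             "erp": 2.5, "file": 0.4, "build": 1.6}
HOST_CAPACITY_GHZ = 4.0   # usable capacity per target host

hosts = []                # each host is a list of (name, demand)
for name, demand in sorted(workloads.items(),
                           key=lambda kv: kv[1], reverse=True):
    for host in hosts:
        if sum(d for _, d in host) + demand <= HOST_CAPACITY_GHZ:
            host.append((name, demand))   # fits on an existing host
            break
    else:
        hosts.append([(name, demand)])    # open a new host

print(f"{len(workloads)} workloads fit on {len(hosts)} hosts:")
for i, host in enumerate(hosts, 1):
    print(f"  host {i}: {[n for n, _ in host]}")
```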

Doing the actual implementation; the ongoing management of the operating systems, both virtual and non-virtual, for guests and hosts; patching systems; monitoring to make sure the systems are up and running; responding to and escalating events; and then doing things like backup and restore activities -- these are really core to an outsourcing services provider's business. That's what they do.

We don't expect our clients to have the same level of expertise that EDS does. We've been doing this for 45 years, and it's really the critical piece of what we do. So, there are many things to consider when choosing an outsourcing provider, if that's the way you go. Benefits can range dramatically, from reducing your TCO, to increasing levels of availability within the infrastructure, to being able to use the services provider's global delivery centers around the world.

Choose the right partner, and they can grow with you. As your business grows and as you expand your market presence, choosing the services provider that has the capability and capacity to deliver in the areas that you want to grow makes the most sense.

Additionally, you can take advantage of the low-cost delivery centers that the services provider has built up over the years -- service centers in low-cost regions. EDS considers this the best strategy. Having resources available in low-cost countries to provide the greatest value to clients is an important part of selecting a good services provider.

Gardner: So, for those organizations that are looking at these various options for sourcing, how do they get started? What’s a good way to begin that cost benefit analysis?

Reed: Well, there's information available through the eds.com website. Go there and search on "virtualization," and the first search result that comes back has lots of information about what to expect from an engagement, as well as examples of virtualization projects at organizations facing the same issues a lot of industries are facing out there.

You can see comparisons of like-for-like scenarios to determine whether an engagement would make sense, based on the case studies and success stories that are available there as well. There are also industry tools available from our partner organizations. HP has tools available. VMware has tools available to help our clients understand where savings can come from. And, of course, EDS is also available to provide those types of services for our clients, too.

Gardner: Okay. We've been looking at three important angles to consider when moving to virtualization: being aware, at a detailed level, of how network interfaces and interconnects work, and moving toward a more virtualized approach to interconnects. We also looked at the management issues -- configuration not only in terms of how virtualized servers stand alone, but how they need to be managed in total, as part of the larger IT mix. And we looked at how to weigh various sourcing options in terms of cost, skills, availability of resources, energy costs, and a general track record of being competent and proven with virtualization.

I want to thank our three guests today. We’ve been joined by Michael Kendall, worldwide Virtual Connect marketing lead at HP. We've been joined by Shay Mowlem, strategic marketing lead for HP Software and Solutions, and Ryan Reed, product manager for EDS Server Management Services.

This is Dana Gardner, principal analyst at Interarbor Solutions. I also want to thank the sponsor of today's podcast discussion, Hewlett-Packard, for underwriting its production. Thanks for listening, and come back next time.

Attend a virtual web event from HP on July 28-30, "Technology You Need for Today's Economy." Register for the free event.

Download a pdf of this transcript.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on the key elements of successful and cost-effective virtualization implementations. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.