Wednesday, January 07, 2009

Webinar: Analysts Plumb Desktop as a Service as the Catalyst for Cloud Computing Value for Enterprises and Telcos

Transcript of a recent webinar, produced by Desktone, on the future of cloud-hosted PC desktops and their role in enterprises.

Listen to the webinar here.

Jeff Fisher: Hello and welcome, everyone. Thanks so much for attending this Desktone Webinar Series entitled, “Desktops as a Service, The Evolution of Corporate Computing.” I’m Jeff Fisher, senior director of strategic development at Desktone, and I will also be the host and moderator of the events in this series.

We are really excited to kick off this series of webinars with one focused on cloud-hosted desktops, and equally excited and privileged to have just a wonderful panel with us, starting with Rachel Chalmers from the 451 Group, Dana Gardner from Interarbor Solutions, and Robin Bloor from Hurwitz and Associates.

For those of you who don’t know, Rachel, Dana and Robin are really three of the top minds in this emerging cloud-hosted desktop space. It’s going to be great to see just what they have to say about the topic and we’ll talk to them just a little bit later on.

Before we do that, I want to spend a little bit of time talking about Desktone’s vision and definition of cloud-hosted desktops and, most importantly, about why we believe that virtual desktops, as opposed to virtual servers, are really going to kick-start adoption of cloud computing within the enterprise.

Desktone is a venture-backed software company. We’re based outside of Boston in a town called Chelmsford. We raised $17 million in a Series A round of funding in summer 2007. Highland Capital and Softbank Capital led that round. We also got an investment at that time from Citrix Systems.

We’re currently about 35 full-time employees and have 25 full-time outsourced software developers. The executive team has experience at leading desktop virtualization vendors such as Citrix, Microsoft, and Softricity, as well as experience running Fortune 500 IT organizations at Schwab and Staples.

We have a number of technology partners in the area of virtualization software, servers, storage and thin clients, and some key service provider partnerships with HP, IBM, Verizon and Softbank. What’s really important to note here is that Desktone actually goes to market through these service provider partners.

We don’t host virtual desktops ourselves; rather, the desktops as a service (DaaS) offering that we enable is provided through service provider partners. The only services that we host ourselves, or offer directly, are trial and pilot services.

We built a platform called the Desktone Virtual-D Platform. It’s the industry’s first virtual desktop hosting platform specifically designed to enable desktops to be delivered as an outsourced subscription service.

And what’s important to understand is that this platform is designed specifically for service providers to be able to offer desktop hosting in the same way that they offer Web hosting or e-mail hosting.

We architected the Virtual-D Platform from the ground-up with that mission in mind. It’s a solution for running virtualized, yet genuine, Windows client environments, whether XP or Vista, in a service provider cloud. We’ll talk more about how we define a service provider cloud in a bit.

It leverages a core virtual desktop infrastructure (VDI) architecture -- that is, server-hosted desktop virtual machines, which users access through PC remoting technologies such as Remote Desktop Protocol (RDP). The Virtual-D Platform enables cloud scale and multitenancy, which are two of the key things a service provider needs in order to be in this business.
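
To make the brokering idea concrete, here is a minimal, hypothetical Python sketch of what a connection broker in this kind of platform does: look up a user's hosted desktop VM within the right tenant, power it on if needed, and hand back an RDP endpoint for the access device. All names and fields are illustrative assumptions, not Desktone's actual API.

```python
# Hypothetical sketch of the brokering step in a VDI-style hosting platform.
# Names (DesktopBroker, DesktopVM, etc.) are illustrative, not a real product API.
from dataclasses import dataclass

@dataclass
class DesktopVM:
    vm_id: str
    tenant: str          # multitenancy: each enterprise customer is an isolated tenant
    assigned_user: str
    host_address: str    # where the VM runs in the provider cloud
    powered_on: bool

class DesktopBroker:
    def __init__(self, vm_inventory: list[DesktopVM]):
        self.inventory = vm_inventory

    def connect(self, tenant: str, user: str) -> str:
        """Return an RDP connection string for the user's hosted desktop."""
        for vm in self.inventory:
            # Tenant check first: one provider platform serves many isolated enterprises.
            if vm.tenant == tenant and vm.assigned_user == user:
                if not vm.powered_on:
                    vm.powered_on = True  # stand-in for a power-on call to the hypervisor
                return f"rdp://{vm.host_address}:3389"
        raise LookupError(f"No desktop assigned to {user!r} in tenant {tenant!r}")

broker = DesktopBroker([DesktopVM("vm-042", "acme-corp", "jsmith", "10.20.0.42", False)])
print(broker.connect("acme-corp", "jsmith"))  # rdp://10.20.0.42:3389
```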

Without getting into too much detail, it’s not really viable to take an enterprise VDI architecture or a product that’s been architected to deliver enterprise VDI and just port it over for service provider use.

It’s not viable for service providers to manage individual instances of VDI products. They really need a platform to manage this efficiently and effectively. The other key thing that the Virtual-D Platform does is separate the responsibilities of the user, the enterprise desktop administrator, and the service provider hosting operator. Each of these constituents has its own view into the system, through a Web-based interface of course, and can do what it needs to do without seeing functions and capabilities that are only required by the other groups.

So, it’s a very, very different technology approach, although the net result appears in certain ways to be similar to some of the VDI platforms that you probably know pretty well.

So, that’s Desktone in a nutshell.

Promise of the cloud

Let’s get to the promise of the cloud. Clearly, everyone is talking about cloud computing. You can’t look anywhere within IT and not hear about it. It’s amazing to see it surpassing even the frenzy around virtualization. In fact, most of the conversations people are having today are around virtualization and how it can take place in the cloud. Everyone wants to focus on all the benefits, including anytime/anywhere access and subscription economics.

However, like any other major trend that unfolds in IT, there are a number of challenges with the cloud. When people talk about cloud computing with respect to the enterprise, in most cases they’re talking about virtualizing server workloads and moving those workloads into a service provider cloud.

Clearly, that shift introduces a number of challenges. Most notable is the challenge of data security. Because server workloads are very tightly-coupled with their data tier, when you move the server or the server instance, you have to move the data. Most IT folks are not really comfortable with having their data reside in a service provider or external data center.

For that reason Desktone believes that it’s actually going to be virtual desktops, not servers, that are the better place to start and are going to be what jump starts this whole enterprise adoption of cloud computing.

The reason is pretty simple. Most fixed corporate desktop environments -- those are desktops that have a permanent home within your enterprise -- probably already have their application and user data abstracted away from the actual desktop. The data is not stored locally. It’s stored somewhere on the network, whether it’s security credentials within Active Directory (AD), home drives that store user data, or the back ends of client-server applications. All the back-end systems run within your data center.

When you shift that kind of environment to the cloud, although the desktop instance has moved, the data is still stored in the enterprise data center. Now, what you are left with are virtual desktops running in a highly secure virtual branch office of the enterprise. That’s how we like to refer to our service-provider partners’ data centers, as secure virtual branch offices of your enterprise.

In addition, if you virtualize and centralize physical PCs, which used to reside in remote branch offices that have limited or no physical security, you’ve actually increased the security of the environment and the reasons are clear. The PC can no longer walk off, because it doesn’t have a physical manifestation.

Because users are interacting with their virtual desktops through PC remoting technology such as RDP, you have control as an administrator over whether they can print to or through their access device, and whether they can get access to USB devices, such as USB key fobs that they put into that device.

So, you can control the downstream movement of data from the virtual desktop to the edge. You can also control the upstream movement of data from the edge to the virtual desktop and stop malware and viruses from being introduced through USB keys as well.
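
As a rough illustration of that edge-device control, here is a short Python sketch that generates a client .rdp connection file with printer, clipboard, drive, and USB redirection switched off. The settings follow the standard Microsoft .rdp file format, but an actual DaaS platform would enforce these policies centrally rather than in a per-client file, so treat this as a sketch of the policy, not of any vendor's implementation.

```python
# Sketch: emit a locked-down .rdp file so data cannot flow between the hosted
# virtual desktop and the edge device. Keys use the standard "name:type:value"
# .rdp syntax; central enforcement is assumed in a real deployment.
def locked_down_rdp_file(host: str) -> str:
    settings = {
        "full address:s": host,
        "redirectprinters:i": 0,       # no printing to or through the access device
        "redirectclipboard:i": 0,      # no copy/paste across the session boundary
        "drivestoredirect:s": "",      # no local drives mapped into the session
        "usbdevicestoredirect:s": "",  # no USB key fobs visible to the desktop
    }
    return "\n".join(f"{key}:{value}" for key, value in settings.items())

print(locked_down_rdp_file("desktop42.provider.example.com"))
```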

Those are some really nice benefits. We have an animation that illustrates this, showing a physical desktop PC, which accesses its data from the enterprise data center -- again, whether it’s AD, user data, or the actual apps themselves. Then the actual PC is virtualized and centralized into a service-provider cloud, at which point it’s accessed from an access device, whether that’s a thin client, a thick client (a PC that’s been repurposed to act like a thin client), a dumb terminal, or a laptop.

The key message here is that, although the instance of the desktop has moved, the data does not have to move along with it. Through private connectivity between the enterprise and the service provider, it’s possible to access the data from the same source.

Service-provider cloud

The other interesting thing is this notion of the service-provider cloud, which can actually traverse both the enterprise and the service provider data centers.

So, depending on the use case, service providers can either keep the virtual infrastructure, and the racks powering it, in their data center, or they can, in certain cases, put the physical infrastructure within the enterprise data center -- what we call the customer-premises equipment model. The most important thing is that it doesn’t break the model.

There is flexibility in the location of the actual hosting infrastructure. Yet, no matter where it resides, whether it’s in a service provider data center or an enterprise data center, the service provider still owns and operates it and the enterprise still pays for it as a subscription.

Let’s touch on just a couple of other benefits, and then we’ll jump into talking with our panel. The Desktone DaaS cloud vision preserves the rich Windows client experience in the cloud. This is true-blue Windows -- XP or Vista -- not another form of Windows computing, whether that’s a shared service in the form of Terminal Services or a browser-based solution like Web OSes or webtops.

That’s important, because most enterprises have the Windows apps that they need to run and they don’t want to have to re-architect and re-engineer the packages to run within a multi-user environment. They certainly don’t want a browser-based environment where they can’t run those apps.

In the same vein, this sustains the existing enterprise IT operating model, while introducing cloud-like properties so that IT desktop administrators can continue to use the same tools and processes and procedures to support the virtual desktops in the cloud as they have done and as they will continue to do with their physical desktops.

We talked about the notion of separating service provider enterprise responsibilities. It’s really important to be able to draw a line and say that the service provider is responsible for the hosting infrastructure, and yet the enterprise is still ultimately responsible for the virtual desktops themselves, the OS images, the patching, the licensing, the applications, application licensing, etc.

And then, finally, this notion we mentioned of combining both on- and off-premises hosting models is important. I think most of the leading analyst firms agree that enterprises are not going to be able to go from a fully enterprise data center model to a full cloud model in one step. There’s got to be some common ground in between and, again, the fact that this model supports both is important.

Now, let’s turn to our panel and see what they have to say. We’ll get started with Rachel Chalmers who is research director of infrastructure management at the 451 Group. She’s led the infrastructure software practice for the 451 Group since its debut in April 2000.

She’s pioneered coverage on services-oriented architecture (SOA), distributed application management, utility computing, and open-source software, and today she focuses on data center automation and server, desktop, and application virtualization. Rachel, thank you so much for being with us today.

Rachel Chalmers: You’re very welcome. It’s good to be here.

Fisher: Rachel, I actually credit you with being the first analyst to really put cloud-hosted desktop virtualization on the map and the reason is because you’ve written two really expansive and excellent reports on desktop virtualization. The first one you released in the summer of 2007. The follow-up one was released this past summer of ’08.

What I’ve really found interesting was that in the updated version you actually modified your desktop virtualization taxonomy to include cloud-hosted desktops as a first-class citizen, so to speak, alongside client-hosted desktop virtualization and server-hosted desktop virtualization. Of course that begs the question, what was so compelling about the opportunity that made you do that?

Taxonomy is key

Chalmers: Taxonomy is the key word. For those who aren’t familiar with The 451 Group, we focus very heavily on emerging and innovative technology. We do a ton of work with start-ups and when we work with public companies, it’s from the point of view of how change is going to affect their portfolio, where the gaps are, who they should buy. So we’re very much the 18th century naturalists of the analyst industry. We’re sailing around the Galapagos Islands and noting intriguing differences between finches.

I know we described cloud-hosted desktop virtualization as one of these very constructive differences between finches. When I sat down and tried to get my arms around desktop virtualization, it was just at the tail end of 2007. Just as it’s illegal for a vendor to issue a press release now without describing their product as a cloud-enablement product, in 2007 it was illegal to issue a press release without describing a product as virtualization of some kind.

I was tracking conservatively 40 to 50 companies that were doing what they described as desktop virtualization and they were all doing more or less completely different things. So, the first job as a taxonomist is to sit down and try and figure out some of the broad differences between companies that claim to be doing identical things and claim to deliver identical functionality. One of the easiest ways to categorize the true desktop virtualization guys, as opposed to the terminal services or application streaming vendors, was to figure out exactly where the virtual machine (VM) was running.

So I split it three ways. There are three sensible places to run a desktop virtual machine. One is on the physical client, which gives you a whole bunch of benefits around the ability to encrypt and lock down a laptop and manage it remotely. One is to run it on the server, which is the tried-and-tested VDI or Citrix XenDesktop method. That’s appropriate for a lot of use cases, but when you run out of server capacity or storage in the server-hosted desktop virtualization model, a lot of companies would like elastic access to off-site resources.

This is particularly appropriate, for example, for retailers who see a big balloon in staffing -- short-term and temporary staffing around the holiday seasons, although possibly not this year -- or for companies that are doing things off-shore and want to provide developer desktops in a very flexible way, or in education where companies get big summer classes, for example, and want to fire up a whole bunch of desktops for their students.

This kind of elastic provisioning is exactly what we see on the server virtualization side around cloud bursting. On the desktop side, you might want to do cloud bursting. You might even want to permanently host those desktops up in the cloud with a hosting provider, and you want exactly the same things that you want from a server cloud deployment. You want a very, very clean interface between the cloud resources and the enterprise resources, and you want very, very granular chargeback and billing.
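
For a sense of what that granular chargeback might look like, here is a back-of-the-envelope Python sketch that meters desktops per day and rolls them up into a monthly bill per department. The rate and record format are invented for illustration.

```python
# Illustrative per-desktop, per-day metering rolled up into a monthly bill.
from collections import defaultdict

RATE_PER_DESKTOP_DAY = 1.50  # assumed provider rate, USD

def monthly_chargeback(usage_records: list[tuple[str, str, int]]) -> dict[str, float]:
    """usage_records: (department, user, days_active) -> cost per department."""
    bill: dict[str, float] = defaultdict(float)
    for department, _user, days_active in usage_records:
        bill[department] += days_active * RATE_PER_DESKTOP_DAY
    return dict(bill)

# Seasonal burst: retail adds 200 temporary desktops for 20 days in December.
records = [("retail-ops", f"temp{i}", 20) for i in range(200)]
records += [("back-office", f"staff{i}", 31) for i in range(50)]
print(monthly_chargeback(records))  # {'retail-ops': 6000.0, 'back-office': 2325.0}
```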

And so, we see cloud-hosted desktop virtualization as a special case of server-hosted desktop virtualization. Really, Desktone has been the pioneer in defining what that interface should look like, where the enterprise data should reside, where AD, with its authentication and authorization functions, should reside, and what gets handled by the service provider and how that gets handled by the service provider.

Desktone isn’t the only company in cloud-hosted desktop virtualization, but it’s certainly the best-known and it’s certainly done the best job of articulating what the pieces will look like and how they’ll work together.

Fisher: Great.

Chalmers: It’s a very impressive finch.

Fisher: Always appreciate it. Dana and Robin, do you have any additional comments on what Rachel had to say?

New era in compute resources

Dana Gardner: Yes, I think we’re entering a new era in how people conceive of compute resources. To borrow on Rachel’s analogy, a lot of these finches have been around, but there hasn’t been a lot of interest in terms of an environment where they could thrive. What’s happening now is that organizations are starting to re-evaluate the notion that a one-size-fits-all PC paradigm makes sense.

We have lots of different slices of different types of productivity workers. As Rachel mentioned, some come and go on a seasonal basis, some come and go on a project basis. We’re really looking at a slice-and-dice productivity in a new way, and that forces the organization to really re-evaluate the whole notion of application delivery.

If we look at the cost pressures that organizations are under, recognizing that it’s maintenance and support, and risk management and patch management that end up being the lion’s share of the cost of these systems, we’re really at a compelling point where the cost and the availability of different alternatives has really sparked sort of a re-thinking.

And a lot of general controlled-management security risk avoidance issues require organizations to increasingly bring more of their resources back into a server environment.

But, if you take that step in virtualization and you look at different ways of slicing and dicing your workers, your users, if you can virtualize internally -- well, then we might as well take the next step and say, “What should we virtualize externally?” “Who could do this better than we can, at a scale that brings the cost down even further?”

This is particularly relevant if they’re commodity level types of applications and services. It could be communications and messaging, it could be certain accounting or back office functions. It just makes a lot of sense to start re-evaluating. What we haven’t seen, unfortunately, is some clear methodologies about how to make these decisions and boundaries inside of organizations with any sort of common framework or approach.

It’s still a one-off company by company approach -- which workers should we keep on a full-fledged PC? Who should we put on a mobile Internet device, for example? Who could go into a cloud-based applications hosting type of scenario that you’ve been describing?

It’s still up in the air and I’m hoping that professional services and systems integrators over the next months and years will actually come up with some standard methodologies for going in and examining the cost-benefit analysis, what types of users and what types of functions and what types of applications it makes sense to put into these different finch environments.

Fisher: Absolutely. I couldn’t agree more and I’ve always been one who talks about use cases. It all comes down to the use cases.

The technology is great and the innovation is great, but especially in the case of desktop usage you really have to figure out what people are doing, what they need to do, and what they don’t need to be doing at work but are currently doing. That’s the whole notion of how the consumerization piece fits in and how personal life melds with business life. You can say this person doesn’t need to do that, but if they’re doing it today, you need to figure out how to make that work and take it into account.

So, I agree with you. It’d be fantastic to get to a world where there was just a better way to have better knowledge around use cases and which ones fit with which delivery models.

Chalmers: I think that’s a really crucial point. Just as server and workload virtualization have transformed the way we can move desktops and servers around, I see a lot of really fascinating work being done around user virtualization.

Jeff, you talked a lot about the issue of having user data stored separately from the dynamic run-time data. I know you’ve done a lot of work with AppSense within Merrill Lynch. There’s a group of companies -- AppSense, RTO Software, RES, and Sansa -- that are all doing really interesting work around maintaining that user data in a stateful way, but also enabling IT operators to be able to identify groups of users who may need different form factors for their desktop usage and for their work profile.

Buy side perspective

Gardner: We’ve been looking at this from the buy-side perspective, where it makes a lot of sense, but there’s also some significant momentum on the sell side: organizations that are perhaps traditional telcos, co-location or hosting organizations, cloud providers, or an ecology of providers that actually run on someone else’s cloud but have a value-added services capability of some sort.

These are on the sell side and they’re looking for opportunities to increase their value, not just to small to medium-sized businesses but to those larger enterprises. They’re going to be looking and trying to define certain classes of users, certain classes of productivity and work and workflow, and packaging things in a new and interesting way.

That’s the next shoe to fall in all of this for the type of customer that you have there at Desktone. It’s incumbent upon them now to start doing some packaging and factoring the cost savings, not just on an application-by-application basis but more on a category of workflow and business-process work, and to do the integration on the back end.

Perhaps that will involve multiple cloud providers, multiple value-added services providers, and they then take that as a solution sell back into the enterprise, where they can come up with a compelling cost-per-user-per-month formula. It’s recurring revenue. It’s predictable. It will probably even go down over time, as we see more efficiency driven into these cloud-based provisioning and delivery systems.

So, there’s a whole new opportunity for the sellers of services to package, integrate, add value, and then to take that on a single-solution basis into a large Fortune 1000 organization, make a single sale, and perhaps have a customer for 10 or 15 years as a result.

Chalmers: It is a tremendously exciting opportunity for our managed-hosting provider clients. It’s the dominating topic of conversation at a lot of the events that we run for that group. Traditionally, a really, really great managed hoster that delivers an absolutely fantastic service will become the beloved number one vendor of choice of the IT operator.

If that managed hosting provider can deliver the same quality of service on the desktop, then they will be the beloved number one vendor of everybody up to and including the CIO and the CEO. It’s a level of exposure they’ve just never been able to aspire towards before.

Robin Bloor: I think that’s probably right. One of the things that is really important about what’s happening here with the virtualization of the desktop is the very simple fact that desktop costs have never been well under control. The interesting thing is that when the end users we’ve been talking to earlier this year look at their user populations, they normally come to the conclusion that something like 70 or 80 percent of PC users are actually using the PC in a really simple way. The virtualization of those particular units is an awful lot easier to contemplate than for the sophisticated population of heavy workstation users and so on.

With the trend that’s actually in operation here, and especially with the cloud option where you no longer need to be concerned about whether your data center actually has the capacity to do that kind of thing, there’s an opportunity with a simple investment of time to make a real big difference in the way the desktop is managed.

Fisher: I totally agree. Thanks, Robin. All right. Let’s shift gears and talk to Dana Gardner. Dana is the president of Interarbor Solutions, and is known for identifying software and enterprise infrastructure trends and new IT business growth opportunities.

During the last 18 years he’s refined his insights as an industry analyst and news editor, and lately he’s been focused on application development and deployment strategies and cloud computing.

So Dana, you’ve been covering us for a while on your blog. For those of you who don’t know, Dana’s blog is called BriefingsDirect. It’s a ZDNet blog. You’ve covered our funding and platform launch, and some of our partner announcements, and we’ve had some time to sit down as well and chat.

In a posting this summer you wrote about Pike County -- a school district in Kentucky where IBM has successfully sold a 1,400-seat DaaS deployment. That’s something that we’re going to dive deeper into on a couple of the webinars in the series.

You’ve stated a broad affection for the term “cloud computing” and all that sticks to it nowadays, which would mean broad affection, too, for DaaS. Can you elaborate on that?

Entering transitional period

Gardner: Well, sure. As I said, we’re entering a transitional period, where people are really re-thinking how they go about the whole compute and IT resources equation. There’s almost this catalyst effect or the little Dutch boy taking his finger out of the hole in the dike, where the whole thing comes tumbling down.

When you start moving toward virtualization and you start re-thinking about infrastructure, you start re-thinking the relationship between hardware and software. You start re-thinking the relationship between tools and the deployment platform, as you elevate the virtualization and isolate applications away from the platform, and you start re-thinking about delivery.

If you take the step toward terminal services and delivering some applications across the wire from a server-based host, that continues to tip this a little bit toward, “Okay, if I could do it with a couple of apps, why not look at more? If I could do it with apps, why not with desktop? If I can do it with one desktop, why not with a mobile tier?”

If I’m doing some web apps, and I have traditional client-server apps and I want to integrate them, isn’t it better to integrate them in the back-end and then deliver them in a common method out to the client side?

So we’re really going through this period of transformation, and I think that virtualization has been a catalyst to VDI and that VDI is therefore a catalyst into cloud. If you can do it through your servers, somebody else can do it through theirs.

If we’ve managed the wide-area network issues, and if we have performance that’s acceptable against most of the application performance criteria for the bell curve of users, the productivity workers, we just go down this domino line of one effect after another.

When we start really seeing total costs tip as a result, the delta between doing it yourself and then doing it through some of these newer approaches is just super-compelling. Now that we’re entering into an economic period, where we’re challenged with top-line and bottom-line growth, people are not going to take baby steps. They’re going to be looking for transformative, real game-changing types of steps. If you can identify a class of users and use that as a pilot, if you can find the right partners for the hosting and perhaps even a larger value-added services portfolio approach, you start gaining the trust, you start seeing that you can do IT at some level but others can do it even better.

The cloud providers are in the business of reducing their costs, increasing their utilization, exploiting the newer technologies, and building data centers primarily with a focus on this level of virtualization and delivery of services at scale with performance criteria. Then, it really becomes psychology and we’re looking at, as you said earlier, the trust level about where to keep your data and that’s really all that’s preventing us now from moving quite rapidly into some of these newer paradigms.

The cost makes sense. The technology makes sense. It’s really now an issue of trust, and so it’s not going to happen overnight. But with baby steps and the domino effect -- as you work toward VDI internally and toward the cloud with a couple of apps and certain classes of users -- before long that whole dike is coming down, and you might see only a minority of your workers actually doing things in the conventional client-server, full-PC, local run-time and data-storage mode.

I think we’re really just now entering into a fairly transformative period, but it’s psychologically gaining ground rapidly.

Fisher: Yes, definitely. Rachel, Robin, any thoughts on Dana’s comments?

Psychological issues

Chalmers: I think that’s exactly right, and I think the psychological issues are really important, as Dana has described them. One of the huge barriers to adoption of earlier models of this kind of remote desktop, like terminal services, has been just that they’re different from having a full, rich Windows user experience in front of you.

The example people keep returning to is the ability to have a picture of your kids as your desktop wallpaper. It seems so trivial from an IT point of view, but just the ability to personalize your own environment in that way turned out to be a major obstacle to adoption of the presentation-server model.

You can do that in a virtual desktop environment. You can serve that exact same desktop environment to the same employee, whether she’s working from San Francisco or London. Because the VDI deployment model offers the same, and in some ways better, experience to that employee, it becomes much easier to persuade organizations to adopt this model and the cost savings that come along with it.

So, we underplay the psychological aspects at our peril. People are human beings and they have human foibles, and technology needs to work around that rather than assuming that it doesn’t exist.

Bloor: Yes, I’d go along with that. What you’ve actually got here is a technology where the ultimate user won’t necessarily know whether they’ve got a local PC. Nowadays, you can buy devices where the PC itself is buried in the screen.

So, they may psychologically, in one way or another, have some kind of feeling of ownership of their environment, but if they get the same environment virtually that they had physically, they’re not going to object. Certainly, some of the earlier experiences that users have had are that problems go away. The number of desk-side visits required for support, when all you’ve got is a thin client device on the desktop, diminishes dramatically.

The user suddenly has the responsibility for various things that they would do within their own environment lifted completely from them. So, although you don’t advertise it as such, there’s actually a win for the user in this.

Chalmers: That’s exactly right, and fewer desktop visits -- fewer IT guys coming around to reboot your blue-screened desktop -- translates directly into increased productivity.

Fisher: Yes, and what we like to talk about at Desktone is just this notion of anytime, anywhere. It’s one thing to get certain limited apps and services. It’s another thing to be able to get your PC environment, your corporate persona everywhere you go.

If you need to work from home for a couple of days a week, or in emergency situations, it’s great to be able to have that level of mobility and flexibility. So, we totally agree.

Now let’s move over to Robin Bloor. He's a partner at Hurwitz and Associates. He’s got over 20 years’ experience in IT analysis and consultancy and is an influential and respected commentator on many corporate IT issues. His recent research is focused on virtualization, desktop management, and cloud computing.

Robin, in your post about Desktone on your blog -- “have Mac will blog,” a title I love -- you mentioned that you were surprised to see the DaaS or cloud value prop for client virtualization emerge this early on. You mentioned that you found our platform architecture diagram to be extremely helpful in explaining the value prop, and I’d just like you to provide some more color around those comments.

Tracking virtualization

Bloor: Sure. I really came into this late last year when, in one way or another, I was looking at the various things that were happening in terms of virtualization. I’d been tracking the escalating power of PC CPUs and the fact that, by and large, in a lot of environments the PC is hardly used.

If you do an analysis of what is happening in terms of CPU usage, then the most active thing that happens on a PC is that somebody waves their mouse around or possibly somebody is running video, in which case the CPU is very active. But it became obvious that you could put a virtualized environment on a PC.

When I realized that people were doing that, I got interested in the way they were actually doing it, and there are a lot of things out there, if you actually look. It absolutely stunned me that a cloud offering became available earlier this year, because that meant somebody would have had to be thinking about this two years ago in order to put together the technology that would enable such an offering.

So just look at the diagram and you can certainly see why, from the corporate point of view, if you’re somebody that’s running a thousand desktops or more, it’s a problem. It is a problem in terms of an awful lot of things, but mostly it’s a support issue and a management issue. When you get an implementation that involves changing the desktop from a PC to a thin client, and you don’t have to put anything into the data center, it improves.

You’ve now got a situation where you don’t need cages in the data center running PC blades or running virtualized blades to actually provide the service. You don’t need to implement the networking stuff, the brokering capability, boost the networking in case it’s clashing with anything else, or re-engineer networks.

All you do is you go straight into the cloud and you have control of the cloud from the cloud. It’s not going to be completely pain free obviously, but it’s a fairly pain-free implementation. If I were in the situation of making a buying decision right now, I would investigate this very, very closely before deciding against it, because this has got to be the least disruptive solution. And if the apparent cost of ownership turns out to be the same or less than any other solution, you’re going to take it very seriously.

Fisher: Absolutely. Rachel, Dana, thoughts and comments?

Chalmers: I agree, and I love this diagram. It’s the one that really conveyed to me how cloud-hosted desktop virtualization might work, and what the value prop is to the IT department, because they get to keep all the stuff they care about -- all the user data, all the authentication and authorization, all of the business apps. All they push out is support for those desktops, which frankly had been pushed out anyway.

There’s always one guy or gal in the IT organization who is hiking around from desktop to desktop installing antivirus or rebooting machines. Now, instead of hiking around the offices, that person is employed by the service provider, sitting in a comfy chair, and being ergonomically correct.

Rational architecture

Gardner: Yes, I would say that this is a much more lucid and rational architecture. We’ve found ourselves, over the past 15 or 20 years, sort of the victim of a disjointed market rollout. We really didn’t anticipate the role of the Internet when client-server came about, and client-server came about quickly, just after local area networks (LANs) were established.

We really hadn’t even rationalized how a LAN should work properly before we were off and running, bringing in browsers and TCP/IP stacks. So, in a sense, we've been tripping over and bouncing around from one very rapid shift in technology to another. I think we’re finally starting to step back and say, “Okay, what’s the real rational, proper architectural approach to this?”

We recognize that it’s not just going to be a PC on every desktop. It’s going to be a broadband Internet connection in every coat pocket, regardless of where you are. That fundamentally changes things. We’re still catching up to that shift.

When I look at a diagram like Desktone’s, I say, “Ah-ha!” Now that we fairly well understand the major shifts that have occurred in the past 20-25 years, if we could start from a real computer-science perspective, and if we could look at it rationally from a business and cost perspective, how would we properly architect how we deploy and distribute IT resources? We’re really starting to get to a much more sensible approach, and that’s important.

Bloor: Yes, I would completely go with Dana on that. From an architect’s point of view, if nobody had influenced you in any way and you were just asked to draw out a scheme for virtualizing services to end users, you would probably head in this direction. I have no doubt about it. I’ve been an architect in my time, and it’s just very appealing. It looks like what Desktone DaaS has here is resources under control, and we’ve never had that with a PC.

Fisher: Well, that was great, and I really appreciate you guys taking the time to answer my questions. With the remaining 10 minutes, I’d like to turn it over for some Q&A.

The question coming in has to do with server-based computing app delivery with respect to this model.

This is something that comes up all the time. People say, “We’re currently using terminal services or presentation server,” which is obviously what they use for app deployment. How does that application deployment model fit into this world? To kick off the discussion, I’ll tell you that at Desktone we view what we’re doing very much as the virtualization of the underlying environment -- the actual PC itself and the core OS.

That doesn’t change the fact that there are still going to be numerous ways to deploy applications. There’s local installation. There’s local app virtualization. There’s the streaming piece of app virtualization. And, of course, there’s server-based computing which is, by far, the most widely used form of virtualized application delivery.

Not to mention the fact that in our model there is a private LAN connection between the enterprise and the service provider. In some cases, the latency of that connection is going to warrant having particularly chatty applications still hosted back in the enterprise data center on Citrix or Microsoft terminal servers. So, I don’t view this as a solution that cannibalizes traditional server-based computing. What do you guys think?

Chalmers: I think that’s exactly accurate. You mentioned right at the front of the call that Citrix is an investor in Desktone. Clearly the VDI model itself is one that extends the application of terminal services from traditional task workers to all knowledge workers -- those people who are invested in having a picture of their kids on the desktop wallpaper.

I think cloud-hosted desktop virtualization extends that again. So, for example, if you’re running a very successful terminal services application and you don’t want to rip that out -- very sensible, because ripping and replacing is much more expensive than just maintaining a legacy deployment of something like that -- you can drop in XenDesktop. XenDesktop can talk quite happily to what is now XenApp, the presentation server deployment.

It can talk quite happily to a Desktone back-end and have all of its VDI virtual desktops hosted on a hosting provider. If you’ve got a desk full of Wall Street traders, it can also connect them up to blade PCs, dedicated resources that are running inside the data center.

So XenDesktop is an example of the kind of desktop connection broker you’re going to see: as happy supporting traditional server-based computing or the blade PC model as being the front end for a true VDI deployment.

Bloor: Yes, I’d go along with that. One of the things that’s interesting in this space is that there are a number of server-based computing implementations that have been, what I’ll call, early attempts to virtualize the PC, and you may get a drift away from some of those implementations. I know certain banks did this purely for security reasons.

You know the virtualized PC is as secure as a server is. So you may get some drift from one kind of implementation to another, but in general, what’s going to happen is that the virtual PC is just the same as a physical PC. So, you just continue to do what you did before.

Fisher: Absolutely. I do agree that there definitely will be a shift and that again – back to the use cases -- people are going to have to say, “Okay, here are the four reasons we did server-based computing,” not, “We did server-based computing because we thought it was cool.”

Maybe in the area of security, as Robin mentioned, or some other areas, those reasons for deployment go away. But certainly, dealing with latency over LAN, depending on where the enterprise data center sits, where the user sits and where the hosting provider sits, there very well may still be a compelling need to use server-based computing.

Okay. We’ve got about five minutes left. There was an interesting question about disaster recovery (DR), using cloud-hosted desktops as DR for VDI, and this is a subject that’s close to my heart. It will be interesting to hear what you guys have to say about it. There is actually already the notion of some of our service-provider partners looking at providing desktop disaster recovery as a service. It’s almost like a baby step to full-blown cloud-hosted desktops.

Maybe you don’t feel comfortable having your users’ primary desktops hosted in the cloud, but what about a disaster recovery instance, in case a PC blue-screens and is not recoverable, and the user is in some kind of time-critical role and needs to get back up and running?

Or, as is probably more commonly thought of, what if they’re the victims of some sort of natural disaster and need to get access to an instance of the corporate desktop? What do you guys think about that concept?

Bloor: There are going to be a number of instances where people just go to this, particularly banks, where, because of the kind of regulatory or even local standards they operate under, they have to have a completely dual capability. It’s a lot easier to have dual capability if you’re going virtual, and I’m not sure that you would necessarily have the disaster recovery service virtual and the real service physical. You might have them both virtual, because you can do that.

This is just a matter of buying capacity, and the disaster-recovery capacity is only required at the point in time when you actually have the disaster. So, it’s got to be less expensive. Certainly, when you’re thinking about configuration and change management for those environments, when you’ve got completely dual environments, this makes the problem a lot easier.
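
A rough worked comparison makes Robin’s point about the economics. The Python sketch below contrasts a fully duplicated physical DR environment with cloud-hosted standby desktops that are only billed at the active rate during an actual failover; every figure is an illustrative assumption, not vendor pricing.

```python
# Illustrative DR cost comparison: idle duplicate hardware vs. cloud standby
# capacity that is only paid for at the full rate during an actual disaster.
SEATS = 1000
PHYSICAL_DUP_COST_PER_SEAT_YEAR = 400.0   # assumed: idle duplicate hardware + upkeep
CLOUD_STANDBY_PER_SEAT_MONTH = 2.0        # assumed: keeping images ready, powered off
CLOUD_ACTIVE_PER_SEAT_DAY = 3.0           # assumed: rate while desktops actually run
FAILOVER_DAYS_PER_YEAR = 5                # expected days of actual disaster use

physical = SEATS * PHYSICAL_DUP_COST_PER_SEAT_YEAR
cloud = SEATS * (CLOUD_STANDBY_PER_SEAT_MONTH * 12
                 + CLOUD_ACTIVE_PER_SEAT_DAY * FAILOVER_DAYS_PER_YEAR)
print(f"Duplicate physical DR: ${physical:,.0f}/yr; cloud-hosted DR: ${cloud:,.0f}/yr")
# Duplicate physical DR: $400,000/yr; cloud-hosted DR: $39,000/yr
```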

Gardner: I think there are literally dozens of different security and risk-avoidance benefits to this model. There’s the business continuity issue, and the fact that cloud providers will have redundancy across data centers and across geographies. There’s also intellectual-property risk management: you can control what property is distributed and how, it’s kept centrally managed, and check-in and check-out can be rigorously controlled. And then there’s an audit trail as to who has been there, so there are compliance and regulatory benefits.

There’s also control over access and privileges, so that when someone changes jobs, it’s much easier to track what applications they would and wouldn’t get, in that you’ve basically re-created their desktop from scratch the day they start the new job. So, the risk-compliance and avoidance issues are huge here, and for those types of companies or public organizations where the risk-avoidance issue is huge, we’ll see more of this.

I think that the Department of Defense and some of the intelligence communities have already moved very rapidly toward all server-side control, and for the same reasons it would make sense for a lot of businesses too.

Chalmers: Disaster recovery is always top of mind this time of year, because the hurricanes come around just in time for the new financial round of budgeting. But really it’s a no-brainer for a small business. For the companies that I talked to that are only running one data center, the only thing that they’re looking at the cloud for right now is disaster recovery, and that applies as much to their desktop resources as to their server resources.

Fisher: Great. Well, we are just about out of time, so I want to close out. First, lots of information about what we’re doing at Desktone is up on our website, including an analyst coverage page under the News and Events section, where you can find more information about Robin’s, Dana’s, and Rachel’s thinking, as well as other analysts’.

We also maintain a blog, and we have a number of webinars coming up to round out the series. We’ll be talking to Pike County, a customer of IBM’s and a user of the Desktone DaaS solution, and we’ll be speaking with our partner IBM. We’ll also have our COO, Paul Gaffney, on a couple of our webinars as well.

So with that I will thank our terrific panel. Rachel, Dana and Robin, thank you so much for joining and for a fantastic conversation on the subject, and thank you so much everyone out there for attending.

View the webinar here.

Transcript of a recent webinar, produced by Desktone, on the future of cloud-hosted PC desktops and their role in enterprises.

Tuesday, September 02, 2008

Interview: HP's Virtualization Services Honcho John Bennett on 'Rethinking Virtualization'

Transcript of BriefingsDirect podcast with Hewlett-Packard's John Bennett on virtualization and its role in the enterprise.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, a sponsored podcast interview about rethinking virtualization. [See news from HP on virtualization, panel discussion, IDC white paper.]

Virtualization in information technology (IT) has become a very hot topic in the last several years, and we're approaching a tipping point in the market, where virtualization's adoption and acceptance is really rampant, and it's offering some significant benefits in terms of cost and performance.

So, we're going to talk about rethinking multiple tiers of virtualization for applications, infrastructure, desktop, and even some other types of uses.

We're also going to look at this through the lens of a contextual approach to virtualization, not simply a tactical standalone benefit, but in the context of larger IT transformation activities. These include application modernization, data center consolidation, next generation datacenter (NGDC) architectures, energy considerations, and of course, trying to reduce the total cost of IT as a percent of revenues for large organizations.

To help us sort through these issues of context and strategy for virtualization, as well as to look at a number of initiatives that Hewlett-Packard (HP) is now embarking upon, we're joined by John Bennett. John is the worldwide director of data center transformation solutions and also HP's Technology Solutions Group (TSG) lead for virtualization. Welcome to the show, John.

John Bennett: Thank you very much, Dana. It's a pleasure to be with you today.

Gardner: When we talk about virtualization as a red-hot trend, just how big a deal is virtualization in the IT market right now?

Bennett: Virtualization is certainly one of the major technology-oriented trends that we see in the industry right now, although I'm always reminded that virtualization isn't new. It's been available as a technology going back 30-40 years.

We see a great deal of excitement, especially around server virtualization, but it's being extended to many other areas as well. We see large numbers of customers, certainly well over half, who have actively deployed virtualization projects. We seem to be at a tipping point in terms of everyone doing it, wanting to do it, or wanting to do even more.

Gardner: Are they doing this on a piecemeal basis, on a tactical basis? Is it being done properly in the methodological framework across the board? What sort of a market trend are we looking at in terms of adoption pattern?

Bennett: In terms of adoption patterns, especially for x86 server virtualization, we see virtualization being driven more as tactical or specific types of IT projects. It's not uncommon to see customers starting out, either to just reduce costs, to improve the efficiency in utilization of the assets they have, or using virtualization to address the issues they might have with energy cost, energy capacity or sometimes even space capacity in the data center. But, it's very much focused around IT projects and IT benefits.

The interesting thing is that, as customers get engaged in these projects, their eyes start to open up in terms of what else they can do with virtualization. Customers who've already done some virtualization work realize there are interesting manageability and flexibility options for IT: "I can provision servers or server assets more quickly. I can be a little more responsive to the needs of the business. I can do things a little more quickly than I could before." And those clearly have benefits to IT, with attendant value to the business.

Then, they start to see that there are interesting benefits around availability, being able to reduce or eliminate planned downtime, and also to respond much more quickly and expeditiously to unplanned downtime. That then lends itself to the conversation around disaster recovery, and into business continuity, if not continuous computing and disaster tolerance.

It's a very interesting evolution of things with increasing value to the business, but it's very much stepwise, and today tends to be focused around IT benefits. We think that's kind of missing the opportunity.

Gardner: So we've been on this evolution. As you say, virtual machines and hypervisors -- this approach of isolating activity at a level above the actual metal and the binaries -- have been around for some time. Why is this catching on now? Is it not just economics? And, if we're talking about business outcomes, why are they important, and why is virtualization being applied to them now?

Bennett: It really did start with economics, but the real business value to virtualization comes in many other areas that are much more critically important to the business.

One of the first is having an IT organization that is able to respond to dynamically changing needs in real-time, increasing demands for particular applications or business services, being able to throw additional capacity very quickly where it's needed, whether that's driven by seasonal factors or whether it's driven by just systemic growth in the business.

We see people looking at virtualization to improve the organization's ability to roll out new applications in business services much more quickly. We also see that they're gaining some real value in terms of agility and flexibility in having an IT organization that can be highly responsive to whatever is going on in the business, short term and long term.

We also see, as I highlighted earlier, that it really does connect into business continuity, which we see in many of the market research surveys we do year after year. It continues to be a top-of-mind concern for CEOs and CIOs alike.

Gardner: Perhaps we're at this point in time where IT has become so essential to so many aspects of how businesses operate -- the ability to make IT dynamic and responsive, to have redundancy and failover and many of the mission-critical aspects that we expect of certain transactional systems -- that we're now able to extend almost anything we do with IT into a virtualized environment.

Bennett: Well, it's actually being supported through any environment, and it's why we at HP have such a strong focus on business technology. There are very few modern enterprises, whether they're private or public entities, that really could address their mission and business goals without IT.

It is just a completely fundamental fabric of the business today. And, having that environment be responsive, protected, reliable, and delivering quality of service are key attributes of that environment.

This is why we see, next-generation data centers and adaptive infrastructure from HP as being key to that, and it's why we speak about the idea of data center transformation. If that's the IT environment you want, including virtualization, how do you get there from wherever you are at?

Gardner: I suppose it's also important to point out that we're not just talking about virtualization, but we're also talking about mixtures, where there are going to be plenty of physical infrastructure and technology in place, but increasingly virtualized instances here and there. I suppose it's managing these together that is the most important discussion at this point in time.

Bennett: It certainly is one of the more topical points right now. What we see is that customers start to deploy virtualization more broadly, and, as they want to run more and more of their applications in virtualized environments, two challenges arise.

One of them is diversity. Customers are accustomed to diversity in the infrastructure, but now we have diversity at the virtualization layer: the virtual machines that they're using and the number of suppliers involved. Diversity brings complexity, and complexity usually brings increased risk.

One aspect of that diversity is control and management. You have different virtual machines, each with its own management tools and paradigms. How do you manage from an applications, quality-of-service, or service-level-agreement point of view across the physical and virtual resources and infrastructure that are being used to deliver those services to the business? And how do you deal with managing physical infrastructure from different manufacturers, and virtual resources from different manufacturers?

Another complication that comes up from a control and management point of view, as customers strive to use virtualization more pervasively in the data center, is that you have to deal with the skill sets of the people you have in the IT organization, as well as the resources available to you to help implement these projects.

One of the virtues of looking at virtualization more comprehensively is that you're actually able to free up resources to focus more on business services and business priorities and less on management and maintenance.

If you look at virtualization more strategically, you say it's not just the servers, but it's my storage and network environment around them. It's my management tools and processes. It's how I do everything together.

When I look at it comprehensively, I not only have a very clean set of controls and procedures in place for running and managing the data center, but now I have the opportunity to start making significant shifts in resources, away from management and maintenance and into business priorities and growth.
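
One common way to tame that kind of diversity is an adapter layer, so that higher-level tooling manages VMs without caring which vendor's hypervisor runs them. The Python sketch below shows the shape of the idea; the vendor classes are stubs standing in for real hypervisor APIs, not actual product interfaces.

```python
# Sketch of one control plane over heterogeneous hypervisors via an adapter
# interface. Vendor classes are stubs, not real APIs.
from abc import ABC, abstractmethod

class HypervisorAdapter(ABC):
    @abstractmethod
    def list_vms(self) -> list[str]: ...
    @abstractmethod
    def migrate(self, vm: str, target_host: str) -> None: ...

class VendorAAdapter(HypervisorAdapter):
    def list_vms(self) -> list[str]:
        return ["app-srv-01", "app-srv-02"]        # stub: would call vendor A's API
    def migrate(self, vm: str, target_host: str) -> None:
        print(f"[vendor A] moving {vm} -> {target_host}")

class VendorBAdapter(HypervisorAdapter):
    def list_vms(self) -> list[str]:
        return ["db-srv-01"]                       # stub: would call vendor B's API
    def migrate(self, vm: str, target_host: str) -> None:
        print(f"[vendor B] moving {vm} -> {target_host}")

def rebalance(adapters: list[HypervisorAdapter], target_host: str) -> None:
    """Single operator workflow that spans all hypervisor flavors."""
    for adapter in adapters:
        for vm in adapter.list_vms():
            adapter.migrate(vm, target_host)

rebalance([VendorAAdapter(), VendorBAdapter()], "host-07")
```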

Gardner: So we are faced with a potential tipping point, but I suppose that also brings about a new level of risk, because you're moving from a tactical implementation into a variety of implementations, at the application, the infrastructure, and the server levels. Increasingly we're seeing interest in desktop virtualization, but we're also seeing a mixture of suppliers and technologies, and we're also seeing this in the context of other initiatives, with the goal being transformation.

It seems that if you don't do this all properly with some sort of a framework, or at least a conscious approach of managing this from beginning to end in a lifecycle mentality, there could be some serious pitfalls. You could actually stumble and subvert those benefits that you're looking to enjoy.

Bennett: Yes, we see both pitfalls, i.e., problems that arise from not taking a comprehensive approach, and we see missed opportunities, which is probably the bigger loss for an organization. They could see what the potential of virtualization was, but they weren't able to realize it, because their implementation path didn't take into account everything they had to in order to be successful.

This is where we introduce the idea of rethinking virtualization, and we describe it as rethinking virtualization in business terms. It means maximizing the business impact first by taking a business view of virtualization, then maximizing the IT impact by taking a comprehensive view of virtualization in the data center, and then maximizing the value to the organization by leveraging virtualization for client implementations, where it makes sense.

But, it's always driven from a business perspective -- what is the benefit to the business, both quantitative and qualitative -- and then drilling down. It's like peeling an onion, if I can borrow the analogy from the "Shrek" movie. You go from, "Okay, I have this business service. This business service is delivered through virtual and physical resources, which means I need management, control, and governance of both physical and virtual resources."

And then, underneath that I want to be able to go from insight and control, into management and execution. I want to be able to drill down from the business processes and the business service management and automation tools into the infrastructure management, which in turn drills down into the infrastructure itself.

Is the infrastructure designed to be run and operated in a virtualized environment? Is it designed to be managed from an energy-control point of view, for example? Is it designed to be able to move virtual resources from one physical server to another, without requiring an army of people?
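
To make that last point concrete, here is a minimal sketch of what moving a running virtual machine between physical servers looks like with the open-source libvirt Python bindings. The host URIs and domain name are hypothetical, and a real migration also depends on shared storage and compatible hardware, which this sketch assumes.

    import libvirt

    # Connect to the source and destination hypervisors (hypothetical hosts).
    src = libvirt.open('qemu+ssh://host-a.example.com/system')
    dst = libvirt.open('qemu+ssh://host-b.example.com/system')

    # Look up the running virtual machine by name (hypothetical domain).
    dom = src.lookupByName('finance-app-vm')

    # Live-migrate the domain to the destination host; the guest keeps
    # running during the transfer, so no army of people -- and no outage
    # window -- is needed.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    src.close()
    dst.close()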

So, part of the onus is on HP in this case to make sure that we're integrating and implementing support for virtualization into all of the components in the data center, so that it works and we can take advantage of it. But, it's up to the customer also to take this business and data center view of virtualization and look at it from an integration point of view.

If you do virtualization as point projects, what we've seen is that you end up with management tools and processes that sit outside the domain of your historical investments, whether in IT service management under the Information Technology Infrastructure Library (ITIL) or in business service management.

We see virtual environments that are disconnected from the insight, controls, governance, and policy procedures put in place for IT. This means that if something happens at the business-services level, you don't quite know how to go about fixing it, because you can't locate it. That's why you really want to take this integrated view from a business-services point of view, from an infrastructure and infrastructure-management point of view, and also in terms of your client architectures.

Gardner: Now, as we are rethinking infrastructure in the context of virtualization, we are also looking for these business outcomes. Are we at the point yet where business leaders are saying, "We need virtualization?" Have they connected the dots yet, or do they just know what they want as business outcomes and don't really care whether virtualization is how they get there?

Bennett: From a business leader's point of view, they don't care about virtualization. Whether it's a CEO, a line-of-business manager, or even a CFO, their focus is on, "What are the business priorities? What is our strategy for this business or this organization? Are we going to try to grow the business organically, grow it through acquisitions, or drive a lot of product or service innovation? I need an organization that is going to be responsive in rolling those out."

And, of course, there are always pressures to reduce cost and reduce risk that apply throughout the business, including in IT. But they will not tell you that virtualization is what you have to do if you're in IT.

IT wants to deliver these kinds of benefits: to be able to do things quickly, to dynamically put resources where they're needed, and to mitigate the risks in the data center environment, whether the risk is related to power and cooling, to the capacity of an individual server and its ability to support a particular application, or to the people and processes that can cause downtime.

Those are IT projects, and for IT, virtualization is a fascinating technology, which allows them to address multiple sets of data center issues and provide the benefits that the business is looking for. It's revolutionary in that sense. It's pretty cool.

Gardner: This is not another "silver bullet," is it? We're really talking about something that's fundamental and that is transformative.

Bennett: Oh, absolutely. We believe that virtualization is a very important attribute of a next-generation data center (NGDC). It has been an instrumental part of our adaptive infrastructure, which has defined our view of an NGDC for quite a while, and we see virtualization projects as core to successful transformation initiatives as well.

Gardner: Nowadays, and actually for several years, incremental improvements in IT don't get the funding or the attention. We really need dramatic improvements to win the investment and overcome the inertia. Even at the tactical level, what sort of benefits are organizations that you're familiar with enjoying, and what returns are they getting, from their virtualization activities?

Bennett: It really depends on whether you're looking at it from a business point of view or from an IT point of view.

Gardner: Let's look at it both ways.

Bennett: From an IT point of view, it's clear that they can decrease capital costs, and they can decrease the operating expenditure (OPEX) costs associated with depreciation of assets, by getting much better utilization of the assets they have. They can either get rid of excess equipment or, as they do modernization projects, acquire less infrastructure to run the environment, once it is effectively virtualized.

When they blend it with integrated management, they can manage the physical and virtual resources together and build an IT environment that really supports the dynamics of virtualization.

They can lower the cost of IT operations, implicitly by reducing its complexity and explicitly by having very standardized, simple procedures covering virtual and physical resources. That, in conjunction with the other cost savings, frees up people to work on other projects and activities. All of those also contribute to reduced costs for the business, although in many cases they are secondary effects.

We see customers being able to improve the quality of service. They're able to virtually eliminate planned downtime, especially where it's associated with the base hardware or with the operating environments themselves. They're also able to reduce unplanned downtime, because if you have an incident, you're not tied to a particular server, trying to get it back up and running. You can restart the image on another device, in another virtual machine, restore those services, and then deal with diagnosis and repair at your convenience. It's a much saner environment for IT.
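
As an illustration of that recovery pattern -- restarting the image on another machine rather than repairing the failed host first -- here is a minimal sketch, again using the libvirt Python bindings. It assumes the domain's XML definition and its disk live on shared storage reachable from both hosts; all names are hypothetical.

    import libvirt

    STANDBY_URI = 'qemu+ssh://host-b.example.com/system'
    DOMAIN_XML = '/shared/config/finance-app-vm.xml'  # definition on shared storage

    # Read the saved definition of the VM that was running on the failed host.
    with open(DOMAIN_XML) as f:
        xml = f.read()

    # Boot the same image on the standby host; diagnosis and repair of the
    # failed server can then happen later, at IT's convenience.
    standby = libvirt.open(STANDBY_URI)
    standby.createXML(xml, 0)
    standby.close()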

Gardner: Do you have any examples of companies that have moved through this sufficiently that they can look back and determine a return on investment (ROI) or total cost of ownership (TCO) benefit, and what sort of metrics are we seeing in those cases?

Bennett: Certainly, we do look at ROI-type benefits. I must confess I don't have any that are explicitly quantified, but we do have customers able to articulate some of the tangible and intangible benefits. One good example is Mitel Corporation, which went through a project of infrastructure modernization, and especially virtualization, to address its business needs: it had to both reduce costs and be more responsive to the business.

They were able to drive about $300,000 annually out of their IT budget. That's a significant amount, because they are organized by individual business units there. I love the quote from the data center manager with regard to the relationship with the business: "We can now just say, yes."

So it addresses that flexibility and agility question. When you get into larger transformational projects, where virtualization is a key element, we have Alcatel-Lucent, which expects to reduce its IT operational costs by 25 percent through virtualization and other transformational projects.

In the case of HP IT, we actually have reduced our operational costs by 50 percent, and virtualization was very much a key factor in being able to do that. It can't take full credit, of course, because it was part of a larger set of transformational projects. But it was absolutely critical to lowering costs, improving quality of service, improving business continuity, and especially helping the organization be much more flexible and agile in meeting changing needs.

Gardner: And, as you pointed out earlier, we are able to shift the ratio from ongoing maintenance and support costs into the ability for innovation, new systems, new approaches, investments, and productivity.

Bennett: Absolutely. We see a large number of customers spending less than 30 percent of their IT budget on business priorities and growth initiatives, and 70 percent or more on management and maintenance. With virtualization and these broader transformational initiatives, you can really flip that ratio around. HP has gone to, I think, 80-20, and I know that's an area Alcatel-Lucent has also focused on changing substantively.

Gardner: When you say 80-20, you mean 80 percent for new initiatives?

Bennett: Yes, 80 percent for new initiatives in business priorities, and 20 percent on management and maintenance.

Gardner: That is significant.

Bennett: Yeah.

Gardner: Well, obviously HP has been rethinking virtualization. Of course, it has been rethinking infrastructure as well for some time, given its NGDC activities and some of the things that it has done internally in terms of reducing the number of data centers, and reducing the number of applications. More than that, HP has new go-to-market initiatives in its Sept. 2 announcements.

Can you run through some important aspects of these announcements, and tell us which ones will help people understand the rethinking of virtualization, the strategic approach to virtualization, and also the business outcomes that they should be enjoying from virtualization?

Bennett: Certainly. The first thing I would like to highlight is that all of the products and services we are announcing reflect the fact that we are not just encouraging customers to rethink virtualization; we at HP have as well. In particular, we realize it's critically important that the products -- hardware, software, and services -- we provide embrace the virtual and the physical worlds together.

For customers to be able to implement this successfully, they need the expertise, and they need the products that will actually let them do it. That's a lot of what this set of September announcements is about.

It starts with a new HP ProLiant BL495c virtualization blade, which is designed and optimized for the virtualization environment. What we have seen limiting a server's ability to increase the number of virtual machines it can support, or to support the growth of those virtual machines, is not so much CPU power as memory, network bandwidth, and connectivity.

So, we have doubled the memory capacity of the environment and increased the number of network connections possible on a single blade, and that will provide much more headroom for these kinds of customer environments at the infrastructure level.

At the business-service-management level, we are introducing a number of enhancements to the HP software portfolio for business service management and automation. The tools we provide in what is today the industry's leading portfolio for business service management and automation work with physical resources -- for insight, control, management, and governance purposes -- and also with virtual resources, supporting applications and services delivered through virtual machines from VMware, Citrix, or Microsoft.

This is the first wave of announcements from HP Software, basically building integrated and comprehensive support for the virtual environment, as well as the physical environment. That's complemented by new services capabilities. We recognize that not everyone wants custom service projects or bespoke expertise to help them with virtualization.

We have some new services that are much more tactically and specifically oriented. They very clearly articulate what the outcomes of the project are, what the time frames are, and also what the costs of the project are.

We're augmenting our capabilities. I think we are the leading platform provider for all of the key virtualization vendors out there. We are also the leading training vendor for virtualization, and we are announcing new offerings in both of those portfolios -- for support and for education services around these virtual environments and capabilities.

Integrated support is really key. When customers experience difficulties in their data centers with a business service or application running in the environment, they don't want finger-pointing across vendors. Since we are able to support the virtual machine software, as well as the operating environments, including Microsoft's, and of course the HP servers underneath them, we can provide an integrated approach to diagnosing issues and getting them fixed on the customer's behalf.

On the desktop side, we have had a portfolio of virtual desktop infrastructure (VDI) services in place already for VMware. We are announcing a new set of capabilities there for Citrix XenDesktop, both for products and for services for client virtualization. Just as important, the work we are doing in those offerings also lays the foundation for supporting Microsoft's Hyper-V when that becomes available in the marketplace as well.

In addition to those capabilities, we have a new storage offering. If you look at the architecture of the data center, you clearly need to move away from direct-attached storage, and move to network shared storage.

We have a new product that integrates our Enterprise Virtual Array, which is a leading self-optimizing storage solution for virtualized environments, with PolyServe NAS, which augments the virtualized environment with a clustered file system. That makes it easy for customers to move to a network-attached or shared-storage model, as they make virtualization a more foundational technology in the data center.

Then, in addition to the investments in the data center environment itself, we are announcing a new family of thin clients and some new blade workstations, which underscores the point that, when it comes to client virtualization, it's really key to have a portfolio of desktop choices, so that customers can get the right solution on the right desktop.

In many cases, it might be thin clients, but in other cases it might be blade PCs or workstations, depending on the end user's needs. We support all of those, and we support them in different kinds of environments.

We also recognize that even if thin clients meet most of the functional needs, people sometimes still want strong multimedia or 3D performance on some of these desktops. We're announcing a new remote-graphics software offering, which will allow customers to provide a rich multimedia or 3D experience, even to a client environment not equipped with that hardware.

This is the first wave, if you will, of the announcements we are making. It builds on announcements we made last March, especially Insight Dynamics VSE for infrastructure management, as well as the blade announcements with Integrated Lights-Out (iLO) capabilities from a year or so ago. So, we're continuing to build out this portfolio to make this real for customers and to provide the foundation for them to really exploit virtualization for business benefit.

Gardner: You mentioned VDI, and for those folks who might not be too familiar with desktop virtualization, what we are talking about is bringing back onto the servers the whole presentation of the entire desktop, not just an application or two, but the entire experience.

Therefore, every time a user starts up a client device, they are actually getting a fresh new instance of the operating system, which means it can be updated, patched, and serviced entirely without impacting the client device. There are a number of other benefits to desktop virtualization. Is there anything I've missed for those people who are just getting their feet wet with desktop virtualization?
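
One common way to deliver that "fresh instance on every start" behavior is to revert each desktop VM to a golden, patched snapshot before handing it to the user. Here is a minimal sketch with the libvirt Python bindings; the connection URI, VM name, and snapshot name 'golden' are assumptions for illustration, not a description of any particular vendor's product.

    import libvirt

    conn = libvirt.open('qemu:///system')
    desktop = conn.lookupByName('user-desktop-042')  # hypothetical desktop VM

    # Roll the desktop back to the patched, known-good snapshot, discarding
    # any spyware, malware, or misconfiguration from the previous session.
    golden = desktop.snapshotLookupByName('golden', 0)
    desktop.revertToSnapshot(golden, 0)

    conn.close()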

Bennett: Well, the real driver for desktop virtualization initiatives that we see are organizational concerns around management and security of the client environment. You articulated that what they get is a nice, clean desktop environment whenever they start up the PC. What you didn't say was that it's not uncommon for end users to visit sites they shouldn't have, or open mail they shouldn't have, and get their environment infected by spyware, malware, viruses or anything else.

Client virtualization solutions can really give you a strong handle on management and security, reducing the cost of both while increasing the control of both. Also, in environments where customers have a lot of knowledge workers, one goal for corporate risk protection is protecting the end-user data that lives on the desktop. In a client virtualization environment, you can do a much better job of protecting end-user data. And, by the way, if end users move around, whether to different offices or different locations, they still have access to their data no matter where they connect from.

Gardner: So, when we look at virtualization in this larger context, we're seeing that the applications can be virtualized too. It's bringing everything back into a server and data center infrastructure, and that has a lot of benefits in terms of control, manageability, utilization, and continuity.

It's almost going back to the future. Are we, in a sense, enjoying the best of the era 30 or 40 years ago around mainframes and control, with the best of the latest iterations of IT around flexibility, applications and services, Internet, and browser activities? Am I overstating it by saying we get the best of the older and the newer aspects of IT, now that we are doing this contextually?

Bennett: I'm glad you stated this as combining the best of both worlds. The world we had 30 or 40 years ago with mainframes, and indeed with minicomputers, was one of centralized control, but those environments were not necessarily responsive to the changing needs of the business, or to individual department or business-unit needs.

Responsiveness clearly is seen as a great attribute of the modern data center and modern IT environment, but the older model was one in which IT really could control, manage, and secure everything it was responsible for. So, yes, we're combining the two: having the agility and flexibility that people want, having the control and discipline that people want, and also providing access to the innovation taking place in the outside world.

We bring them the best of all these worlds, but if a customer is going to realize this, they are not going to get those benefits just by doing server virtualization projects.

Gardner: So, we've seen how virtualization has an economic benefit. It can bring control, security, and manageability back into a managed, professional approach for the IT people. At the same time, it's providing some of these business outcomes -- agility, flexibility, and responsiveness -- that are so important now in a global economy and a fast-moving marketplace. Of course, as you mentioned, this is a wave of announcements on Sept. 2, but there is much more to come in the not-too-distant future.

Bennett: Oh, yes. You will see us continuing to do enhancements and innovation in the infrastructure, server storage, networking, and the input-output fabrics that link them all together. You will see us continuing to innovate and drive more capability and value to the people, whether it's in the support or education side, or in the project and strategy side. You will see us continuing to invest in enhancements in the software portfolio to really provide a comprehensive view of everything going on in the data center.

We continue to be a leading innovator on the client side, both in the devices that sit on the desktop -- whether standalone or client virtualization -- and in the software and tools that make client virtualization work. This is really just the first wave of what is a pretty serious investment area for HP.

Gardner: And it clearly has the opportunity to accommodate a lot of the needs of IT, while still giving them the opportunity to do that all-important and almost impossible task, which is to do more for less.

Bennett: We all have that task, and the challenge is how you crack that nut. When we talked about data center transformation last spring, we introduced the concept of "Spend to Save, to Spend to Grow." The key is finding a way to bootstrap yourself into this, and that's why we look at these things not from a forklift perspective -- because, frankly, nobody is going to do that -- but rather ask what kinds of projects you can undertake, and then link the projects together over time for transformational purposes, realizing benefits from each. So, it becomes self-funding after a while.

An example that I like to use for that is that consolidation is a best practice in data centers today. It's a way of life, but if you really want to significantly change the outcome of some consolidation, which can be substantial, it's worthwhile investing in a virtualization initiative, because when you do that you can consolidate to even less infrastructure.

But before you invest in virtualization, or after you have done it, you might look at investing in an application modernization project, because the more applications can be virtualized, the more you can consolidate.

So you get savings from the individual projects, but you're multiplying the results together over time, and that's when it gets really interesting for a customer.

Gardner: So, we have these overlaps, these interdependencies that make it complex, and it needs to be thought through in a total contextual framework, but, as you say, the end result is true transformation.

Bennett: Right, and the easiest way to think it through is from a business perspective. If you look at it from the bottom up, there are so many interconnections and possible paths forward that it's easy to get lost in the weeds. If you start from the top down, you'd say, "What are the business services I am providing? What are the applications I am running for the business? What are the characteristics we need to have in place for these, from a business perspective?"

Now, what does that mean in terms of what I do in IT and in the data center? Does it make sense to virtualize a given application or not? If not, set it aside and manage it on its own. If it does, what am I going to do to effectively implement virtualization and manage it from a business perspective? There is a much, much smaller pool of applications and business services being provided than there are servers and storage devices.

Gardner: Well, great. I think we will have to leave it there. We have been talking about rethinking virtualization and putting it in the context of business outcomes, as well as IT transformation. We have been discussing this in the context of a number of new initiatives and announcements that HP has made, and we have been joined by John Bennett. He is the worldwide director of data center transformation solutions and the HP TSG lead for virtualization. Thank you so much, John; it was very interesting and edifying.

Bennett: Thank you very much, Dana, for this opportunity. We think there is so much promise in virtualization, and we think by rethinking it in business terms, one can maximize the potential for your own organization.

Gardner: Great. This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a BriefingsDirect podcast. Thanks and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Hewlett-Packard.

Transcript of BriefingsDirect podcast with Hewlett-Packard's John Bennett on virtualization and its role in the enterprise. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.

HP Experts Portray IT Transformation Vision, Explain New Wave of Virtualization Products and Services

Transcript of BriefingsDirect podcast with Hewlett-Packard on series of Sept. 2 announcements on enterprise virtualization products and services.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, a sponsored podcast discussion about the growing and important topic of virtualization -- at multiple levels in IT organizations, that is to say, for applications, infrastructure/servers, as well as for clients and desktops.

We're going to talk about services and products in the marketplace, along with the demand and the economic and business payoffs that virtualization is already bringing to many companies. We expect virtualization technologies and techniques to bring even more productivity gains in the near future.

We're going to be discussing Hewlett-Packard’s (HP's) approach to virtualization and a series of announcements that came out on Sept. 2. [See slide show on announcements. See an accompanying interview with John Bennett, virtualization lead at HP Services. See an IDC white paper on business benefits of virtualization.]

We're here with Greg Banfield, consulting manager for the HP Consulting and Integration (C&I) Group infrastructure practice. Welcome to the show, Greg.

Greg Banfield: Thank you very much.

Gardner: Dionne Morgan also joins us. She is the worldwide marketing manager for HP’s Technology Services Group (TSG). Welcome, Dionne.

Dionne Morgan: Thank you.

Gardner: And we have Tom Norton, worldwide practice lead for Microsoft Services at HP. Hello, Tom.

Tom Norton: Hello.

Gardner: Virtualization, of course, has been with us for quite some time. The technologies of virtual machines and hypervisors have been around for a while, but this is really starting to gain ground for a variety of reasons. In many organizations, there are economic reasons, technology reasons, and business outcomes reasons.

People are finding that getting higher utilization is only part of the story. We're also finding that virtualization is taking place in the context of larger IT undertakings, be it data center consolidation, application modernization, service-oriented architecture (SOA), business continuity, or energy savings, just to name a few.

I want to start out by talking with Greg about "why now?" Why are the market and HP focused on virtualization as such a significant development in the market at this point in time?

Banfield: It comes down to a few things. It comes down to our customers asking what HP has done within our own data centers, and how we have done it, because we have gone through the transformation ourselves as a company and have gained a lot of experience from that. It also comes down to the economics around cost -- the cost of labor and the cost of machines. The price of machines is going down; the cost of power is going up.

Customers are looking to get a better handle on those servers, to utilize them fully, to make sure the applications they serve to their company and their users are well supported, and to take advantage of the new servers and technologies coming out today.

Gardner: Now, HP is in a unique position, in that it has hardware, services, clients, software infrastructure, software management, and partnerships across multiple providers of virtualization technology. This seems an almost ready-made business, given how IT is developing in the marketplace. Tell us how HP views this opportunity as a company.

Norton: What's interesting about virtualization is that, as companies start to work with it, the easy assumption is that you are simply reducing the number of servers. But, as you expand your knowledge and experience with virtualization, you start looking at all the components in your environment or infrastructure.

You start understanding what storage has to do with virtualization. You look at the impact of networks, when you start doing consolidation in virtualization. You start understanding a little bit more about security, for example.

Also, virtualization in and of itself allows you to consolidate the sheer number of physical servers, but each of those virtual servers still needs to be managed. So, you get a better view of the overall impact of device management, as well as virtual machine management.

HP is unique in its ability to understand this from a client perspective, from a server perspective, and, as I mentioned, from storage, software, and network perspectives. It's actually a tremendous opportunity for HP to work with our customers to give them an overall strategy for how all of those components work together to deliver the value they are looking for from virtualization -- to look at cost, and, as you mentioned earlier, at flexibility, security, disaster recovery, and rapid presentation of applications. We are in a unique position in the industry to help our customers address all of these issues, which have an impact on virtualization.

Gardner: Virtualization, of course, has been targeted largely at individual server farms or data centers, but, as we are describing it, it really does impact quite a bit across the board for IT. I'm also wondering what the impact is on the business. Let’s go to Dionne. What are the business outcomes, values, or productivity benefits that virtualization supports and underscores and that help the IT people make the case for this investment?

Morgan: One of the key areas is cost reduction. Virtualization can help with major cost savings, and that can include savings in terms of the amount of hardware they purchase, the amount of floor space that’s utilized, the cost of power and cooling. So, it improves the energy efficiency of the environment, as well as just the cost of managing the overall environment.

A lot of customers look to virtualization to help reduce cost and optimize how they manage the environment. When you optimize the management of the environment, that can also help you accelerate business growth. In addition to cost reduction, customers are beginning to see value in having more flexibility and agility to address business demand.

You have this increased agility, or ability to accelerate growth. It also helps to mitigate risk, improving the uptime of the environment, and it helps address disaster recovery and business continuity. In general, you can summarize the business outcomes in three areas: cost reduction, risk mitigation, and accelerated business growth.

Gardner: Virtualization also adds complexity. When you've got multiple instances running on a single piece of hardware, or the hardware itself is virtualized, there is a management hurdle. Bringing this into play across both the physical and the virtual infrastructure is another management hurdle. I wonder if any of our panel could help me understand a little bit more about doing this the right way from a management perspective.

Norton: What's interesting about this is that when you get into a virtualized environment, there's a need to understand the heartbeat of the virtualized environment and what's going on at the hardware level. As you move up from there to the virtual machines, you have to understand how the virtual machines themselves are performing, and then how the actual applications are performing within each virtual machine.

So, a comprehensive approach to management -- including virtual machine management -- is critical to being successful. One of the other areas addressed through management, and talked about a lot, is virtual machine sprawl.

Organizations have gone into virtualization with the hope of reducing the number of servers they manage in their environment. They end up with fewer physical devices, but they actually end up with more servers. Creating virtual machines is less difficult, which is a good thing, because you have more flexibility. But it can also become a burden, because you can quickly lose control of the sheer number of servers and of the work that goes into managing them: patches, upgrades, and the security issues that go along with them.

So, virtual machine management is actually the key contributor to all this. You really have to think in terms of both the actual management of the machine -- the physical device, understanding the utilization of a processor and the health of the computer itself -- and understanding the health of the virtual machines that sit on top of it.
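
To show what watching both layers can look like, here is a minimal monitoring sketch using the libvirt Python bindings: it reads the physical host's capacity and then the state and resource figures for each virtual machine on it. The connection URI is an assumption, and a production tool would obviously do far more.

    import libvirt

    conn = libvirt.open('qemu:///system')

    # Physical layer: CPU and memory capacity of the host itself.
    model, mem_mb, cpus, mhz, nodes, sockets, cores, threads = conn.getInfo()
    print('Host: %d CPUs @ %d MHz, %d MB RAM' % (cpus, mhz, mem_mb))

    # Virtual layer: state and resource usage of each VM on the host.
    for dom in conn.listAllDomains():
        state, max_kib, mem_kib, vcpus, cpu_ns = dom.info()
        print('%s: state=%d vcpus=%d mem=%d MiB' %
              (dom.name(), state, vcpus, mem_kib // 1024))

    conn.close()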

HP has a unique ability, because we've been working with virtualization since the 1990s. We've been working with virtualizing and understanding the physical nature of the devices for years, and our engineering groups now have invested a lot of time in working with our partners -- VMware, Microsoft, and Citrix -- to understand their virtual machine management and how our tools and their tools can work together and become integrated to provide that comprehensive view that is required now to really properly manage virtual machines.

Gardner: And, we're talking about a heterogeneous environment from the start with this. According to analyst reports, some 80 percent of enterprises are using virtualization on multiple platforms, with half using three or more platforms. So, this really becomes a critical management issue from that perspective.

Let's go to Dionne and talk about what HP is calling a rethinking of infrastructure. We've talked about the paybacks as an economic incentive and an agility incentive. Organizations can use virtualization to support and augment some of their ongoing work toward consolidation, unification, and modernization -- part of a long-term trend of IT transformation -- but you are suggesting that this is also a milestone, a point for rethinking infrastructure. I wonder if you could help us understand what you mean by that.

Morgan: Organizations need to think not only about their servers, their storage, and their network from the virtualization perspective, but to look at this from an integrated perspective and have an integrated management view of the data center. It's not just about the technology. They also have to think about this in terms of the people, the processes, and the technology.

Tom was describing how we can help manage the physical and the virtual. In addition to that, we also need to look at how we manage the ongoing processes, which are going to be responsible for "operationalizing" the virtual environment. This could include the adoption of key industry best practices and standards.

Some best practices that come to mind are those from the Information Technology Infrastructure Library (ITIL): how you actually use these ITIL processes, and how you take it a step further and automate some of them. ITIL is an industry best practice for managing the services you deliver to the business.

It’s very important to look at the technology, both the physical and the virtual, the processes required to manage, the automation of processes to manage the virtual environment, and also the people within your organization, ensuring that they have the right skill sets and the right information to utilize and take advantage of this virtualization investment.

Gardner: Let's take that point about personnel to Greg. Tell us, what are the skill sets? It sounds like this is a bit different. Is there training, and the ability to bring your staff or IT operations staff up to spec on this? Is there too much demand in the field for people with experience? What's the outlook for the human resources aspect of virtualization?

Banfield: That's a great question. One thing we have heard from the consulting side is that people understand, customers understand, and CIOs understand the cost savings and those types of things.

What they are asking us, when we go and do these things, is, "I understand we are going to save money. I understand my server count is going to go down. What I am struggling with is the people and the processes. I have many processes to handle within my infrastructure, and people I need to get redeployed or re-energized into things that will actually generate growth for our company, as opposed to just shepherding servers as administrators."

From that perspective, again, ITIL, as we just mentioned, is a great tool we can use for the processes. From HP's perspective, our consultants have done this many times, in-house and with other customers. We bring to the table the know-how from having done this before, from transformation projects, so we can help the customer move from where they are today to where they need to be from a virtualization perspective.

It's not so much the infrastructure, although we can do that; the bigger piece is how we get from where we are today, with our processes and people, to where we need to be from an infrastructure standpoint in a virtualized world.

So, yes, our folks are trained. We have many people certified in ITIL and virtualization, and holding our partner certifications with VMware and Microsoft. It's a great opportunity for our customers to work with HP. We have a wealth of knowledge, both from a training perspective and from the practical know-how of having done it before.

Gardner: I think we have a sense of the vision here, the promise, and also some of the challenges. So, on September 2, HP came out with a number of announcements and methodologies. We are looking at virtualization from a strategy perspective, a design perspective, a transition-and-integration basis, and then ongoing improvement and return on investment (ROI). Let's look at the first two, strategy and design. What are we talking about, in terms of the September 2 announcements on virtualization, with regard to strategy and design?

Norton: Strategy is becoming even more important. Our customers are very aware, as everyone else is now, that they have many options available to them as far as virtualization, not only from a perspective of what to virtualize in their environment, but also from a number of partners and technology suppliers who have different views or different technologies to support virtualization.

Our customers, from a strategy and design perspective, have looked to us to provide some guidance that says, "How can I get an idea of the net effect that virtualization can have in my environment? How can I gain that experience, but at the same time understand my long-term view of where I want to go with virtualization, because there is so much available and there are so many different options? How do I make a logical and sensible first attempt at virtualization, where I can derive some business value quickly, but also match that up against a strategy and a long-term vision?"

What we are trying to supply with these new services around virtualization is a strategy together with a short-term proof of concept -- a short-term, rapid, or accelerator implementation of virtualization -- whether on the desktop side or on the server side with Microsoft's new Hyper-V. The idea is to give customers that experience and have it contribute to a long-term vision for the infrastructure design.

What we are trying to do is take the complexity out of an introduction to virtualization. We're trying to take the complexity out of the long-term vision and planning, give customers an idea of what their journey looks like, and introduce it rapidly, but in the right direction, so they are following their overall vision and gaining their overall business value.

Gardner: It sounds really important to bring all of the numerous aspects of IT that are affected by this onto the same page, under a road map with the same vision, and then get into a lifecycle perspective. Now, once we've got our vision, we have our perspective, and we have got all the people on board, it’s down to brass tacks, and then transition and integration. Greg, what’s in store for the HP community, vis-à-vis, this level of the deployment?

Banfield: Then we would have our HP Services consultants in integration come in and work with the customer. We've gone through the design phase and the strategy phase, and now we work with the customer to take what we've got on paper and get it going. Typically, we do something in a phased approach, because we're talking about some very large projects. As we've discussed for the last 20 or 25 minutes here, it's a complex environment that we're dealing with: multiple vendors, multiple business groups, and multiple applications, each impacting something different.

We have the design, so we actually get going. We have solution architects and project managers using best practices, working hand-in-hand with the customer to make sure that, as we go through this and changes arise, we stay on track.

Of course, as you go through these projects, you have to keep going back, as Tom was mentioning, to your original strategy and your original design, and keep hitting checkpoints. Are we still meeting the criteria for the business? Is what we learned during the first two phases still valid through the implementation and the transition and integration?

We keep reassessing, as in any large project we would do. You validate against your milestones and checkpoints and then make adjustments as needed.

Gardner: And then, Dionne, as you mentioned earlier, the business outcomes are important, and the improvements in ROI come into play. It's not enough just to deploy and then sit back and wait for the benefits of virtualization. This is an ongoing process -- very dynamic and changeable. One needs to tweak and manage resources to improve productivity and get that economic return. Can you tell us a little bit more about what HP has in mind for this long-term economic value?

Morgan: Once you actually transition your solution into production, you have to look at the ongoing operations and the continual improvement of those services that you are providing back to the business. In terms of the ongoing operations, you have to continue to assess your people's skills and your operational processes.

HP provides services to assist with this ongoing operation and to help increase the stability of the virtualized environment. That includes everything from education courses to software, technical support services, and hardware support services. We also have proactive services, which are really focused on the continual-improvement phase of the lifecycle.

On a regular basis, we assess what's happening in the organization from a people, process, and technology perspective. We benchmark against what's happening in the industry, making recommendations on where a customer can improve some of those processes, to increase efficiency and to improve the service levels they provide to the business. We also assist with the implementation of some of those process improvements.

If you look at this from a full lifecycle perspective, HP provides services to assist with everything from strategy, to design, to transition, to the ongoing operations and continual improvement.

Gardner: It was mentioned earlier that HP has gone through a good deal of this virtualization transformation itself. It has also worked with some leading-edge customers to deploy and refine. Do we have any metrics, any view into what this means in terms of payback? Is it iterative, minor, 10 percent? What kind of payback are we starting to see from a well-planned, well-organized, well-implemented virtualization strategy?

Norton: I don't know whether every company will be the same in terms of what they want to achieve. We've had examples of customers. Greg's group worked with a financial organization through an accelerator service -- in other words, going through the whole strategy and discovery phase and measuring their environment to look at capacity. They have seen reductions from 300 servers in their environment to 30, at least in the sample of servers that was evaluated.

That's just one customer's example, and every case will be different, but the idea is the same. You can look at the number of physical devices and go through an analysis of which applications can be virtualized and what the utilization of the equipment is, and achieve a straightforward reduction in the number of devices.
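
A back-of-the-envelope version of that analysis can be sketched as a simple first-fit packing of measured utilization onto as few hosts as possible. The utilization figures and the 70 percent headroom target below are made-up assumptions for illustration, not HP's assessment method.

    # Rough consolidation estimate: pack measured per-server CPU utilization
    # (as a fraction of one host's capacity) onto as few hosts as possible,
    # leaving headroom for spikes.
    def estimate_hosts(utilizations, headroom=0.7):
        hosts = []  # each entry is the summed load on one physical host
        for load in sorted(utilizations, reverse=True):  # first-fit decreasing
            for i, used in enumerate(hosts):
                if used + load <= headroom:
                    hosts[i] += load
                    break
            else:
                hosts.append(load)  # no host had room; add a new one
        return len(hosts)

    # E.g., 300 lightly loaded servers averaging ~7 percent utilization:
    print(estimate_hosts([0.07] * 300))  # -> 30 hosts at a 70 percent target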

HP will also, as we did in our own organization, look at the actual application being virtualized. Maybe it's not just a case of reducing the number of physical devices while having the same number of servers running. True savings come in when you decide to reduce the number of instances of an application that may be running on servers. You can add this sense of application virtualization.

The classic example in those cases is an organization that may have 200 remote Microsoft Exchange Servers in its environment. They can look at bringing those distributed remote servers into a data center environment and find cost savings in administration and data protection. But there's still a huge expense if those Exchange Servers are simply sitting on virtual machines. You still have 40 Exchange Servers, and you are still managing each one of them.

Another saving comes in when you decide, "I am actually going to reduce the sheer number of those instances. I may reduce my Exchange Servers from X number of devices to a quarter of that." Then you have reduced the number of Exchange instances running within that consolidated environment as well, and that dramatically affects the cost savings.

Cost savings vary, but they can be dramatic. They can be as dramatic as the capital expenditure (CAPEX) on the hardware base, and they can be very dramatic from an application-management or server-management perspective.

Organizations now are looking, as HP did, at both areas: reducing the sheer number of physical devices in the data center, and reducing the number of instances of an application actually running on servers, to provide even greater benefit.

Gardner: I suppose, generally, what we are able to do now with virtualization is match supply and demand with much more precision than we could in the past. In the past, we had to throw huge amounts of resources at a problem with brute force -- a blunt-instrument approach -- to make sure we could accommodate all sorts of demands, spikes, and requirements.

Now, we are able to use virtualization to refine these supply-and-demand equations, so that we can pool resources at the infrastructure level, pool resources at the application level, and reduce a lot of waste and unnecessary or underutilized resources.

Banfield: Another thing Tom was hitting on, besides the physical savings in the environment -- power, air conditioning, and things like that -- is agility: agility to market. As Tom was saying, you can now move applications and other things around. Your workforce becomes much more agile and can address critical business needs in a very timely manner with virtualization. I think that's key for our customers.

Gardner: So, if we want to move a whole new set of applications to our Asia-Pacific operations and target a whole new set of customers there, the ramp-up to doing that takes much less time and is much more manageable, rather than a forklift upgrade. Is that correct?

Banfield: Absolutely.

Gardner: As an analyst, I get some questions frequently, and one of them I have to throw out to you guys, because it's sort of an obvious one. Why would a company that makes a significant amount of money from hardware want to reduce the number of hardware instances? How does that help you, or what are the long-term implications that I am missing?

Norton: What happens is a change in platform. Say you move from individual instances of a device that sits in a branch office someplace -- aging, isolated, and in essence disconnected, because it's separate from all the processes you have in the data center.

From a hardware perspective, it's a great opportunity for HP, not only because we are replacing some of these legacy platforms sitting out in remote offices, but because we are enabling our customers to run on a newer, much more effective and powerful platform, with direct connectivity to more powerful storage systems and networks in the data center.

It's a plus for both. Our customers gain an advantage, because there are savings overall in how much money they spend on that old equipment, how much maintenance cost they carry, and how much systems management they need to do for a device that sits out there -- or even sits in their data center -- and has to be supported in a much less efficient way.

We can save them money by moving them to more powerful, more efficient platforms. At the same time, it allows us to introduce our customers to these new devices, which provide a wealth of benefits in performance, security, stability, and high availability. It's a win for both organizations.

Gardner: Okay, let's look at the actual announcements of Sept. 2. I'm going to break out one first, and that's the desktop virtualization announcement -- virtual desktop infrastructure (VDI) solutions and services using Citrix XenDesktop.

Again, we're looking at a pretty radical shift in the types of end-user devices. We could start using thin clients, and there is a security and risk-reduction opportunity in bringing the data and application-configuration information onto the server. The end users basically get a seamless environment. They're getting the same desktop and operating system they're accustomed to.

There are tremendous opportunities to save costs here. Before we drill into each of these announcements, let's just break out the desktop -- the virtual desktop and infrastructure set. Tom, let's go to you on that first. What's the big deal here? What are we talking about when we reduce the amount of actual client-side activity through virtualization?

Norton: When our customers sit down and do a study, we help them look at the cost of managing client or end-user devices in the field, not only from a help-desk standpoint but from a productivity standpoint. From an application-presentation viewpoint, the applications end users work with, and how they are presented, are the heartbeat of the business.

The data they use is sensitive, and so important to the organization as a whole. When they need help keeping their productivity up, it can cost the organization money or save it money. So, you look at moving somebody off a very insecure, volatile device in a remote environment that they use on a daily basis.

Gardner: So a local laptop for example.

Norton: A laptop, right. You can still create the rich experience they are used to, but give yourself the security of knowing that the data they are using is protected from theft, and also covered, in terms of archiving and search availability, for governmental regulations. You can give users some of that rich experience, but still have that protection. You can look at that device and understand the cost and complexity of either upgrading the device, presenting an application, or deploying an application to it.

It's extraordinarily expensive to do that, and if they can instead get a more rapid presentation of the applications they need to do their job on a daily basis, both of those are incredibly valuable to the organization.

If you can get those two advantages, you are going to reduce help-desk calls from your end users in the case of a disaster. If a notebook fails, for example, how do you get that person back up and working again, with access to the data and applications they need?

You can accelerate that recovery. Today, you are devoting enormous amounts of management, and spending enormous amounts of money, on every device every year. With this model, you can accelerate recovery and provide the same rich experience that these new technologies allow us to deliver.

If you look at a virtual device now, you can say to the end user, "You will get the operating system that you need. You will get the application that you need. And, it will be in the environment that you expect to work in. You have the same user state you have had."

If you can combine all three of those in a virtualized environment, you are, in the end, providing more productivity for the end user and, at the same time, cutting the management cost. You're also enabling yourself to cut other support costs in the organization, like how much money you spend to protect data, to restore data, or to guard it from theft.

So there are enormous advantages to both, but it doesn’t work in every instance. If you have remote users who don’t have daily or hourly connectivity back to a host, it may not be to your advantage to use this technology there. But, for most organizations there is certainly a large part of their population that can take advantage of the technology.

Gardner: We've already seen a lot of this in use in some government organizations, particularly in intelligence and military communities, where they can’t take the risk of having an end device being lost or falling into the wrong hands. So, the stateless approach to computing is quite popular and proven there. Isn’t that right?

Norton: Right. You have a public sector, which is very sensitive, but you can imagine the same in terms of healthcare and financial organizations. You can extend that idea.

It may not just be sensitive data. It may be repetitive tasks or frequent upgrades of applications. You have large segments of users with redundant equipment who have no need for a rich experience, but who may need an application refreshed on a predictable basis. This allows you to do that gracefully.

Gardner: Once again, this strikes me as aligning supply and demand -- what the end user actually needs in terms of resources, versus having the equivalent 20 years ago with a supercomputer on every desktop.

Norton: That’s correct.

Gardner: Let’s go to these announcements one by one very quickly, so we can give our audience a sense of the breadth and depth of this wave of addressing the virtualization issue. The first is HP Virtualization Accelerator Services. Greg, can you tell us quickly what this means?

Banfield: As we talked about, virtualization is a lifecycle, a journey. HP's Accelerator Services are predefined services from the consulting organization that customers can plug into wherever they are in that lifecycle, whether that's strategy or design. Maybe they're just starting, are halfway through a project, or are nearing the end of a virtualization effort.

As we talked about, maybe the business outcomes didn't exactly match the original design, and they need some help in that area. Consulting and integration comes with these Accelerator Services to help the customer through those difficult times, or at any point in the lifecycle, to make sure they're gaining the full value from their virtualization journey.

We can talk about each individual package, if you like, or a service, as we move forward.

Gardner: We'll come back to the services once we get through the major elements.

There are also the VDI services that we just discussed. I'll touch on that with Tom one more time. It seems to me that with desktop virtualization we're getting the best of the old and the new. The old paradigm was centrally organized and managed, even back in the mini and mainframe days.

There were a lot of benefits to the organization in doing it that way, but the end user didn't get the flexibility, the innovation, and the freedom that came later. Now, we're able to blend the two to get that centralized benefit for operations -- upgrades, maintenance, management, and aligning the supply of resources with end-user demand much more efficiently.

At the same time, we're giving users a Microsoft Windows desktop, where they can pick and choose, move, and get a lot of resources still using a browser. Am I off base here or are we really looking at the best of both worlds?

Norton: Absolutely. It does a number of things. Everybody uses PCs at home. The generation we are working with now has grown up with that equipment, so they are very accustomed to having a personalized work environment. They are used to having some flexibility to obtain applications and run them on their own devices.

They are accustomed to performance, and to access to data -- not having to wait for access, not having to wait for what historically has been a very slow change-management process on mainframe-based systems to add or change an application.

They are used to that agility, to that high frequency of change. Until now, many people have resisted making the move, because they didn't want that rich experience to be impacted. Now, you get the great benefit of the rich experience but, at the same time, the ability to take advantage of what consolidation means -- predictability, disaster recovery, security -- the types of benefits you never could get before in a more unpredictable world.

Through VDI, you really get the best of both, from a consolidation perspective as well as a distributed-computing perspective.

We feel we are satisfying both ends. When we look at VDI, it’s kind of interesting. It touches both the back end systems and it touches the end user client. Sometimes with virtualization, people just think in terms of that back office, the server room, the datacenter transformation idea.

With VDI, you bridge that gap, so you can do things on the desktop side, as you mentioned earlier, such as taking advantage of thin clients. HP is producing some great thin-client technology. You can also extend the life of current hardware, if you wish.

If you're mid-term in the lifecycle of a notebook or desktop -- not ready to retire it yet, but not wanting to spend a considerable amount of money to upgrade it -- you can extend the life of that device and make it more useful by combining it with this type of technology.
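
As an illustrative aside, repurposing an aging PC this way can be as simple as replacing its local desktop with a full-screen remoting session. This minimal Python sketch uses the open-source rdesktop client, a common RDP client on Linux at the time; the broker hostname is assumed for the example.

```python
import subprocess
import time

# Kiosk-style loop for a repurposed PC: the machine does nothing but host
# a full-screen RDP session to a datacenter-hosted desktop. If the session
# ends or the connection drops, it waits briefly and reconnects.
VDI_HOST = "vdi-broker.example.com"   # assumed hostname

while True:
    subprocess.call(["rdesktop", "-f", VDI_HOST])  # -f = full-screen session
    time.sleep(5)  # avoid hammering the broker if the connection fails
```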

At the same time, if you have a high performing device, this gives you the flexibility to just virtualize one application out to that device. So, it gives enormous flexibility on the front side.

On the back side, in the datacenter, it allows you to do a lot of things. You can take advantage of all the benefits of blade technology, and of the ideas we've discussed before about virtualizing storage and having better access to available storage.

You may run out of storage on a notebook, but here you can request and expand your storage on a storage area network (SAN) and go forward from there. So it's unique in that it addresses both the client side -- the end-user-facing side -- and the efficiency, predictability, and performance you want in a datacenter.
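
To sketch that storage point: because a virtual desktop's disk lives on the SAN rather than inside a notebook, growing it is a capacity request against a shared pool, not a hardware swap. The toy model below assumes a `san` object standing in for a real array's management API; `resize_volume` is illustrative, not a vendor method.

```python
def expand_desktop_disk(san, volume_id, extra_gb, pool_free_gb):
    """Toy model: grow a SAN-backed virtual disk from shared capacity."""
    if extra_gb > pool_free_gb:
        # the pool itself is the only thing that ever needs physical growth
        raise RuntimeError("SAN pool exhausted; add shelves or reclaim space")
    san.resize_volume(volume_id, extra_gb)  # illustrative placeholder call
    return pool_free_gb - extra_gb          # capacity left for other desktops

# e.g., grant one user 20 GB more without touching their endpoint device
```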

Gardner: Part and parcel of virtualization technology is the need for planning, ongoing management, professional services, methodologies, and best practices. That's why a number of virtualization support services have been announced as well.

I'll run down the list. HP Virtual Server Environment Solution Service -- I assume this is about improving energy, footprint, and resource efficiency.

HP High-Performance Computing Cluster Management Solution Service, HP Integrity/HP9000 Solution Service, HP Server Solution Project Management, HP Virtual Server Solution Planning and Design, HP Global Workload Manager Solution Services, and HP Virtual Desktop Infrastructure Solution Services.

Let's go to Greg. What are the high points here? We don't have too much time, so give us an overview of what these services involve and how comprehensive they are across the whole range of virtualization opportunities that organizations will face.

Banfield: All the services you mentioned are built in a modular, tiered fashion. Any one of them can be molded to the customer's need, whether you have 200 servers or 1,000.

The tiered approach makes it very easy for the customer to pick and choose what they need, depending on where they are in the lifecycle. The services are predefined and data-sheeted, so the customer can read exactly what they're going to receive from HP. These seven or eight services address different points within the lifecycle.

All these services come with project management. Some customers, as I said, are halfway through the lifecycle, or on their way, and maybe they just need a little help with project-management activities. HP can provide a PMI-certified person to come in and work with them to get the project back under control, if it's off track a little.

So any one of these is a great way for a customer to take a look at our solutions. Again, they are couched to be sort of a quick hit, easy to use. You don't have to just pick one service. If you have different needs, you can say, "I need to take the Virtual Server Environment Solution service, and I need Global Workload Manager to create my entire solution." Again, it's easy for the customer to understand, and then move forward with the project.

Gardner: Let’s look at how you can get started. These announcements are targeted at the U.S. initially, and you are taking it out globally during 2009, which isn’t that far away now. Tell us how an organization can get started and where they can develop this strategic overview of virtualization?

Norton: There are a couple of different ways customers can engage to get started. The first is through our traditional sales organization. From a hardware perspective, we have our traditional enterprise account managers and their associated services client principals and services managers who work on those accounts. That's a very traditional way to engage with our technology teams, who provide these kinds of services, from both a support perspective and a consulting-services perspective.

But there are other ways as well. You can work directly with our Microsoft alliance members, or you can work with our alliance teams -- in this case, around Microsoft virtualization.

You can come at it from a services perspective, or from an enterprise-account perspective, whether hardware or storage. Those channels don't change at all in terms of how you work with HP. We've also set up these services to make it very easy for our customers to get engaged.

We're presenting services that let customers get engaged with even a half-day workshop on virtualization. As Greg mentioned, we have strategy engagements that can run two to three days, and longer-term proofs of concept that can run three to four weeks. We try to make it as easy as possible, from both a services perspective and a sales perspective. We are very flexible.

These can even be introduced through our channels, where these can be sold through a channel and delivered by HP. We are trying to provide flexibility, as well as simplicity, in the services acquisition process for our customers, so that they don’t have to worry about who to talk to. When they need to talk, they can go directly to their traditional HP sales force and get introduced to these services.

Morgan: If I can just add to that, once our customers begin transitioning into production, they should also think about the people and the processes again. From a people perspective, we also announced on Sept. 2 some new education courses, which tie into what Greg and Tom were just describing.

For example, we have an education course on the HP Insight Dynamics - VSE software. We have education courses on partition management, and we also announced an HP Virtualization Boot Camp, which covers Global Workload Manager, Virtualization Manager, Capacity Advisor, and a long list of other technologies.

Customers should really think about getting their people trained in these technologies. And, from an ongoing operations perspective, we also announced some new software technical support services.

We already provide a lot of support services in the virtualization area. What we're adding is support for additional VMware products, such as VMware Workstation, VMware Lab Manager, and VMware Site Recovery Manager, as well as operational support for Citrix XenDesktop and, of course, the new HP hardware, such as the HP ProLiant BL495c virtualization blade.

Gardner: Very good. I think we've covered a lot of territory here today, from vision down to actual product and service offerings. Clearly, this is something that companies are going to be dealing with for a long time. We're already seeing forecasts for virtualization use to grow broadly in the coming year -- 50 percent growth in 2008 alone, on top of 70 percent over the previous two years.

So we appreciate everyone's input, and wish you well with this series of announcements. We've been discussing virtualization at the application, desktop, and infrastructure-server levels, as well as the roadmap and lifecycle-management issues associated with it. We've been joined by Greg Banfield, a consulting manager in Hewlett-Packard's Consulting and Integration Infrastructure group. Thank you, Greg.

Banfield: Thank you for having me.

Gardner: Dionne Morgan, worldwide marketing manager for HP’s Technology Services Group. Thank you, Dionne.

Morgan: You're welcome.

Gardner: And Tom Norton, worldwide practice lead for Microsoft Services at HP.

This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect podcast. Thanks, and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Hewlett-Packard.

Transcript of BriefingsDirect podcast with Hewlett-Packard on series of Sept. 2 announcements on enterprise virtualization products and services. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.