Wednesday, January 07, 2009

Webinar: Analysts Plumb Desktop as a Service as the Catalyst for Cloud Computing Value for Enterprises and Telcos

Transcript of a recent webinar, produced by Desktone, on the future of cloud-hosted PC desktops and their role in enterprises.

Listen to the webinar here.

Jeff Fisher: Hello and welcome, everyone. Thanks so much for attending this Desktone Webinar Series entitled, “Desktops as a Service, The Evolution of Corporate Computing.” I’m Jeff Fisher, senior director of strategic development at Desktone, and I will also be the host and moderator of the events in this series.

We are really excited to kick off this series of webinars with one focused on cloud-hosted desktops, and equally excited and privileged to have just a wonderful panel with us, starting with Rachel Chalmers from the 451 Group, Dana Gardner from Interarbor Solutions, and Robin Bloor from Hurwitz and Associates.

For those of you who don’t know, Rachel, Dana and Robin are really three of the top minds in this emerging cloud-hosted desktop space. It’s going to be great to see just what they have to say about the topic and we’ll talk to them just a little bit later on.

Before we do that, I want to spend a little bit of time talking about Desktone’s vision and definition of cloud-hosted desktops and, most importantly, about why we believe that virtual desktops, as opposed to virtual servers, are really going to kick-start adoption of cloud computing within the enterprise.

Desktone is a venture-backed software company. We’re based outside of Boston in a town called Chelmsford. We raised $17 million in a Series A round of funding in summer 2007. Highland Capital and Softbank Capital led that round. We also got an investment at that time from Citrix Systems.

We’re currently about 35 full-time employees and have 25 full-time outsourced software developers. The executive team has experience at leading desktop virtualization vendors such as Citrix, Microsoft, and Softricity, as well as experience running Fortune 500 IT organizations at Schwab and Staples.

We have a number of technology partners in the area of virtualization software, servers, storage and thin clients, and some key service provider partnerships with HP, IBM, Verizon and Softbank. What’s really important to note here is that Desktone actually goes to market through these service provider partners.

We don’t host virtual desktops ourselves, but rather the desktops as a service (DaaS) offering that we enable is provided through service provider partners. The only services that we host ourselves, or are offered directly, are trial and pilot services.

We built a platform called the Desktone Virtual-D Platform. It’s the industry’s first virtual desktop hosting platform specifically designed to enable desktops to be delivered as an outsourced subscription service.

And what’s important to understand is that this platform is designed specifically for service providers to be able to offer desktop hosting in the same way that they offer Web hosting or e-mail hosting.

We architected the Virtual-D Platform from the ground-up with that mission in mind. It’s a solution for running virtualized, yet genuine, Windows client environments, whether XP or Vista, in a service provider cloud. We’ll talk more about how we define a service provider cloud in a bit.

It leverages a core virtual desktop infrastructure (VDI) architecture, that is, server-hosted desktop virtual machines, which are accessed by users through PC remoting technologies like remote desktop protocol (RDP), for example. The Virtual-D Platform enables cloud-scale and multitenancy, which are two of the key things that a service provider needs to have to be able to be in this business.

Without getting into too much detail, it’s not really viable to take an enterprise VDI architecture or a product that’s been architected to deliver enterprise VDI and just port it over for service provider use.

It’s not viable for service providers to manage individual instances of VDI products; they really need a platform to manage this efficiently and effectively. The other key thing that the Virtual-D Platform does is separate the responsibilities of the user, the enterprise desktop administrator, and the service provider hosting operator. Each of these constituents has their own view into the system, through a Web-based interface of course, and can do what they need to do without seeing functions and capabilities that are only required by the other groups.
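As a purely illustrative sketch of what that separation of duties could look like in a multitenant hosting platform -- the role names, fields, and function are assumptions for illustration, not Desktone’s actual API or data model -- each constituent might only ever be handed the slice of the system it needs:

```python
# Purely illustrative: a hypothetical model of role-scoped views in a
# multitenant desktop-hosting platform. Not Desktone's actual API or schema.

from dataclasses import dataclass

@dataclass
class VirtualDesktop:
    desktop_id: str
    tenant: str          # which enterprise customer the desktop belongs to
    assigned_user: str
    os_image: str        # e.g. "WinXP-SP3" or "Vista-SP1"
    host_node: str       # physical host; relevant only to the hosting operator

def view_for(role, who, desktops):
    """Return only the fields each constituent needs to see."""
    if role == "user":
        # An end user sees just the desktops assigned to them.
        return [{"desktop": d.desktop_id, "os": d.os_image}
                for d in desktops if d.assigned_user == who]
    if role == "enterprise_admin":
        # An enterprise desktop admin manages images and assignments
        # for their own tenant only.
        return [{"desktop": d.desktop_id, "user": d.assigned_user, "os": d.os_image}
                for d in desktops if d.tenant == who]
    if role == "hosting_operator":
        # The service provider sees placement and capacity across all tenants,
        # but has no need for per-user detail.
        return [{"desktop": d.desktop_id, "tenant": d.tenant, "host": d.host_node}
                for d in desktops]
    raise ValueError("unknown role: " + role)
```

The point of the sketch is simply that the same inventory can serve three different constituents without any of them being shown functions or data that belong to the others.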

So, it’s a very, very different technology approach, although the net result appears in certain ways to be similar to some of the VDI platforms that you probably know pretty well.

So, that’s Desktone in a nutshell.

Promise of the cloud

Let’s get to the promise of the cloud. Clearly, everyone is talking about cloud computing. You can’t look anywhere within IT and not hear about it. It’s amazing to see it surpassing even the frenzy around virtualization. In fact, most of the conversations people are having today are around virtualization and how it can take place in the cloud. Everyone wants to focus on all the benefits, including anytime/anywhere access and subscription economics.

However, like any other major trend that unfolds in IT, there are a number of challenges with the cloud. When people talk about cloud computing with respect to the enterprise, in most cases they’re talking about virtualizing server workloads and moving those workloads into a service provider cloud.

Clearly, that shift introduces a number of challenges. Most notable is the challenge of data security. Because server workloads are very tightly-coupled with their data tier, when you move the server or the server instance, you have to move the data. Most IT folks are not really comfortable with having their data reside in a service provider or external data center.

For that reason, Desktone believes that it’s actually going to be virtual desktops, not servers, that are the better place to start and that will jump-start this whole enterprise adoption of cloud computing.

The reason is pretty simple. Most fixed corporate desktop environments -- those are desktops that have a permanent home within your enterprise -- probably already have their application and user data abstracted away from the actual desktop. The data is not stored locally. It’s stored somewhere on the network, whether it’s security credentials within Active Directory (AD), home drives that store user data, or the back end of client-server applications. All the back-end systems run within your data center.

When you shift that kind of environment to the cloud, although the desktop instance has moved, the data is still stored in the enterprise data center. Now, what you are left with are virtual desktops running in a highly secure virtual branch office of the enterprise. That’s how we like to refer to our service-provider partners’ data centers, as secure virtual branch offices of your enterprise.

In addition, if you virtualize and centralize physical PCs, which used to reside in remote branch offices that have limited or no physical security, you’ve actually increased the security of the environment and the reasons are clear. The PC can no longer walk off, because it doesn’t have a physical manifestation.

Because users are interacting with their virtual desktops through PC remoting technology such as RDP, you have control as an administrator over whether they can print to or through their access device, and whether they can get access to USB devices, such as USB key fobs that they plug into that device.

So, you can control the downstream movement of data from the virtual desktop to the edge. You can also control the upstream movement of data from the edge to the virtual desktop and stop malware and viruses from being introduced through USB keys as well.
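As a rough illustration of the kind of edge policy being described -- the group names, capability flags, and schema here are hypothetical, not an actual RDP or Desktone configuration -- an administrator-controlled remoting policy might look something like this:

```python
# Hypothetical per-group remoting policy. Illustrative names only;
# not an actual RDP or Desktone configuration schema.

REMOTING_POLICY = {
    "finance_users": {
        "allow_client_printing": False,   # block downstream data flow to the edge
        "allow_usb_redirection": False,   # block USB key fobs in both directions
        "allow_clipboard": False,
    },
    "developers": {
        "allow_client_printing": True,
        "allow_usb_redirection": False,
        "allow_clipboard": True,
    },
}

def is_allowed(group, capability):
    """Deny by default if the group or capability is not explicitly granted."""
    return REMOTING_POLICY.get(group, {}).get(capability, False)

print(is_allowed("finance_users", "allow_client_printing"))  # False
print(is_allowed("developers", "allow_clipboard"))           # True
```

A deny-by-default structure like this is one simple way to express the downstream and upstream controls described above.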

Those are some really nice benefits. We have an animation that illustrates this, showing a physical desktop PC, which accesses its data from the enterprise data center -- again, whether it’s AD, user data, or the actual apps themselves. Then the actual PC is virtualized and centralized into a service-provider cloud, at which point it’s accessed from an access device, whether that’s a thin client, a thick client (a PC that’s been repurposed to act like a thin client), a dumb terminal, or a laptop.

The key message here is that, although the instance of the desktop has moved, the data does not have to move along with it. Through private connectivity between the enterprise and the service provider, it’s possible to access the data from the same source.

Service-provider cloud

The other interesting thing is this notion of the service-provider cloud, which is that it actually can traverse both the enterprise and the service provider data centers.

So, depending on the use case, service providers can either keep the virtual infrastructure and the racks powering that virtual infrastructure in their data center or they can, in certain cases, put the physical infrastructure within the enterprise data center -- what we call the customer premises equipment model. The most important thing is that it doesn’t break the model.

There is flexibility in the location of the actual hosting infrastructure. Yet, no matter where it resides, whether it’s in a service provider data center or an enterprise data center, the service provider still owns and operates it and the enterprise still pays for it as a subscription.

Let’s touch on just a couple of other benefits and then we’ll jump into talking with our panel. The Desktone DaaS cloud vision preserves the rich Windows client experience in the cloud. This is true-blue Windows -- XP or Vista -- not another form of Windows computing, whether that’s shared service in the form of Terminal Services or browser-based solutions like Web OSes or webtops.

That’s important, because most enterprises have the Windows apps that they need to run and they don’t want to have to re-architect and re-engineer the packages to run within a multi-user environment. They certainly don’t want a browser-based environment where they can’t run those apps.

In the same vein, this sustains the existing enterprise IT operating model, while introducing cloud-like properties so that IT desktop administrators can continue to use the same tools and processes and procedures to support the virtual desktops in the cloud as they have done and as they will continue to do with their physical desktops.

We talked about the notion of separating service provider enterprise responsibilities. It’s really important to be able to draw a line and say that the service provider is responsible for the hosting infrastructure, and yet the enterprise is still ultimately responsible for the virtual desktops themselves, the OS images, the patching, the licensing, the applications, application licensing, etc.

And then, finally, this notion we mentioned of combining both on- and off-premises hosting models is important. I think most of the leading analyst firms agree that enterprises are not going to be able to go from a fully enterprise data center model to a full cloud model in one step. There’s got to be some common ground in between and, again, the fact that this model supports both is important.

Now, let’s turn to our panel and see what they have to say. We’ll get started with Rachel Chalmers who is research director of infrastructure management at the 451 Group. She’s led the infrastructure software practice for the 451 Group since its debut in April 2000.

She’s pioneered coverage on services-oriented architecture (SOA), distributed application management, utility computing, and open-source software, and today she focuses on data center automation and server, desktop, and application virtualization. Rachel, thank you so much for being with us today.

Rachel Chalmers: You’re very welcome. It’s good to be here.

Fisher: Rachel, I actually credit you with being the first analyst to really put cloud-hosted desktop virtualization on the map and the reason is because you’ve written two really expansive and excellent reports on desktop virtualization. The first one you released in the summer of 2007. The follow-up one was released this past summer of ’08.

What I really found interesting was that in the updated version you actually modified your desktop virtualization taxonomy to include cloud-hosted desktops as a first-class citizen, so to speak, alongside client-hosted desktop virtualization and server-hosted desktop virtualization. Of course that raises the question: what was so compelling about the opportunity that made you do that?

Taxonomy is key


Chalmers: Taxonomy is the key word. For those who aren’t familiar with The 451 Group, we focus very heavily on emerging and innovative technology. We do a ton of work with start-ups and when we work with public companies, it’s from the point of view of how change is going to affect their portfolio, where the gaps are, who they should buy. So we’re very much the 18th century naturalists of the analyst industry. We’re sailing around the Galapagos Islands and noting intriguing differences between finches.

I know we described cloud-hosted desktop virtualization as one of these very constructive differences between finches. When I sat down and tried to get my arms around desktop virtualization, it was just at the tail end of 2007. As you’ll recall, just as it’s practically illegal for a vendor to issue a press release now without describing their product as a cloud-enablement product, in 2007 it was illegal to issue a press release without describing a product as virtualization of some kind.

I was tracking conservatively 40 to 50 companies that were doing what they described as desktop virtualization and they were all doing more or less completely different things. So, the first job as a taxonomist is to sit down and try and figure out some of the broad differences between companies that claim to be doing identical things and claim to deliver identical functionality. One of the easiest ways to categorize the true desktop virtualization guys, as opposed to the terminal services or application streaming vendors, was to figure out exactly where the virtual machine (VM) was running.

So I split it three ways. There are three sensible places to run a desktop virtual machine. One is on the physical client, which gives you a whole bunch of benefits around the ability to encrypt and lock down a laptop and manage it remotely. One is to run it on the server, which is the tried-and-tested VMware VDI or Citrix XenDesktop method. That’s appropriate for a lot of use cases, but when you run out of server capacity or storage in the server-hosted desktop virtualization model, a lot of companies would like elastic access to off-site resources.

This is particularly appropriate, for example, for retailers who see a big balloon in staffing -- short-term and temporary staffing around the holiday season, although possibly not this year -- or for companies that are doing things offshore and want to provide developer desktops in a very flexible way, or in education, where institutions get big summer classes, for example, and want to fire up a whole bunch of desktops for their students.

This kind of elastic provisioning is exactly what we see on the server virtualization side around cloud bursting. On the desktop side, you might want to do cloud bursting. You might even want to permanently host those desktops up in the cloud with a hosting provider, and you want exactly the same things that you want from a server cloud deployment. You want a very, very clean interface between the cloud resources and the enterprise resources, and you want very, very granular chargeback and billing.
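As a rough sketch of what granular chargeback could mean in practice -- the rate, tenants, and usage records below are invented purely for illustration -- a provider might meter and bill per desktop-hour:

```python
# Illustrative per-desktop-hour chargeback. Rates and records are invented.

HOURLY_RATE = 0.08  # assumed price per desktop-hour, in dollars

usage = [
    {"tenant": "acme",   "desktop": "d-001", "hours": 160},
    {"tenant": "acme",   "desktop": "d-002", "hours": 40},   # seasonal worker
    {"tenant": "globex", "desktop": "d-101", "hours": 720},  # always-on desktop
]

def monthly_bill(tenant):
    """Sum desktop-hours for one tenant and apply the flat hourly rate."""
    return sum(u["hours"] * HOURLY_RATE for u in usage if u["tenant"] == tenant)

print(monthly_bill("acme"))  # 16.0 = (160 + 40) * 0.08
```

Metering at the desktop-hour level is what lets seasonal or project-based desktops cost only what they actually consume.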

And so, we see cloud-hosted desktop virtualization as a special case of server-hosted desktop virtualization. Really, Desktone has been the pioneer in defining what that interface should look like, where the enterprise data should reside, where AD, with its authentication and authorization functions, should reside, and what gets handled by the service provider and how that gets handled by the service provider.

Desktone isn’t the only company in cloud-hosted desktop virtualization, but it’s certainly the best-known and it’s certainly done the best job of articulating what the pieces will look like and how they’ll work together.

Fisher: Great.

Chalmers: It’s a very impressive finch.

Fisher: Always appreciate it. Dana and Robin, do you have any additional comments on what Rachel had to say?

New era in compute resources

Dana Gardner: Yes, I think we’re entering a new era in how people conceive of compute resources. To borrow on Rachel’s analogy, a lot of these finches have been around, but there hasn’t been a lot of interest in terms of an environment where they could thrive. What’s happening now is that organizations are starting to re-evaluate the notion that a one-size-fits-all PC paradigm makes sense.

We have lots of different slices of different types of productivity workers. As Rachel mentioned, some come and go on a seasonal basis, some come and go on a project basis. We’re really looking at slicing and dicing productivity in a new way, and that forces the organization to re-evaluate the whole notion of application delivery.

If we look at the cost pressures that organizations are under, recognizing that it’s maintenance and support, risk management, and patch management that end up being the lion’s share of the cost of these systems, we’re really at a compelling point where the cost and the availability of different alternatives have sparked a re-thinking.

And a lot of general controlled-management security risk avoidance issues require organizations to increasingly bring more of their resources back into a server environment.

But, if you take that step in virtualization and you look at different ways of slicing and dicing your workers, your users, if you can virtualize internally -- well then we might as well take the next step and say, “What should we virtualize externally?” “Who could do this better than we can, at a scale that brings the cost down even further?”

This is particularly relevant if they’re commodity level types of applications and services. It could be communications and messaging, it could be certain accounting or back office functions. It just makes a lot of sense to start re-evaluating. What we haven’t seen, unfortunately, is some clear methodologies about how to make these decisions and boundaries inside of organizations with any sort of common framework or approach.

It’s still a one-off, company-by-company approach -- which workers should we keep on a full-fledged PC? Who should we put on a mobile Internet device, for example? Who could go into a cloud-based application hosting type of scenario that you’ve been describing?

It’s still up in the air and I’m hoping that professional services and systems integrators over the next months and years will actually come up with some standard methodologies for going in and examining the cost-benefit analysis, what types of users and what types of functions and what types of applications it makes sense to put into these different finch environments.

Fisher: Absolutely. I couldn’t agree more and I’ve always been one who talks about use cases. It all comes down to the use cases.

The technology is great, the innovation is great, but especially in the case of desktop usage you really have to figure out what people are doing, what they need to do, and what they don’t need to be doing at work but are currently doing. That’s the whole notion of how the consumerization piece fits in and how personal life melds with business life. You can say this person doesn’t need to do that, but if they are doing it today you need to figure out how to make that work and how to take that into account.

So, I agree with you. It’d be fantastic to get to a world where there was just a better way to have better knowledge around use cases and which ones fit with which delivery models.

Chalmers: I think that’s a really crucial point. Just as server and workload virtualization have transformed the way we can move desktops and servers around, I see a lot of really fascinating work being done around user virtualization.

Jeff, you talked a lot about the issue of having user data stored separately from the dynamic run-time data. I know you’ve done a lot of work with AppSense within Merrill Lynch. There’s a group of companies -- AppSense, RTO Software, RES, and Sansa -- that are all doing really interesting work around maintaining that user data in a stateful way, but also enabling IT operators to be able to identify groups of users who may need different form factors for their desktop usage and for their work profile.

Buy side perspective

Gardner: We’ve been looking at this from the buy-side perspective, where it makes a lot of sense, but there’s also some significant momentum on the sell side: organizations that are perhaps traditional telcos, co-location or hosting organizations, cloud providers, or some ecology of providers that actually run on someone else’s cloud but have a value-added services capability of some sort.

These are on the sell side and they’re looking for opportunities to increase their value, not just to small to medium-sized businesses but to those larger enterprises. They’re going to be looking and trying to define certain classes of users, certain classes of productivity and work and workflow, and packaging things in a new and interesting way.

That’s the next shoe to fall in all of this: the type of customer that you have there at Desktone. It’s incumbent upon them now to start doing some packaging and factoring the cost savings, not just on an application-by-application basis but for a whole category of workflow and business-process work, and to do the integration across the back end.

Perhaps that will involve multiple cloud providers, multiple value-added services providers, and they then take that as a solution sell back into the enterprise, where they can come up with a compelling cost-per-user-per-month formula. It’s recurring revenue. It’s predictable. It will probably even go down over time, as we see more efficiency driven into these cloud-based provisioning and delivery systems.

So, there’s a whole new opportunity for the sellers of services to package, integrate, add value, and then to take that on a single-solution basis into a large Fortune 1000 organization, make a single sale, and perhaps have a customer for 10 or 15 years as a result.

Chalmers: It is a tremendously exciting opportunity for our managed-hosting provider clients. It’s the dominating topic of conversation at a lot of the events that we run for that group. Traditionally, a really, really great managed hoster that delivers an absolutely fantastic service will become the beloved number one vendor of choice of the IT operator.

If that managed hosting provider can deliver the same quality of service on the desktop, then they will be the beloved number one vendor of everybody up to and including the CIO and the CEO. It’s a level of exposure they’ve just never been able to aspire towards before.

Robin Bloor: I think that’s probably right. One of the things that is really important about what’s happening here with the virtualization of the desktop is the very simple fact that desktop costs have never been well under control. The interesting thing is that the end users we’ve been talking to earlier this year, when they look at their user populations, normally come to the conclusion that something like 70 or 80 percent of PC users are actually using the PC in a really simple way. The virtualization of those particular units is an awful lot easier to contemplate than virtualizing the sophisticated population of heavy workstation users and so on.

With the trend that’s actually in operation here, and especially with the cloud option where you no longer need to be concerned about whether your data center actually has the capacity to do that kind of thing, there’s an opportunity with a simple investment of time to make a real big difference in the way the desktop is managed.

Fisher: I totally agree. Thanks, Robin. All right. Let’s shift gears and talk to Dana Gardner. Dana is the president of Interarbor Solutions, and is known for identifying software and enterprise infrastructure trends and new IT business growth opportunities.

During the last 18 years he’s refined his insights as an industry analyst and news editor, and lately he’s been focused on application development and deployment strategies and cloud computing.

So Dana, you’ve been covering us for a while on your blog. For those of you who don’t know, Dana’s blog is called BriefingsDirect. It’s a ZDNet blog. You’ve covered our funding and platform launch, and some of our partner announcements, and we’ve had some time to sit down as well and chat.

In a posting this summer you wrote about Pike County -- a school district in Kentucky where IBM has successfully sold a 1,400-seat DaaS deployment. That’s something that we’re going to dive deeper into on a couple of the webinars in this series.

You’ve stated that the broad affection for the term “cloud computing,” and all that sticks to it nowadays, will mean broad affection, too, for DaaS. Can you elaborate on that?

Entering transitional period

Gardner: Well, sure. As I said, we’re entering a transitional period, where people are really re-thinking how they go about the whole compute and IT resources equation. There’s almost this catalyst effect or the little Dutch boy taking his finger out of the hole in the dike, where the whole thing comes tumbling down.

When you start moving toward virtualization and you start re-thinking about infrastructure, you start re-thinking the relationship between hardware and software. You start re-thinking the relationship between tools and the deployment platform, as you elevate the virtualization and isolate applications away from the platform, and you start re-thinking about delivery.

If you take the step toward terminal services and delivering some applications across the wire from a server-based host, that continues to tip this a little bit toward, “Okay, if I could do it with a couple of apps, why not look at more? If I could do it with apps, why not with desktop? If I can do it with one desktop, why not with a mobile tier?”

If I’m doing some web apps, and I have traditional client-server apps and I want to integrate them, isn’t it better to integrate them in the back-end and then deliver them in a common method out to the client side?

So we’re really going through this period of transformation, and I think that virtualization has been a catalyst to VDI and that VDI is therefore a catalyst into cloud. If you can do it through your servers, somebody else can do it through theirs.

If we’ve managed the wide-area network issues, if we have performance that’s acceptable at most of the application performance criteria for the bell curve of users, the productivity workers, we just go down this domino line of one effect after another.

When we start really seeing total costs tip as a result, the delta between doing it yourself and then doing it through some of these newer approaches is just super-compelling. Now that we’re entering into an economic period, where we’re challenged with top-line and bottom-line growth, people are not going to take baby steps. They’re going to be looking for transformative, real game-changing types of steps. If you can identify a class of users and use that as a pilot, if you can find the right partners for the hosting and perhaps even a larger value-added services portfolio approach, you start gaining the trust, you start seeing that you can do IT at some level but others can do it even better.

The cloud providers are in the business of reducing their costs, increasing their utilization, exploiting the newer technologies, and building data centers primarily with a focus on this level of virtualization and delivery of services at scale with performance criteria. Then, it really becomes psychology and we’re looking at, as you said earlier, the trust level about where to keep your data and that’s really all that’s preventing us now from moving quite rapidly into some of these newer paradigms.

The cost makes sense. The technology makes sense. It’s really now an issue of trust, so it’s not going to happen overnight, but it will happen with baby steps and the domino effect, as you work toward VDI internally and toward the cloud with a couple of apps and certain classes of users. Before long that whole dike is coming down, and you might see only a minority of your workers actually doing things in the conventional client-server, full-PC, local run-time and data storage mode.

I think we’re really just now entering into a fairly transformative period, but it’s psychologically gaining ground rapidly.

Fisher: Yes, definitely. Rachel, Robin, any thoughts on Dana’s comments?

Psychological issues


Chalmers: I think that’s exactly right, and I think the psychological issues are really important, as Dana has described them. One of the huge barriers to adoption of earlier models of this kind of remote desktop, like terminal services, has been just that they’re different from having a full, rich Windows user experience in front of you.

The example people keep returning to is the ability to have a picture of your kids as your desktop wallpaper. It seems so trivial from an IT point of view, but just the ability to personalize your own environment in that way turned out to be a major obstacle to adoption of the presentation servers in that model.

You can do that in a virtual desktop environment. You can serve that exact same desktop environment to the same employee, whether she’s working from San Francisco or London. Because the VDI deployment model offers the same, and in some ways better, experience to that employee, it becomes much easier to persuade organizations to adopt this model and the cost savings that come along with it.

So, we underplay the psychological aspects at our peril. People are human beings and they have human foibles, and technology needs to work around that rather than assuming that it doesn’t exist.

Bloor: Yes, I’d go along with that. What you’ve actually got here is a technology where the ultimate user won’t necessarily know whether they’ve got a local PC. Nowadays, you can buy devices where the PC itself is buried in the screen.

So, they may psychologically, in one way or another, have some kind of feeling of ownership for their environment, but if they get the same environment virtually that they would have physically, they’re not going to object. Certainly, some of the earlier experiences that users have had show that problems go away. The number of desk-side visits required for support, when all you’ve got is a thin client device on the desktop, diminishes dramatically.

The user suddenly has the responsibility for various things that they would do within their own environment lifted completely from them. So, although you don’t advertise it as such, there’s actually a win for the user in this.

Chalmers: That’s exactly right, and fewer desktop visits -- fewer IT guys coming around to reboot your blue-screened desktop -- translates directly into increased productivity.

Fisher: Yes, and what we like to talk about at Desktone is just this notion of anytime, anywhere. It’s one thing to get certain limited apps and services. It’s another thing to be able to get your PC environment, your corporate persona everywhere you go.

If you need to work from home for a couple of days a week, or in emergency situations, it’s great to be able to have that level of mobility and flexibility. So, we totally agree.

Now let’s move over now to Robin Bloor. He's a partner at Hurwitz and Associates. He’s got over 20 years experience in IT analysis and consultancy and is an influential and respected commentator on many corporate IT issues. His recent research is focused on virtualization, desktop management, and cloud computing.

Robin, in your post about Desktone on your blog -- “have Mac will blog” -- a title I love -- you mentioned that you were surprised to see the DaaS or cloud value prop for client virtualization emerge this early on. You mentioned that you found our platform architecture diagram to be extremely helpful in explaining the value prop, and I just would like you to provide some more color around those comments.

Tracking virtualization

Bloor: Sure. I really came into this late last year and, in one way or another, I was looking at the various things that were happening in terms of virtualization. I’d been tracking the escalating power of PC CPUs and the fact that, by and large, in a lot of environments the PC is hardly used.

If you do an analysis of what is happening in terms of CPU usage, then the most active thing that happens on a PC is that somebody waves their mouse around or possibly somebody is running video, in which case the CPU is very active. But it became obvious that you could put a virtualized environment on a PC.

When I realized that people were doing that, I got interested in the way that people were actually doing it, and there are a lot of things out there if you actually look at it. It absolutely stunned me that a cloud offering became available earlier this year, because that meant that somebody would actually have had to be thinking about this two years ago in order to put together the technology that would enable such an offering.

So just look at the diagram and you can certainly see why, from the corporate point of view, if you’re somebody that’s running a thousand desktops or more, it’s a problem. It is a problem in terms of an awful lot of things, but mostly it’s a support issue and it’s a management issue. When you get an implementation that involves changing the desktop from a PC to a thin client, and you don’t put anything into the data center, it improves.

You’ve now got a situation where you don’t need cages in the data center running PC blades or running virtualized blades to actually provide the service. You don’t need to implement the networking stuff, the brokering capability, boost the networking in case it’s clashing with anything else, or re-engineer networks.

All you do is you go straight into the cloud and you have control of the cloud from the cloud. It’s not going to be completely pain free obviously, but it’s a fairly pain-free implementation. If I were in the situation of making a buying decision right now, I would investigate this very, very closely before deciding against it, because this has got to be the least disruptive solution. And if the apparent cost of ownership turns out to be the same or less than any other solution, you’re going to take it very seriously.

Fisher: Absolutely. Rachel, Dana, thoughts and comments?

Chalmers: I agree, and I love this diagram. It’s the one that really conveyed to me how cloud-hosted desktop virtualization might work, and what the value prop is to the IT department, because they get to keep all the stuff they care about -- all the user data, all the authentication and authorization, all of the business apps. All they push out is support for those desktops, which frankly had been pushed out anyway.

There’s always one guy or gal in the IT organization who is hiking around from desktop to desktop installing antivirus or rebooting machines. Now, instead of that person hiking around the offices, they are employed by the service provider, sitting in a comfy chair, and being ergonomically correct.

Rational architecture

Gardner: Yes, I would say that this is a much more lucid and rational architecture. We’ve found ourselves, over the past 15 or 20 years, sort of the victim of a disjointed market roll out. We really didn’t anticipate the role of the Internet, when client-server came about. Client-server came about quickly just after local area networks (LANs) were established.

We really hadn’t even rationalized how a LAN should work properly before we were off and running, bringing in browsers and TCP/IP stacks. So, in a sense, we've been tripping over and bouncing around from one very rapid shift in technology to another. I think we’re finally starting to think back and say, “Okay, what’s the real rational, proper architectural approach to this?”

We recognize that it’s not just going to be a PC on every desktop. It’s going to be a broadband Internet connection in every coat pocket, regardless of where you are. That fundamentally changes things. We’re still catching up to that shift.

When I look at a diagram like Desktone’s, I say, “Ah-ha!” Now that we fairly well understand the major shifts that have occurred in the past 20-25 years, if we could start from a real computer-science perspective, if we could look at it rationally from a business and cost perspective, how would we properly architect how we deploy and distribute IT resources? We’re really starting to get to a much more sensible approach, and that’s important.

Bloor: Yes, I would completely go with Dana on that. From an architect’s point of view, if nobody had influenced you in any way and you were just asked to draw out a sense of a virtualization of services to end users, you would probably head in this direction. I have no doubt about it. I’ve been an architect in my time, and it’s just very appealing. It looks like what Desktone DaaS has here is resources under control, and we’ve never had that with a PC.

Fisher: Well, that was great, and I really appreciate you guys taking the time to answer my questions. With the remaining 10 minutes, I’d like to turn it over for some Q&A.

The question coming in has to do with server-based computing app delivery with respect to this model.

This is something that comes up all the time. People say, “We’re currently using terminal services or presentation server,” which is obviously what we use for app deployment. How does that application deployment model fit into this world? And to kick off the discussion I’ll tell you that at Desktone we view what we’re doing very much as the virtualization of the underlying environment of the actual PC itself and the core OS.

That doesn’t change the fact that there are still going to be numerous ways to deploy applications. There’s local installation. There’s local app virtualization. There’s the streaming piece of app virtualization. And, of course, there’s server-based computing which is, by far, the most widely used form of virtualized application delivery.

And, not to mention the fact that in our model there is a private LAN connection between the enterprise and the service provider. In some cases, the latency around that connection is going to warrant having particularly chatty applications still hosted back in the enterprise data center on either Citrix and/or Microsoft terminal servers. So, I don’t view this as being really a solution that cannibalizes traditional server-based computing. What do you guys think?

Chalmers: I think that’s exactly accurate. You mentioned right at the front of the call that Citrix is an investor in Desktone. Clearly the VDI model itself is one that extends the application of terminal services from traditional task workers to all knowledge workers -- those people who are invested in having a picture of their kids on the desktop wallpaper.

I think cloud-hosted desktop virtualization extends that again, so that, for example, if you’re running a very successful terminal services application and you don’t want to rip that out—very sensible, because ripping and replacing is much more expensive than just maintaining a legacy deployment of something like that—you can drop in XenDesktop. XenDesktop can talk quite happily to what is now XenApp, the presentation server deployment.

It can talk quite happily to a Desktone back-end and have all of its VDI virtual desktops hosted on a hosting provider. If you’ve got a desk full of Wall Street traders, it can also connect them up to blade PCs, dedicated resources that are running inside the data center.

So XenDesktop is an example of the kind of desktop connection broker you’re going to see -- one that’s as happy supporting traditional server-based computing or the blade PC model as it is being the front end for a true VDI deployment.

Bloor: Yes, I’d go along with that. One of the things that’s interesting in this space is that there are a number of server-based computing implementations that have been, what I’ll call, early attempts to virtualize the PC, and you may see some drift away from some of those implementations. I know that certain banks did this purely for security reasons.

You know the virtualized PC is as secure as a server is. So you may get some drift from one kind of implementation to another but, in general, what’s going to happen is that the virtual PC is just the same as a physical PC. So, you just continue to do what you did before.

Fisher: Absolutely. I do agree that there definitely will be a shift and that again – back to the use cases -- people are going to have to say, “Okay, here are the four reasons we did server-based computing,” not, “We did server-based computing because we thought it was cool.”

Maybe in the area of security, as Robin mentioned, or some other areas, those reasons for deployment go away. But certainly, dealing with latency over LAN, depending on where the enterprise data center sits, where the user sits and where the hosting provider sits, there very well may still be a compelling need to use server-based computing.

Okay. We’ve got about five minutes left. There was an interesting question about disaster recovery (DR), using cloud-hosted desktops as DR for VDI, and this is a subject that’s close to my heart. It will be interesting to hear what you guys have to say about it. There is actually already the notion of some of our service-provider partners looking at providing desktop disaster recovery as a service. It’s almost like a baby step to full-blown cloud-hosted desktops.

Maybe you don’t feel comfortable having your users’ primary desktops hosted in the cloud, but what about a disaster recovery instance, in case their PC blue-screens and is not recoverable, and they’re in some kind of time-critical role and need to get back up and running?

Or, as is probably more commonly thought of, what if they’re the victims of some sort of natural disaster and need to get access to an instance of the corporate desktop? What do you guys think about that concept?

Bloor: There are going to be a number of instances where people just go to this, particularly banks where, because of the kind of regulatory or even local standards they operate, they have to have a completely dual capability. It’s a lot easier to have dual capability if you’re going virtual, and I’m not sure that you would necessarily have the disaster recovery service virtual and the real service physical. You might have them both virtual, because you can do that.

This is just a matter of buying capacity, and the disaster-recovery capacity is only required at the point in time when you actually have the disaster. So, it’s got to be less expensive. Certainly, when you’re thinking about configuration and change management for those environments, when you’ve got completely dual environments, this makes the problem a lot easier.

Gardner: I think there are literally dozens of different security and risk-avoidance benefits to this model. There’s the business continuity issue, the fact that cloud providers will have redundancy across data centers, across geographies, the fact that there is also intellectual property risk management where you can control what property is distributed and how, and it keeps it centrally managed and check-in and check-out can be rigorously managed. And, then there’s also an audit trail as to who is there, so there’s compliance and regulatory benefits.

There’s also control over access to privileges, so that when someone changes jobs it’s much easier to track what applications they would and wouldn’t get, in that you’ve basically re-factored their desktop from scratch the next day they start the new job. So, the risk-compliance and avoidance issues are huge here, and for those types of companies or public organizations where that risk and avoidance issue is huge, we’ll see more of this.

I think that the Department of Defense and some of the intelligence communities have already moved very rapidly towards all server-side control and, for the same reasons that would make sense for a lot of businesses too.

Chalmers: Disaster recovery is always top of mind this time of year, because the hurricanes come around just in time for the new financial round of budgeting. But really it’s a no-brainer for a small business. For the companies that I talked to that are only running one data center, the only thing that they’re looking at the cloud for right now is disaster recovery, and that applies as much to their desktop resources as to their server resources.

Fisher: Great. Well, we are just about out of time, so I want to close out. First, lots of information about what we’re doing at Desktone is up on our website, including an analyst coverage page under the News and Events section, where you can find more information about Robin’s, Dana’s, and Rachel’s thinking, as well as that of other analysts.

We do maintain a blog at www.desktopsasaservice.com. We have a number of webinars up and coming to round out the series. We’ll be talking to Pike County, a customer of IBM’s and a user of the Desktone DaaS solution, and we’ll be speaking with our partner IBM. We’ll also have the opportunity to have Paul Gaffney, our COO, on a couple of our webinars as well.

So with that I will thank our terrific panel. Rachel, Dana and Robin, thank you so much for joining and for a fantastic conversation on the subject, and thank you so much everyone out there for attending.

View the webinar here.


Monday, November 10, 2008

Solving IT Energy Use Issues Requires Holistic Approach to Efficiency Planning and Management

Transcript of a BriefingsDirect podcast with HP’s Ian Jagger and Andrew Fisher on the role of energy efficiency in the data center.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions and you’re listening to BriefingsDirect. Today we present a sponsored podcast discussion on the critical and global problem of energy management for IT operations and data centers.

We will take a look at energy demand, supply, and costs, and ways to develop a complete management perspective across the entire IT energy equation. The goal is to find innovative means of conservation so that existing facilities don't need to be expanded or replaced.

Good energy management is not as simple as just using less hardware or better cooling; it really requires an enterprise-by-enterprise examination of the "many sins" of energy and resource misuse.

In order to put into practice longer-term benefits, behaviors, and measurements, the whole picture needs to be taken into consideration. The goal, of course, is to promote a low-risk matching of energy supply and cost with the lowest IT energy demand possible.

To help us examine these important topics we’re joined by Ian Jagger. He is the Worldwide Data Center Services marketing manager in Hewlett-Packard's (HP) Technology Solutions Group. Welcome to our podcast, Ian.

Ian Jagger: Thank you, happy to be here.

Gardner: We’re also joined by Andrew Fisher. He is the manager of technology strategy in the Industry Standard Services group at HP. Welcome, Andrew.

Andrew Fisher: Thank you very much.

Gardner: Let's take a look first at the broad picture of larger trends in this whole energy equation. As I say, it's not simple. There are a lot of moving parts, and there are a lot of megatrends and external factors involved as well.

I suppose the first thing to look at is capacity. I’d like to direct this to Ian. How critical is the situation now where large enterprises with vast data centers are actually facing an energy crisis?

Jagger: I think it's quite critical, Dana. Data centers typically were not designed for the computing loads that are available to us today, and they have been caught out. Enterprise customers are having to consider strategically what they need to do with respect to their facilities and their ability to bring in enough power to supply the future capacity needs coming from their IT infrastructure.

Gardner: Now, at the most general level, is this a case where there is not enough electricity available or that the growth and demand of electricity is just growing so quickly, or both?

Jagger: I think it's both, and there is also a third level, which is how adequate is the cooling design within the data center itself. So, it is a question of how much power is available, of how much can be drawn into the data center, what is the capacity of the data center, and as I said, how that is cooled.

Gardner: There are also, of course, green concerns. There are issues around carbon and pollution, and mandates around these issues. We are also faced with regulatory issues and compliance that are of a separate nature, and many organizations are behaving more like service bureaus, where they have service-level agreements.

So there is not too much wiggle room in terms of what needs to be adhered to from compliance and/or service levels. What are the variables that companies need to first start focusing on in order to better execute their management of energy?

Fisher: That's a good question. One of the most important things to understand is how they have allocated power within that data center. There are new capabilities that are going to be coming online in the near future that allow greater control over the power consumption within the data center, so that precious capacity that's so expensive at the data center level can be more accurately allocated and used more effectively.

Gardner: This does vary from region to region, and HP being a global company, perhaps we should also take a look at the fact that in the United States, for example, there are limitations from the grid. The capacity of moving energy, even if it can be generated, is an issue, and in the U.K., apparently in the London area at least, there’s been somewhat of a lockdown in terms of use restrictions around the Olympics.

Ian, perhaps you could fill us in a little bit on some of the regional impacts and how this is supercritical perhaps in some areas more than others.

Jagger: I think you have just got it with the example you have used. It does vary region to region, depending on the capacity of the grid, the ability to distribute it along the grid and how that impacts customers geographically. It's not just about power distribution and generation, but it's also about the nascent situation with respect to compliance.

In Europe, we are now seeing countries, particularly the U.K., take the lead in terms of carbon reduction. Legislation is coming online, kicking in from 2010, with compliance requirements from 2009, where the top 5,000 companies or so -- companies that use a given volume or value of energy -- have to justify that usage in terms of purchasing carbon credits, which are set against them.

Each of those companies -- and this includes HP U.K. -- needs to establish what its energy usage is and show a roadmap for how it can reduce that year over year toward the legislation that's in play there. It's only a matter of time before that's applied in the U.S. too.

Gardner: Now, we recognize that this is a large problem. Many components -- I have heard the phrase “many sins” -- are involved. I wonder if either of you, or perhaps both, could fill us in a little bit about what are the types of past behaviors, approaches, mentalities, and philosophies about energy that need to be reexamined in order to get closer to where we need to go.

Jagger: I think the contrast among the silos -- facilities and real estate on one side and IT on the other -- is rooted in the contradiction between cost and availability. You mentioned service levels earlier. From an IT perspective, that’s service-level agreements to the business in terms of availability, the uptime of equipment. But, from the real estate perspective, the facility perspective, it's about cost control and CAPEX and OPEX with respect to the facility itself.

They have tended to operate in independent silos, but now the general problem we have, which is overriding both of those departments, is the cost of energy. Typically the cost of energy is now approaching 10 percent of IT budgets and that's significant. It now becomes a common problem for both of these departments to address. If they don't address it themselves then I am sure a CEO or a CFO will help them along that path.

Gardner: How about it, Andy? What sort of sins unfortunately have people overlooked as a result of lower energy cost in the past, but that really can't be overlooked now?

Fisher: First of all, it's a complex system. When you look at the total process of delivering the energy from where it comes in from the utility feed, distributing it throughout the data center with UPS capability or backup power capability, through the actual IT equipment itself, and then finally with the cooling on the back end to remove the heat from the data center, there are a thousand points of opportunity to improve the overall efficiency.

To complicate it even further, there are a lot of organizational or behavioral issues that Ian alluded to as well. Different organizations have different priorities in terms of what they are trying to achieve. So, there is rarely a single silver bullet to solve this complex problem.

You need to take a complete end-to-end approach that involves everything from analysis of your operational processes and behavioral issues -- how you are configuring your data center, whether you have hot-aisle or cold-aisle configurations, these sorts of things -- to optimizing the efficiency of the power delivery and making sure that you are getting the best performance per watt out of your IT equipment itself. Probably most importantly, you need to make sure that your cooling system is tuned and optimized to your real needs.

One of the biggest issues out there is that the industry, by and large, drastically overcools data centers. That reduces their cooling capacity and ends up wasting an incredible amount of money. So we have at HP a wide range of capabilities, including our EYP Mission Critical Facilities Services to help you analyze those operational issues as well as structural ones, and make recommendations, in addition to products that are more efficient as well.

Gardner: You raise a couple of interesting points. It's hard to fix something that you can't measure. What are the basic measurement guidelines for energy use?

I have heard of Data Center Infrastructure Efficiency (DCiE). There is also Power Usage Effectiveness (PUE). How does a large organization start to get a handle on this? As you say, or as has been mentioned, it's been a siloed problem in the past; now it needs to be tackled head on.

Jagger: You have touched on the principal benchmarks that go through the industry there -- the PUE and the infrastructure efficiency ratio, which is the inverse of the PUE. Put very simply, the PUE is the total power coming into the data center over the amount of power required for computing purposes. So how efficient is that? How efficient is the data center in service of the overall power that is required for computing?

In other words, if you need one kilowatt for computing, and your PUE is two-and-a-half, then you need to be bringing 2.5 kilowatts to the wall to be able to run those computers.
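A minimal sketch of that arithmetic, using the standard PUE and DCiE definitions and the same example numbers Ian gives (the function names are just for illustration):

```python
# PUE = total facility power / IT (computing) power; DCiE is its inverse.

def facility_power_needed(it_power_kw, pue):
    """Power you must bring 'to the wall' to support a given IT load."""
    return it_power_kw * pue

def dcie(pue):
    """Data Center Infrastructure Efficiency, expressed as a percentage."""
    return 100.0 / pue

print(facility_power_needed(1.0, 2.5))  # 2.5 kW at the wall for 1 kW of computing
print(dcie(2.5))                        # 40.0 percent infrastructure efficiency
```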

They are not perfect, and there are industry bodies that are looking to drive greater elements of perfection out of this. For example, PUE is a Green Grid rating system that is generally used, but the Green Grid themselves are looking to migrate to the inverse ratio, the data center infrastructure efficiency ratio, and use that going forward before they develop the next level.

The principal problem is that they tend to be snapshots in time and not necessarily a great view of what's actually going on in the data center. But, typically we can get beyond that and look over annualized values of energy usage and then take measurements from that point.

The best way of saving energy is, of course, to turn the computers off in the first place. Leaving underutilized computers running is no way to save energy.

Gardner: That dovetails, of course, with a number of other initiatives we have underway, such as virtualization, application modernization, winnowing out apps that aren't being used very much. Service-oriented architecture (SOA) encourages reuse and making sure that common services are supported efficiently.

There is also data center unification and modernization of hardware. All these things come together and ultimately increase utilization, which then changes the energy equation.

The question is how do we make these things work in concert? How is there some coordination between getting the right mix on energy along with some of these other initiatives? Why don't we start with Ian on that?

Jagger: They feed off each other. If you look at virtualizing the environment, then the facility design, or the cooling design, for that environment would be different. In a virtualized environment, suddenly you are designing for something around 15-35 kilowatts per cabinet, as opposed to 10 kilowatts per cabinet. That requires completely different design criteria. You’re using roughly one-and-a-half to three-and-a-half times the wattage per cabinet in comparison. That, in turn, requires stricter floor management.

But having gotten that improved design around our floor management, you are then able to look at what improvements can be made from the IT infrastructure side as well. I guess Andy would have some thoughts there.

Fisher: There is a wide range of opportunities. The latest generation of server technology is something like 325 percent more energy efficient, in terms of performance per watt, than older equipment. So, simply upgrading your single-core servers to the latest quad-core servers can lead to incredible improvements in energy efficiency, especially when combined with other technologies like virtualization.
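As a rough illustration of what a performance-per-watt gain of that magnitude can mean for consolidation, the sketch below walks through the arithmetic. The server counts and wattages, and the reading of "325 percent more efficient" as roughly 3.25x, are assumptions for illustration, not HP figures.

```python
# Hypothetical sketch: estimate IT power after a server refresh, assuming the
# newer servers deliver N times the performance per watt of the old ones.
# All figures below are illustrative, not vendor-measured numbers.

old_server_count = 100
old_server_watts = 400          # average draw per legacy single-core server (assumed)
perf_per_watt_gain = 3.25       # "325 percent more efficient" read as ~3.25x (assumption)
new_server_watts = 450          # average draw per new quad-core server (assumed)

old_total_watts = old_server_count * old_server_watts

# At equal aggregate performance, each new watt does perf_per_watt_gain times
# the work of an old watt, so the required IT wattage shrinks accordingly.
new_total_watts = old_total_watts / perf_per_watt_gain
new_server_count = round(new_total_watts / new_server_watts)

print(f"Old estate: {old_total_watts/1000:.1f} kW across {old_server_count} servers")
print(f"Refreshed estate: ~{new_total_watts/1000:.1f} kW across ~{new_server_count} servers")
```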

Gardner: Once these organizations start hitting the wall on energy, it behooves them to look at some of these other initiatives, rather than just saying, “Wow, we need another data center at 10, 20, maybe 100 million dollars.” Is that more the philosophy here -- be smart not big?

Fisher: Absolutely. There is a substantial opportunity to extend the life of your data center, and I recommend that you give HP a call and talk to us here. We have a wide range of things that we can help with.

Ian can talk to the services here in a second, but from a product perspective, we’re bringing to market new capabilities in terms of efficiency of the platforms to help you reduce that total energy consumption of the IT equipment itself. We’re also working on unique ways of reclaiming existing capacity. Instead of having to build another 50 or 100-million-dollar data center, you can live longer in the data center that you have.

Gardner: I suppose one of the fundamental shifts recently with the cost of energy going up considerably is that the return on investment (ROI) equation shifts as well. If I were selling systems I need to know, given the harsh economic climate, that I have a good ROI investment story -- that if you invest $10, you can save $15 over X amount of time. The energy factor now plays a much larger role in that.

Perhaps, Andy, you could tell us a little bit about how the cost of energy, instead of an afterthought, is now a forethought when it comes to deciding whether these modernization efforts are worthwhile.

Fisher: We look at it both from an OPEX, or your monthly cost of electricity -- and that’s rising rapidly, as the cost of energy goes up -- as well as from a CAPEX perspective, with your investment in your data center.

The first thing is to optimize your CAPEX investment, the money you have already sunk into your data center. You want to make sure that from an investment perspective you don't have to lay out another huge chunk of money to build another data center. So, number one, we want to optimize on the CAPEX side and make sure that you are using what you have most effectively.

But, from an operational cost perspective, it's really about reducing your total energy consumption. You can approach that initially from optimizing the energy use of your IT equipment itself, because that is core to the PUE calculation that we talked about.

If you are able to reduce the number of watts that you need for your IT equipment by buying more energy efficient equipment or by using virtualization and other technologies, then that has a multiplying effect on total energy. You no longer have to deliver power for that wattage that you have eliminated and you don't have to cool the heat that is no longer generated.
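A quick sketch of that multiplying effect: each IT watt eliminated also eliminates the delivery losses and cooling overhead that would have supported it, which the PUE captures. The PUE, load reduction, and energy price below are illustrative assumptions.

```python
def facility_savings_kw(it_watts_saved_kw: float, pue: float) -> float:
    """Total facility power avoided when the IT load drops, at a given PUE.

    Each IT kilowatt eliminated also eliminates the (pue - 1) kilowatts of
    delivery losses and cooling that would have been spent supporting it.
    """
    return it_watts_saved_kw * pue

# Illustrative: consolidation removes 40 kW of IT load in a PUE 2.0 facility
saved_kw = facility_savings_kw(40.0, 2.0)        # 80 kW off the utility feed
hours_per_year = 8760
price_per_kwh = 0.10                             # assumed energy price, $/kWh
annual_savings = saved_kw * hours_per_year * price_per_kwh
print(f"{saved_kw:.0f} kW avoided, roughly ${annual_savings:,.0f} per year")
```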

Beyond that, there are other opportunities. We’ve introduced products that help you optimize your cooling, which can typically account for 50 percent or more of your total energy budget. So, by making sure that you fine-tune your cooling to meet the actual demand of your IT, you can make substantial reductions on your monthly electric bill.

Gardner: Now, how does the Adaptive Infrastructure relate to this as well? It seems that would also be a factor in some of these equations?

Fisher: We are really talking about the Adaptive Infrastructure in action here. Everything that we are doing across our product delivery, software, and services is really an embodiment of the Adaptive Infrastructure at work in terms of increasing the efficiency of our customers' IT assets and making them more efficient.

Gardner: Let's go back to Ian. It seems that, as with many areas like manufacturing or application development, the history has been that you build something and then you throw it over the wall, and someone else has to put it into production.

I expect that maybe data centers have had a similar effect when it comes to energy. We set up requirements. We build based on performance requirements. And then, oh, by the way, energy issues come as an afterthought.

Is that true, and is that the outmoded method? Are we now, in a sense, building for energy conservation from the get-go? Has it become more of a city- or town-planner mentality, rather than simply an architect approach? What's the mindset shift that's taking place?

Jagger: That's a good question. I think you have to address it at all the levels you talked about. At the company level or the enterprise level, you are absolutely right. That has been the mentality or the approach: we need a data center, and we base it where we are. Nothing else matters. Base it adjacent to us.

Energy costs or supply have not been a consideration. Now they are. That's on the basis that you don't have any other complexities coming at you. But, if you are just looking at the strategy for your data centers in terms of business growth and your capacity, storage, and availability requirements that you have going forward, and you do the math, you can understand the size of the data center you need and how that works with respect to virtualization strategies and so on.

On top of that, we have the latest complexities, where you simply don't have the forward view on things. In just the last few days we’ve seen, for example, Wells Fargo buying Wachovia. I’m not sure how many data centers are within those two organizations, but you can bet they are in the scores. Suddenly, we have real estate and IT managers who are scratching their heads thinking, “How on earth do we bring all this together?” There are different approaches now being taken at the enterprise level.

At the architects’ level, it would be irresponsible for an architect today not to build energy efficiency into a greenfield building, or any building, not just a data center. It’s pretty much been established that it just makes sense, if you are designing a new building, to build energy efficiency into it, because your operating costs will far outweigh the capital expenditure on that building rather quickly.

I’m not sure how a company like HP can influence at the planning level, but where we can influence is at the industry level and at the governmental level. We have experts within the company who sit on think tanks and governance boards. We advise bodies like the EPA. We sit with the leading organizations in energy building design, and discuss how governance with respect to green building design can be built and can be moved forward within the market.

That's how we can start to influence at the industry level in terms of having industry standards created, because if the industry doesn't create them itself, then governmental bodies will do it.

Gardner: It also seems that, because it's so difficult to predict all the variables, a need for modularity has emerged in data center design, so that the end result can be amended and adjusted without all the other parts being interconnected and brittle. It’s similar to software, where you want modularity so that you gain flexibility and the system isn't too brittle. Can you explain more deeply how that relates to best energy management practice?

Jagger: The approach that we at HP are now taking is to move toward a new model, which we call the Hybrid Tiered Strategy, with respect to the data center. In other words, it’s a modular design, and you mix tiers according to need.

What has gone on in the past, and still today, is that as an enterprise you may have a requirement for a Tier 4 level of structure with respect to the data center, putting out 100 watts per square foot, for example. Let’s say, for the sake of argument, that's a 100,000-square-foot data center, but you don't need all of that data center infrastructure at a Tier 4 topology.

If you look at how you’re going to structure your virtualization program, you may only need 50 percent of it at Tier 4 for high density computing, and the rest of it can be at a Tier 2 level.

If that were the situation, you would be saving roughly 25 percent of your capital costs on building that data center. Just doing the simple math, if you are looking at 100,000 square feet, that's in the region of $40 to $50 million. So, there are some clear consequences of moving to a hybrid tiered, or modular, model.
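The shape of that calculation can be reproduced with rough per-square-foot build costs, reading the $40 to $50 million as the savings. The tier cost rates below are assumptions chosen only to illustrate the arithmetic, not HP pricing.

```python
# Hypothetical build costs per square foot by tier (assumed for illustration).
COST_PER_SQFT = {"tier4": 2000, "tier2": 1000}

total_sqft = 100_000

# All Tier 4 versus a 50/50 hybrid of Tier 4 and Tier 2.
all_tier4 = total_sqft * COST_PER_SQFT["tier4"]
hybrid = (total_sqft * 0.5 * COST_PER_SQFT["tier4"]
          + total_sqft * 0.5 * COST_PER_SQFT["tier2"])

savings = all_tier4 - hybrid
print(f"All Tier 4:   ${all_tier4/1e6:.0f}M")
print(f"Hybrid 50/50: ${hybrid/1e6:.0f}M")
print(f"Savings:      ${savings/1e6:.0f}M ({savings/all_tier4:.0%} of the all-Tier-4 cost)")
```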

Gardner: Are there some examples out there that you can give us? It would be great if you could name some companies, or at least give us use-case scenarios where organizations have adjusted, adopted some of these practices, implemented some of these standards, used common measurement practices, and have resisted having to spend $40 million on CAPEX, but also perhaps utilizing their existing resources even better.

Jagger: I think HP is the biggest example. We are the biggest example of designing modularity into our own data centers.

Beyond HP, you could look at supercomputing centers and high-density computing -- the Internet service providers, the Googles of this world, and Microsoft themselves. The companies that require high levels of resilience, high density, and supercomputing typically are moving in this direction. We are pioneering this with our in-house capabilities. We are at the leading edge of this level of innovation.

Gardner: Let's take a look forward a little bit. What can we expect? Obviously, this makes more sense over time. Green issues are going to become more prevalent. Carbon is going to become more regulated. Costs are going to become prohibitive for waste, and the amount of data moving around increases all the time.

Perhaps you can explain the roadmap, the future, some of the concepts around optimizing data centers -- without pre-announcing things, but at least, give us a sense of what's coming.

Fisher: How about if I talk to that one first. One thing that was just announced is relevant to what Ian was just talking about. We announced recently the HP Performance-Optimized Data Center (POD), which is our container strategy for small data centers that can be deployed incrementally.

This is another choice that's available for customers. Some of the folks who are looking at it first are the big scale-out infrastructure Web-service companies and so forth. The idea here is you take one of these 40-foot shipping containers that you see on container ships all over the place and you retrofit it into a mini data center.

In the HP implementation, it's a very simple layout. You just have a single row of 50U racks -- I believe there’s something like 22 of them in this 40-foot container. There’s a single hot aisle and a single cold aisle, with overhead cooling that takes the exhaust hot air from the back, cools it, and delivers it to the front.

Using the HP POD, you can install any standard equipment into the 19-inch racks and build out a very efficient data center with a very low, industry-leading PUE from a cooling perspective. So that's yet another option on the HP side.

From the product side of HP here, one of the biggest things we’re seeing is that power and cooling capacity is allocated by facilities in a very conservative manner. It's hard to understand exactly how much energy is required for each individual server or blade enclosure. So, there’s typically quite a bit of a conservative reserve that is allocated on top of what's probably actually being consumed.

In fact, if it's in the purview of the facilities team to allocate that power, they would treat it like any other piece of electrical equipment and just look at the maximum power rating or requirement for that equipment. What we’re seeing is that this can actually overstate the power requirement by up to three times what is actually needed.

So, there’s an incredible opportunity to reclaim that reserve capacity, put it to good use, and continue to deploy new servers into your data center, without having to break ground on a new data center.
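Here is a minimal sketch of that stranded capacity, comparing a nameplate-based power budget with measured draw. The per-server figures are assumptions chosen to reflect the "up to three times" overstatement mentioned above, not measured HP data.

```python
# Illustrative comparison of nameplate-based and measured power budgeting
# for a rack of servers. Figures are assumptions, not measured HP data.

servers_per_rack = 40
nameplate_watts = 750        # max rating on the power supply label (assumed)
measured_peak_watts = 250    # observed peak draw under real workload (assumed)

budgeted_kw = servers_per_rack * nameplate_watts / 1000
actual_kw = servers_per_rack * measured_peak_watts / 1000
stranded_kw = budgeted_kw - actual_kw

print(f"Facilities budget: {budgeted_kw:.1f} kW per rack")
print(f"Measured need:     {actual_kw:.1f} kW per rack")
print(f"Reclaimable:       {stranded_kw:.1f} kW "
      f"({budgeted_kw/actual_kw:.1f}x overstatement)")
```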

Very soon, you’re going to be hearing some exciting news from HP about how we’re going to provide the opportunity for fine-tuned control of exactly how much power the servers in the IT racks are going to actually use.

Gardner: So, not only are we moving toward modularity at a number of levels, we’re bringing more intelligence to bear on the problem?

Fisher: Yes. A key to addressing this problem is to have accurate measurement and the ability to have predictability and control of the actual power consumption of the core IT equipment that the whole infrastructure is supporting.

Gardner: Alright. How about a roadmap, from a strategic point of view, of methodologies and best practices? Ian, what new innovations can we expect along those lines?

Jagger: In all this complexity, it's a relatively simple path to follow. It all starts with discovery -- where are we today? Given what we know about business direction, where do we need to get to? What do we need to be capable of from a business technology perspective, taking a holistic or hybrid view of the facilities and IT departments combined? What is it that they need to produce to support the business going forward?

Then, you have a gap. The next question is how do we fill that gap, how do we get there? Various strategies can accrue from that, depending on what your needs are.

We would look at that with customers, and we would sit down with them and ask some pretty basic questions. Do you need to be where you are today? If you are in Phoenix, does the data center need to be in Phoenix, or could it be in Washington state? It’s cooler there, and you therefore don't have the energy costs that you would in Phoenix. So, let's have a look at that.

What is your position from a corporate social-responsibility perspective with respect to the environment? How visible are you in addressing that in comparison to your industry peers? What are the pressures on you to do that? So, let's have a look at alternate energy sources with respect to your data center.

For example, we have just announced our San Diego facility, which is now powered by solar panels. We are involved quite heavily right now in Iceland, providing geothermal technologies for data centers. So, a question there would be, can you be in Iceland? One issue there would be the question of latency. There are several questions that you would ask in terms of direction and how to get there.

Having answered those, you would move into planning and design phases, and we would address those at that point too. We would build into the operation of any given new site, or retrofitted site, the processes for service management across the facility and IT structures. Service management is now not only about IT, but about the facility as well, and how that is brought together in one motion.

So, it's pretty much a simple lifecycle approach within a complex field, and that will get you there. Along the way, we would be able to give orders of magnitude of cost and typical ROI based on the strategies that you are looking to undertake.

Gardner: It certainly sounds like being efficient and getting this larger management capability over energy and facilities and resources is becoming a core competency and not an option. Is that fair to say?

Jagger: Yes. I think the spin on that is, going back to the example I just used of Wells Fargo and Wachovia, who do you turn to who can help you with that? You don't face that every day of your life, either within facilities or within IT, and you need help. You need to reach out for where the help is.

Traditionally, in our industry, as we have been discussing, it has tended to be siloed into real estate and into IT. What’s now required is the holistic view of infrastructure. I mean the physical infrastructure and the IT infrastructure. Customers need to reach out to firms that they feel comfortable reaching out to.

I think it was Andy who actually conducted this survey -- so correct me if I’m wrong, Andy. We recently undertook a survey in each of our worldwide regions, all with enterprise customers. The finding was that the more issues the customers themselves had to address with respect to the environment and energy, the more likely it was that they were going to come to HP as their vendor of choice.

Fisher: That's correct.

Gardner: Well, clearly if you don't have the holistic view you are going to have to learn how to get one, right?

Fisher: Right.

Gardner: Ian, let me direct this to you. I suppose there is some thought around environmental benefits and green IT, in which people believe that this is an additional cost or an expense. It seems to me, though, from what we have been discussing, that moving towards good environmental practices is actually moving towards good energy management practices too.

Jagger: That's absolutely right. It is not a choice of one or the other. The business outcomes that come from energy management are also environmental outcomes, but there are apparent barriers to implementing environmental solutions, which, as you just said, are actually energy management solutions. Primarily, they revolve around the lack of an identifiable ROI, or payback period, for any green improvement, and then the measurement of that improvement itself.

More recently, we’ve been able to show customers the typical examples of how they can move through that environmental curve or that energy management curve going back to the industry standard benchmarks of PUE.

By showing them a rough order-of-magnitude cost to move grade by grade through the ranking system of energy efficiency, we show them what that cost would be, what the return would be in terms of carbon savings and dollar savings, and what the payback period would be based on those dollar savings.
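The payback arithmetic is straightforward once the improvement cost and the resulting savings are estimated. The figures in the sketch below are made-up inputs intended only to show the shape of the calculation.

```python
# Hypothetical payback calculation for an energy-efficiency improvement.
# All inputs are illustrative assumptions.

improvement_cost = 500_000          # cost to move up one efficiency grade, $
kwh_saved_per_year = 2_000_000      # annual energy saved by the improvement
price_per_kwh = 0.10                # assumed energy price, $/kWh
kg_co2_per_kwh = 0.5                # assumed grid carbon intensity

annual_dollar_savings = kwh_saved_per_year * price_per_kwh
annual_carbon_savings_t = kwh_saved_per_year * kg_co2_per_kwh / 1000
payback_years = improvement_cost / annual_dollar_savings

print(f"Annual savings: ${annual_dollar_savings:,.0f} and "
      f"{annual_carbon_savings_t:,.0f} tonnes of CO2")
print(f"Payback period: {payback_years:.1f} years")
```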

So, we can have a very strategic, yet tactical, view on how to approach this. A customer can take a larger view in terms of how far they want to go with their environmental approach and balance that with their energy-management approach.

There is obviously a curve here. The larger the investment in improving energy management, then the greater the return. At some point, that return slows down, because of the amounts of actual investment you have put in. So, there is a curve there, and we can show you how to get to any point along that curve.

Gardner: Excellent! We have been discussing the large global problem around energy management and how it has become more critical for IT operations -- energy not as an afterthought, but really the forethought and an overriding stratagem for how to conduct business in IT.

I want to thank our guests today. We have been joined by Ian Jagger. He is the Worldwide Data Center Services marketing manager in HP's Technology Solutions Group. Appreciate your input Ian.

Jagger: You’re very welcome, Dana, I am happy to have taken part.

Gardner: Andrew Fisher, the manager of technology strategy in the Industry Standard Services group at HP. Thank you, Andy.

Fisher: You are welcome.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

For more information on energy-efficiency in the data center, read the whitepaper.

For more information about HP Energy Efficiency Services.

For more information on HP Thermal Logic technology.

For more information on HP Adaptive Infrastructure.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast with HP’s Ian Jagger and Andrew Fisher on the role of energy efficiency in the data center. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.