
Friday, October 09, 2009

IT Architects Seek to Bridge Gap Between Cloud Vision and Reality

Transcript of a sponsored BriefingsDirect podcast discussion on properly developing a strategy for cloud computing adoption.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on properly developing a roadmap to cloud computing adoption in the enterprise. The popularity of the concepts around cloud computing has caught many IT departments off-guard.

While business and financial leaders have become enamored of the expected economic and agility payoffs from cloud models, IT planners often lack structured plans or even a rudimentary roadmap of how to attain cloud benefits from their current IT environment.

New market data gathered from recent HP workshops on early cloud adoption and data center transformation shows a wide and deep gulf between the desire to leverage cloud methods and the ability to dependably deliver or consume cloud-based services.

So, how do those tasked with a cloud strategy proceed? How do they exercise caution and risk reduction, while also showing swift progress toward an "Everything as a Service" world? How do they pick and choose among a burgeoning variety of sourcing options for IT and business services and accurately identify the ones that make the most sense, and which adhere to existing performance, governance and security guidelines?

It's an awful lot to digest. As one recent workshop attendee said, “We're interested in knowing how to build, structure, and document a cloud services portfolio with actual service definitions and specifications.”

Well, here to help us better understand how to begin such a responsible roadmap for cloud computing adoption, we're joined by three experts from HP. Please welcome with me Ewald Comhaire, global practice manager of Data Center Transformation at HP Technology Services. Welcome, Ewald.

Ewald Comhaire: Hi, Dana.

Gardner: We're also joined by Ken Hamilton, worldwide director for Cloud Computing Portfolio in the HP Technology Services Division. Welcome, Ken.

Ken Hamilton: Thanks. It's great to be here.

Gardner: And, we're also joined by Ian Jagger, worldwide marketing manager for Data Center Services at HP. Welcome, Ian.

Ian Jagger: Hey, Dana. Equally happy to be here.

Gardner: Cloud is an emerging and, frankly, a huge phenomenon driven, I think, by economics and the need for business transformation, as well as new IT service delivery capabilities. I wonder what implications this has for IT leaders. They're in the trenches, so to speak, and they are trying to make cloud happen. Ewald, how are these folks reacting right now?

A new benchmark

Comhaire: Independent of how we define cloud -- and there are obviously lots of definitions out there -- and also independent of what value cloud can bring or what type of cloud services we are discussing, it's very clear that the cloud service providers are basically setting a new benchmark for how IT specific services are delivered to the business.

Whether it's from a scalability, a pay-per-use model, or a flexibility and speed element or whether it's the fact that it can be accessed and delivered anywhere on the network, it clearly creates some kind of pressure on many IT organizations. The pressure really comes from three different angles.

The first one is whether IT organizations should allow their businesses to consume those cloud services directly, without any control by IT. Or, should IT build up what we call service-brokering capabilities, where they select the cloud services that are of value to the company? Then, maybe they wrap some additional value around them and offer those to the business, with IT being basically the broker of these capabilities.

The second pressure is, if you don't want it to be outsourced, should you compete and compare yourself to a cloud service provider and transform your own internal IT operations to function more like an internal type of cloud with the same benchmarks on flexibility, scalability and what have you.

Third, of course, is that you will not have everything in the cloud, and some companies may also think about becoming a cloud service provider themselves. This is the third element, if your company requests you to play a role in becoming a cloud service provider. How, as an IT organization, do you support that request coming from the business?

So, all in all, there are quite a few questions that the businesses have about their IT organization, which then lead to our recommendation that the company should think about building out the cloud strategy.

Gardner: I'm curious, Ewald. Is this something new in terms of how they approach the problem, or are there some precedents they can draw on? Perhaps there are precedents in procurement and how they secured partners or vendors in the past, outsourced in the past, or did work with software as a service (SaaS). Where do they start in terms of looking to the past in order to leapfrog into the future?

Comhaire: There are some things that are unique about the cloud. First of all, in the cloud, everything is about the service first, and then the technology will follow the service. That's something that companies will have to think about.

First, think in terms of the services they want to deliver and then fill in those services with the right technologies that achieve the goals of the service. Some companies may already have quite a bit of expertise there. For example, they may have started to build an internal shared-services environment, which may already have that service-oriented thinking behind it, but architecturally their services may not have the attributes that a typical cloud service has.

Working on the architecture

These companies will have tremendous benefits from that thinking model -- organizing for a service-centric delivery model -- but they may need to work a little bit on the architecture. For example, how can they address scalability and the way that supply and demand are aligned to each other, or maybe how they charge back for some of these services in a more pay-as-you-go way versus an allocation-based way.

These companies will already have a big head start. Of course, if you're working on an internal cloud, have things like virtualization in place, have consolidated your environment, and have put more service-management processes in place around ITIL, this will benefit the company greatly when it wants to roll out a cloud strategy in the near future.

Gardner: Now, I'm aware that HP is doing quite a bit of work around cloud. I wonder what you are finding in the field as you talk to these customers. Ian, what are some of the top reasons you are hearing from these folks as they try to educate themselves and grapple with this transition? What are the top reasons they are seeking to do this in the first place?

Jagger: That's an interesting question, because I think the pressure is coming from two sides. One is from the business side, in terms of what cloud means for the business and how a business can become more competitive and gain a lead within the marketplace in which it operates. From the IT side, there is a need to be proactive about looking at what cloud is all about and how cloud can ultimately bring advantage to the business.

So, there are two thoughts. One is that the business wants to gain a competitive advantage, and two, it is up to IT to do something about that. As Ewald was just saying, it could be that the business is already reaching out to services that are within the cloud and accessing those services. There is an onus on IT to keep its own shop in order and become the provider of those services itself.

Gardner: And, if they're looking for this increased agility, I suppose that also would cut out the need for a lot of upfront capital expenditures. It seems that in a down economy you'd want to be able to be agile, but without having to go through a long budget process and jump through a variety of hoops in order to get the funding. I suppose the pay-as-you-go model makes some sense, now more than ever.

Jagger: That's right, and it certainly does cut out some hurdles. If there are critical applications that you seek for your business, and they're available through the cloud, either from a service provider or through the shared services model that Ewald talked about, that's going to be far more efficient and cost-effective, subject to ... terms of the pay-per-use and security. But, once security is addressed, there are definite cost and efficiency advantages.

Gardner: Ken, we've heard for a number of years about the need for agility, something that's not necessarily new and I am sure won't go away. We'll be hearing about the need for agility 10 or 20 years from now. But, are there any specific business drivers for this interest in cloud, any business outcomes that people most frequently expect to happen as they move toward this model?

Growing interest

Hamilton: As we pointed out, we're seeing a growing interest in cloud specifically around cost savings. Certainly, in this economy, cost savings and switching from a capital-based model to an operational model, with the flexibility that implies, is something that a number of companies are interested in.

But, I'd also like to underscore that, as we've discussed, the definition of cloud and the variety of different, and sometimes confusing possibilities around cloud, are things that customers want to get control of. They want to be able to understand what the full range of benefits might be. The major ones we see right now are underscored by cost savings and financial flexibility in going from a capital-based model to an operational model.

By the same token, we're finding that they're also interested in exploring far-reaching benefits. As Ewald pointed out, some companies who have not traditionally been in the “technology business,” may find that cloud offers them the opportunity to be able to change their business model.

We're dealing with a couple of telecommunications providers, and even some financial institutions in Asia Pacific Japan (APJ), who are looking at cloud as an opportunity to expand their market and to take on new capabilities, in addition to some of the economies of scale that come out of this.

In addition to that, we touched on faster time to market and agility. In a typical internal environment, it may take weeks or months to deploy a server populated in a particular fashion. In that same internal cloud environment, that time to market can be as little as hours or minutes, along with some of the increased functionality.

So, cost savings as well as agility and new business capabilities really are the three main types of benefits that we are seeing customers go after.

Gardner: Ken, doesn't this in some ways fundamentally shift the role of IT? It seems, based on what you just said, that IT's role is now about betting on those who are supplying services, perhaps crafting a service-level agreement (SLA) or even getting at the contractual negotiations and then monitoring the quality of those services.

Is that a different role for IT? In the past, they would have simply defined requirements and then delivered the services themselves.

Hamilton: I wouldn't say it's a different role, but rather a greatly enhanced role. IT does procure services today and does a fairly decent job of it. Because of the service orientation, this puts a greater emphasis on understanding not just the technological underpinnings, but the contractual service level elements and the virtual elements that go with this.

A number of implications

As we know, virtualization is a big part of this, but when virtualization comes into play, there are a number of different implications for the service structure, availability, security, risk, and those types of elements. So, there is a greater emphasis today in terms of laying those out, defining them, and procuring the right structure to be able to meet their specific business needs.

Gardner: I suppose their concern, then, is defining the requirements and then measuring the outcome, rather than being concerned with what that middle phase would be -- how you actually do it. Is that fair?

Hamilton: Right. If there is a big shift, it's moving from an orientation around physical infrastructure -- dealing with the products and then delivering that as their service -- to a grouping of capabilities with service levels, particularly involving external service providers at that level of reliability, or acting as an external service provider. So, there is a greater level of expectation on the part of customers for that level of service delivery.

Gardner: Going back to the market data that we've gathered through these HP workshops on cloud adoption and data-center transformation, Ewald, as folks now start to proceed in this direction toward a cloud model, recognizing some shifts and looking for those agility and monetary benefits, what's holding them back? What do they say are their top inhibitors?

Comhaire: Dana, that's a very interesting question. We often talk about all the benefits, but obviously, specifically for our enterprise customers, there's also an interesting list of inhibitors. In every workshop that we do, we ask our participants to rank what they believe are the biggest inhibitors -- either for themselves in consuming cloud services or, if they want to become a provider, what they believe will inhibit their potential customers from acquiring or consuming the services they are looking for. Consistently, we see five key themes coming up as major inhibitors.

The first one is about the loss of control. That means I am now totally dependent on my cloud-service provider in my value chain. A lot of companies have value chains that they've built, but what if some of the parts of that value chain are in the cloud? Have I lost too much control? Am I too dependent?

There are some interesting stories that are absolutely real. A satellite service provider was relying on a cloud service provider for one of the key components in bringing television signals to your home, and that cloud provider was taken over by one of the satellite company's competitors.

As a result, suddenly there were interruptions in the service quality and, of course, this leads into the reality that there could be a loss of control if you have parts of your value chain out in the cloud. That's a real concern, not a perceived one.

Also, there is lack of trust in your cloud service provider. That could have to do with the question of whether they'll still be in business five years from now, and whether I'll have to take back control if they go out of business. We know this happened in the dot-com world, and bringing services back in-house when the provider went out of business cost companies a lot of money.

It could also have to do with things like price hikes. What guarantees, for example, that the price of a CPU per hour won't double once the provider has achieved a big enough critical mass? They can decide to double the cost at any moment in time.

Security and vulnerability

Another area that's called out frequently is the whole security and vulnerability case. Some of that is perceived. If you architect it well, best-practice cloud-service providers can do a great job of actually being more secure than a traditional enterprise dedicated environment.

But, still, there are some things that are more difficult to control. For example, you become more vulnerable to attacks when you are on the Internet. A well-known service provider becomes a natural target of all kinds of attacks. Of course, that's more difficult to prevent architecturally. It's something you will have to deal with as a cloud service provider, and something that consumers will have to take into account.

There are also difficulties around identity management and everything required to integrate security between the consumer and the provider, which adds complexity.

Confidentiality is the fourth concern, and it mostly involves the data, because, more and more, the data goes with the service. What guarantees do we have, for example, that an employee at a service provider can't take that data and sell it to a government or some other third party?

Also, data has to go over the network. There is always a risk of interception and there is always a risk, if you have a multi-tenant environment, that somehow, through an operation error, some data that should be confidential is exposed to another party. That is definitely a concern for a company.

The final one is reliability -- is the service going to be up enough of the time? Will it be down at moments that are not convenient? I go back to the example of the satellite company. What if the downtime is only five minutes but it's always at 8 p.m., the prime time of satellite television? It may only be five minutes, but it can take you an hour to recover, and it's coming at the most inconvenient moment.

So, this whole reliability concern and disaster recovery is obviously a key element, along with one that comes out more and more in recent workshops: How do I integrate with the cloud?

That is, if I need to make a little change in the business process, with an application in the cloud, can the cloud-service provider make that change? How can I process integration in such a distributed world? That gets a little bit more difficult than in a more traditional enterprise architecture.

Gardner: I hope that a lot of the cloud providers are listening, because it seems they have quite a workload to overcome some of these issues around security and control. I was a little bit surprised that issues about standards and neutrality weren't in the top five or six. Do you perceive them as being a little bit lower, or do they perhaps fall into these areas of security and confidentiality?

Standards and integration

Comhaire: That's a great question, Dana. They definitely fall in the category of integration. If the cloud service provider has done everything in a unique way, the service may still be of great value, great quality, and maybe great cost, but it's difficult to integrate because it doesn't follow certain standards, or because it's difficult to integrate with the standards the enterprise already has inside its data center. So, standards and integration difficulty are the same issue expressed in different terms.

Gardner: I wonder if either Ken or Ian has any further points that they've observed in the field about inhibitors. What's holding customers back as they pursue cloud adoption?

Hamilton: I'd just underscore the concerns with security. Particularly in a multi-tenant environment, that seems to be the number one issue that the customers are concerned with. So, first is security, and number two would be reliability. There have been some large failures of some top-named cloud providers. I think they're getting better, but still not quite ready for prime time in some cases.

Gardner: Ian, any thoughts?

Jagger: I'd probably just add to that, not just from the IT side in terms of security, but also from the business side in terms of trade secrets. If you're a provider of cloud services, you have that application out there, delivered through the cloud, which is the business model you've used to get to market. But how secure are my trade secrets in doing that, and how do I address national privacy laws as well? There are some issues that need to be taken into account.

Gardner: Ken, let's go back to you. As customers need to get better informed and to lay out their roadmap, what options do they have? Perhaps you could tell us what would you try to offer them as a way to get started?

Hamilton: Well, the first, and the most important thing, is to make sure that the executive decision makers have a common understanding of what they might want to achieve with cloud. To that end, we've developed a Cloud Discovery Workshop, which is really a way of being able to frame the cloud decision points and to bring the executive decision makers together.

They can have a highly interactive discussion around what key business objectives they might have and what risks and opportunities there might be, many of which we've covered already. It's also to make sure that the organization is crystallized around their specific areas of opportunity and benefit, as well as the specific risks and investments that might be required to focus on this customized view that they have.

This Cloud Discovery Workshop does a great job of engaging the executive team in a very focused amount of time, as little as an afternoon, to be able to walk through the key steps around defining a common definition for their view of cloud. It's not just our view or some other vendor's view, but their definition of cloud and the benefits that they might be able to accrue.

They specifically drill that down into particular areas with a return on investment (ROI) focus: the infrastructure capabilities that might be required, as well as the service management, operational, and some of the more esoteric capabilities that go hand in hand with addressing security, privacy, and other areas of risk. It's just making sure that they've got a very clear way of being able to document that, and then move forward into more detailed design, if that's the direction they want to move in.

Gardner: Ewald, you mentioned earlier that those shops that have adopted service performance and IT shared-services activities have a bit of a head start. But for them or others, what sort of roadmap process do you offer or recommend?

A better view

Comhaire: From the workshop, customers basically get a better view of the strategy they want to go for. We have an initial discussion on the portfolio, and we also talk a little bit about the desired state. In the roadmap service, we actually take that to the next level. So, we really start off with that desired state.

We have defined a capability model with five levels of capability. We don't want to call it a maturity model, because for every company, the highest maturity isn't necessarily their desired state or their end state. So, it's unfair to name it "maturity." It's more a capability or an implementation model for the cloud. We have five levels of capability and then six domains of capabilities.

The first thing we determine is an appropriate next state to put as an objective. If you want to become a cloud-service provider, but you haven't done the initial service enabling in your company, you have to go through this intermediate route. You need to say, "First, I will start to build something like an internal shared service that starts bringing technology together and delivering it as a service internally, before I step into the outside market."

So, you first build credibility within your own business, before you actually build credibility in the market. That could be a very logical step on the roadmap. That's Module 1 -- helping customers determine the next desired state on the roadmap and then the end state.

Module 2 is really about, "We know where you want to be now. Let's get into more detail about what projects we need to execute to get there. What are the gaps? What resources are needed, and what skills do those resources need?"

In the cloud, you may have new roles, like a service delivery manager. The role of a capacity manager is totally different in the cloud than in the non-cloud. In the case of the cloud, the capacity manager will need to make decisions to acquire new assets to grow the business. They can't go to a finance manager each time. The process would take way too long, and the scalability would not be there corresponding to a cloud model.

So, the roles are different. We look at how many people you need, in what roles, with what descriptions and what capabilities. We also look at the timing of what to do when, what the dependencies are of other parts of the organizations, and how to phase that in over a timeline. This is typically what Module 2 is.

Module 3 of the roadmap is then, "We now know where we want to be. We have a good idea of the gaps, the projects that need to be done, the resources, and their skills. How do we sell it to the company and how do we present our conclusion to an executive board or to the executives in the company?"

It's how we build a business case and how we present that business case financially, value-wise, and strategy-wise to the corporation. Those are really the three components of our roadmap service, which can then lead into a cloud design and an implementation activity later on.

Gardner: I imagine that, as with any transformative activity, this is a multi-year process. On that roadmap, do you have any timeline that gives us a rough sense of how quickly these organizations can get to the cloud, or should they not try to rush things?

Keep it simple

Comhaire: One core piece of advice we always give is, "Keep it simple." Rather than bring out a whole portfolio of cloud services, start with one. That one service may not have all the functionality you're dreaming of, but become good at doing a more simplified thing faster, rather than trying to overdo it and ending up with a five- or six-year project, by which time the whole market will have changed before you can roll out. A lot of the best practice in building the roadmap is to simplify it, so it does not become a four- or five-year project that takes way too long to execute.

Gardner: Well, we are just about out of time. I hope we can go around the group and get your impressions on how to get started, or any specific areas people could go to for further information or deeper background. Let's start with you, Ian. Where do people get started, and how should they proceed?

Jagger: I think they should talk to somebody like Ewald. That would be a great start.

But I would probably just like to add, Dana, that what we've talked about here is simplifying what cloud is about for the customer, ignoring to an extent how the industry itself defines cloud. We're looking at this in three ways: do you want to be a service provider, do you want to source cloud services, or do you want to build your own shared infrastructure as a private cloud?

And, we've just been talking about roadmap. You actually have the options of being any or all of the above. You can do all three of them. Your business model could be as a cloud service provider. Your sourcing model could be from an outsourcing perspective. What you're not outsourcing, you may actually prefer to build as a shared services internal model.

You need to determine what your priorities are. I'm not being facetious when I say talk to Ewald. Ewald's team are absolutely the correct starting point. But your local HP sales representative would also be an ideal start.

Gardner: Alright. Ken Hamilton, your ideas about first steps.

Hamilton: I would really underscore what Ian has just said. Keep it simple. Make sure that you've got a very good team understanding of what it is that you want to get done, and leverage experts. We have Ewald and a number of highly qualified people in each of our regions. We just really want to keep it simple, understand what it is that you're trying to get done and leverage the expertise of people who have done it in the past.

Gardner: Quickly back to you, Ewald. I keep hearing the term cloud computing, but if we were to substitute service-oriented architecture (SOA), it would probably be just as accurate. Is there a commonality now between the two?

Comhaire: The two have many things in common. I would still argue that there are also some differences. A cloud architecture will use SOA as its underlying foundation, but the composability of an SOA is not just in how you design the application.

It also has to do with how the services in your cloud can be used together to achieve a bigger integration. That's not just at the level of the application, but it's also the IT services themselves. Are they composable and can they be combined to achieve a bigger thing than just the one service and its value as a standalone item?

Gardner: Thanks so much. We've been discussing some of the ways in which IT organizations and enterprises can move towards cloud and reduce risk, but perhaps accelerate attaining the business benefits of an "Everything as a Service" world. Here to help us with this, we've been joined by Ewald Comhaire, global practice manager, Data Center Transformation at HP Technology Services. Thanks so much, Ewald.

Comhaire: Thanks, Dana.

Gardner: We've also been joined by Ken Hamilton, worldwide director for Cloud Computing Portfolio within HP Technology Services. Thanks to you too, Ken.

Hamilton: Thank you, Dana.

Gardner: And Ian Jagger, worldwide marketing manager for Data Center Services at HP. Thank you, Ian.

Jagger: My pleasure, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Transcript of a sponsored BriefingsDirect podcast discussion on properly developing a strategy for cloud computing adoption. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Monday, October 05, 2009

HP Roadmap Dramatically Reduces Energy Consumption Across Data Centers

Transcript of a sponsored BriefingsDirect podcast on strategies for achieving IT energy efficiency.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Gain more insights into data center transformation best practices by downloading free whitepapers at http://www.hp.com/go/dctpodcastwhitepapers.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on significantly reducing energy consumption across data centers. Producing meaningful, long-term energy savings in IT operations depends on a strategic planning and execution process.

The goal is to seek out long-term gains from prudent, short-term investments, whenever possible. It makes little sense to invest piecemeal in areas that offer poor returns, when a careful cost-benefit analysis for each specific enterprise can identify the true wellsprings of IT energy conservation.

In this discussion, we'll examine four major areas that result in the most energy policy bang for the buck -- virtualization, application modernization, data-center infrastructure best practices, and properly planning and building out new data-center facilities.

By focusing on these major areas, but with a strict appreciation of the current and preceding IT patterns and specific requirements for each data center, real energy savings -- and productivity gains -- are in the offing.

To help us learn more about significantly reducing energy consumption across data centers, we are joined by two experts from HP. Please welcome John Bennett, worldwide director, Data Center Transformation Solutions at HP. Thanks for joining, John.

John Bennett: Delighted to be here with you today, Dana. Thanks.

Gardner: We are also joined by Ian Jagger, worldwide marketing manager for Data Center Services at HP. Good to have you with us, Ian.

Ian Jagger: And, equally happy to be here, Dana.

Gardner: John Bennett, let's start with you, if you don't mind. Just upfront, are there certain mistakes that energy-minded planners often make, or are there perhaps some common misconceptions that trip up those who are beginning this energy journey?

Bennett: I don't know if there are things that I would characterize as missteps or misconceptions.

We, as an industry, are full of advice around best practices for what people should be taking a look at. We provide these wonderful lists of things that they should pay attention to -- things like hot and cold aisles, running your data center hotter, and modernizing your infrastructure, consolidating it, virtualizing it, and things of that ilk.

The mistake that customers do make is that they have this laundry list and, without any further insight into what will matter most to them, they start implementing these things.

The real opportunity is to take a step back and assess the return from any one of these individual best practices. Which one should I do first and why? What's the technology case and what's the business case for them? That's an area that people seem to really struggle with.

Gardner: So, there needs to be some sort of a rationalization for how you approach this, not necessarily on a linear, or even what comes to mind first, but something that adds that strategic benefit.

Cherry picking quick wins

Bennett: I am not even sure I'd characterize it as strategic yet. It's just understanding the business value and cherry picking the quick wins and the highest return ones first.

Gardner: Let's go and do some cherry picking. What are some of the top, must-do items that won't vary very much from data center to data center? Are there certain universals that one needs to consider?

Bennett: We know very well that modern infrastructure, modern servers, modern storage, and modern networking items are much more energy efficient than their predecessors from even two or three years ago.

So, consolidation and modernization, which reduces the number of units you have, and then multiplying that with virtualization, can result in significant decreases in server and storage-unit counts, which goes a long way toward affecting energy consumption from an infrastructure point of view.

That can be augmented, by the way, by doing application modernization, so you can eliminate legacy systems and infrastructure and move some of those services to a shared infrastructure as well.

On the facility side -- and we are probably better off asking Ian to go through this list -- running a data center hotter is one of the most obvious ones. I saw a survey just the other day on the Web. It highlighted the fact that people are running their data centers too cold. You should sweat in a data center.

Lots of techniques, like hot and cold aisles and looking at how you provide power to the racks and the infrastructure, are all things that can be done, but the list is well understood.

Because he is more insightful in this and experienced in this than I am, I'll ask Ian to identify some of the top best practices from the facilities and the infrastructure side, as well.

Jagger: Going back to the original point that John made, we have had the tendency in the past to look at cooling or energy efficiency coming from the technology side of the business and the industry. More recently, thankfully, we are tending to look at that in a more converged view between IT technology, the facility itself, and the interplay between the two.

But, you're right. There has been this well-published list of best practices, and the responsible managers, be they IT or facilities, have a lot to implement from those best practices. But starting with the easy ones first -- such as hot and cold aisles, blanking panels, being tidy with respect to cabling, running cabling under the floor, and items like that -- doesn't, as you alluded to, necessarily provide the best return on investment (ROI), simply because it's a best practice.

Areas of focus

When we undertake energy analysis for our customers, we tend to find the areas of focus would be around air management and environmental control -- very much to the point you mentioned about turning up the heat with respect to handling units -- and also recommendations around electrical systems and uninterruptable power supply (UPS).

Those are the areas of primary focus, and it can drill down from there on a case-by-case basis as to what works for each particular customer.

Gardner: Ian, what causes the variability from site to site? Clearly, there are some common things here that we have talked about, but what is it specifically that differentiates organizations, and they need to be mindful that they can't just follow a routine and expect to get the same results?

Jagger: Each customer has a different situation from the next, depending on how the infrastructure is laid out, the age of the data center, and even the climatic location of the data center. All of these have enormous impact on the customer's individual situation.

But there are instances where, for example, we could say to a customer, "Shut down some of your computer-room air conditioners (CRACs)," and we would identify which ones that should be shut down and how many of them. That clearly would create some significant savings. It doesn't cost anything to do that. Clearly, the ROI is much higher, because there is no capital expenditure that is required to shut down CRACs. That would be one good example.

Another example is placing floor grilles correctly, which would be on anybody's best practice list, and can have a significant impact in the scheme of things. So case-by-case would be the answer, Dana.

Gardner: Given that we have some best practices and some variability from organization to organization, let's look at these four basic areas and then drill down into each one. John Bennett, virtualization. What are the big implications for this? Why is this so important when we think about the total energy picture?

Bennett: If we look at the total energy picture and the infrastructure itself -- in particular, the server and storage environment -- one of the fundamental objectives for virtualization is to dramatically increase the utilization of the assets you have.

High utilization

This is especially a factor for industry standard servers. Historically, whether it's mainframes, HP-UX systems, or HP Integrity NonStop systems, customers are very accustomed to running those at very high utilization rates -- 70, 80, 90 percent plus.

With x86 servers, we see utilization rates typically in the 10 percent range. So, while there are a lot of interesting benefits that come from virtualization from an energy-efficiency point of view, we're basically eliminating the need for a lot of server units by making much better use of a smaller number of units.

This can be further improved, as I mentioned earlier, by taking a look at the applications portfolio and doing application modernization, which has two benefits from an energy point of view.

One of them is that it allows the new applications to run on a modern infrastructure environment, so they can participate in the shared environment. Secondly, it allows you to eliminate legacy systems, sometimes very old systems, where very old is anywhere from 5 to 10 years in age or more, and eliminate the power consumption that those systems require.

Those are the benefits of virtualization, and very clearly anyone dealing with either energy cost issues or energy constraint issues or with a green mandate needs to be looking very seriously at virtualization.

Gardner: What sorts of paybacks are typical with virtualization? Is this a rounding error, a significant change, or is there some significant variability in terms of how it pans out?

Bennett: No, it's significant. It's not a rounding error. We're talking about collapsing infrastructure requirements by factors of 5, 6, or 10. You're going from 10 or 20 old servers to perhaps a couple of servers running much more efficiently. And, with modernization at play, you can actually increase that multiplication.

These are very significant from a server point of view. On the storage side, you're eliminating the need for sparsely used dedicated storage and moving to a shared, or virtualized, storage environment, with the same kind of cost-saving ratios at play. So, it's a profound impact in the infrastructure environment.
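
To make the arithmetic behind those consolidation ratios concrete, here is a minimal back-of-the-envelope sketch in Python. The server counts, wattages, and electricity price are hypothetical assumptions added for illustration; they are not figures from HP or the speakers.

# Hypothetical back-of-the-envelope consolidation arithmetic.
# All inputs are illustrative assumptions, not HP or customer data.

def annual_energy_cost(server_count, avg_watts, price_per_kwh=0.10):
    """Annual electricity cost for servers running 24x7, in dollars."""
    hours_per_year = 24 * 365
    kwh = server_count * avg_watts * hours_per_year / 1000.0
    return kwh * price_per_kwh

# Before: 20 lightly loaded legacy servers (roughly 10 percent utilization).
before = annual_energy_cost(server_count=20, avg_watts=400)

# After: a 10:1 consolidation onto 2 modern, virtualized hosts
# running at much higher utilization.
after = annual_energy_cost(server_count=2, avg_watts=500)

print(f"Before: ${before:,.0f} per year")
print(f"After:  ${after:,.0f} per year")
print(f"Estimated saving: ${before - after:,.0f} per year")

The point is only the shape of the calculation: collapsing the unit count drives the saving, and a facility overhead multiplier (often modeled as PUE) would scale it further.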

Gardner: Correct me if I am wrong, John, but virtualization helps when we want to whittle down the number of servers while we increase utilization. Doesn't virtualization also help you to expand and scale out as your demands might increase, but at a level commensurate with the demand, rather than in large chunks, which may have been the case without virtualization?

Rapid provisioning

Bennett: Oh, yes. I could talk for the rest of this podcast just about virtualization benefits, so don't let me get started. But, very clearly, we see benefits in areas like flexibility and agility, to use the marketing terms, but also the ability to provision resources very quickly. We see customers moving from operational models, where it would take them weeks or months to deploy a new business service, to where they are able to do it in hours.

We see them able to shift resources to where they are needed, when they are needed, in a much more dynamic fashion.

We see improvements in quality of service as a result of those things. We actually see availability and business-continuity benefits from these. So virtualization is -- in my mind, and I have said this before -- as fundamental a data center technology as servers, storage, and networking are.

Gardner: It seems that virtualization is the gift that keeps on giving. Not only do you get a significant reduction in energy cost when you replace older systems and bring in virtualization to increase utilization, but, as you point out, over time, your energy consumption, based on demand, would be low given this ability to provision so effectively and given the ability to get more out of existing systems.

Bennett: Yes, absolutely.

Gardner: Do you have any examples? Do you have a specific customers or someone that HP has worked with who has instituted virtualization and then has come back with an energy result?

Bennett: We have a number of examples. I'll just share one example here.

The First American Corporation, America's largest provider of business information, had the requirement of being able to better align its resources to business growth in a number of business services, and was also looking to reduce energy costs -- two very simple focuses. It implemented a consolidation and virtualization solution built around HP BladeSystems.

They are projecting that, on an annual basis, they're saving $714,000 in energy costs in the data center, and an additional $12,000 a year in endpoint power consumption outside of the data center.

Gardner: So that spells ROI pretty swiftly?

Bennett: Oh, yes, absolutely.

Gardner: Ian Jagger, let's go to you now on this next major topic -- application modernization. I've also heard this referred to as "cash for clunkers." What do we mean by that?

Investment opportunity


Jagger: There is a parallel that can be drawn there, in the sense of trading in those clunkers for cash that can be invested in modernization projects.

John has done a great job talking about virtualization and its parallel, application modernization. I'd like to pull those two together in a certain way. If we're looking, for example, at the situation where a customer needs a new data center, then it makes sense for that customer to look at all the cases put together -- application modernization, virtualization, and also data center design itself.

I mentioned the word “converged” earlier. Here is where it all stands to converge from an energy perspective. Data centers are expensive things to build, without doubt. Everyone recognizes that and everybody looks at ways not to build a new data center. But, the point is that a data center is there to run applications that drive business value for the company itself.

What we don't do a good job of is understanding those applications in the application catalog and the relative importance of each in terms of priority and availability. What we tend to do is treat them all with the same level of availability. That is just inherent in terms of how the industry has grown up in the last 20-30 years or so. Availability is king. Well, energy has challenged that kingship if you like, and so it is open to question.

Now, you could look at designing a facility where you have, within the facility, specific PODs (groups of compute resources) that would be designed according to the application catalog's availability and priority requirements, tone down the cooling infrastructure that is responsible for those particular areas, and retain specific PODs only for those applications that do require the highest levels of availability.

Just by doing that -- by converging the facility design with application modernization -- you take millions and millions of dollars out of data center construction costs, and of course out of the ongoing operating costs derived from burning energy to cool it at the end of the day.

Gardner: It sounds that with these PODs that are somewhat functionally specific we are almost mapping a service-oriented architecture (SOA) to the data center facility. Is that a fair comparison?

Jagger: Yeah. It's a case of understanding the application catalog, mapping the availability and prioritization requirements, allowing for growth, and allowing for certain levels of redundancy, so that ultimately you can build a POD structure within your data center.

You don't need UPS, for example, for everything. You don't need 2N redundancy, or twice the redundancy, for all applications. They are not all that critical, so why should we treat them as all being critical?
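
To make the POD idea concrete, here is a minimal sketch of the kind of mapping Ian describes: grouping an application catalog by availability requirement so that only the most critical POD carries UPS and the heaviest redundancy. The application names, tiers, and redundancy labels are invented for illustration and are not an HP model.

# Illustrative sketch: group an application catalog into PODs by
# availability requirement. All names and tiers are hypothetical.
from collections import defaultdict

catalog = [
    {"app": "order-processing", "availability": "critical"},
    {"app": "customer-portal",  "availability": "critical"},
    {"app": "reporting",        "availability": "standard"},
    {"app": "dev-test",         "availability": "best-effort"},
]

# Only the critical POD gets UPS and fully doubled power/cooling paths.
pod_design = {
    "critical":    {"ups": True,  "redundancy": "2N"},
    "standard":    {"ups": True,  "redundancy": "N+1"},
    "best-effort": {"ups": False, "redundancy": "N"},
}

pods = defaultdict(list)
for entry in catalog:
    pods[entry["availability"]].append(entry["app"])

for tier, apps in pods.items():
    spec = pod_design[tier]
    print(f"POD '{tier}': {apps} -> UPS={spec['ups']}, redundancy={spec['redundancy']}")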

Gardner: A big part of being energy wise is really just being smart about how you understand your requirements and then apply the resources -- not too much, not too little -- sort of the Goldilocks's approach -- just right.

Talk to your utility

Jagger: One of the smartest things you can do as a business, as an IT manager, is to go and talk to your utility company and ask what rebates are available for energy savings. They typically will offer you ways of addressing how you can improve your energy efficiency within the data center.

That is a great starting point, where your energy becomes measurable. Taking action to reduce your energy not only cuts your operating cost, but actually allows you to get rebates from your energy company at the same time. It's a no-brainer.

Gardner: Perhaps to reverse engineer from the energy source itself and find the best ways to work with that.

Jagger: Right.

Gardner: John Bennett, is there anything that you would like to add to the topic of application modernization for energy conservation?

Bennett: I'd like to comment a bit about the point made earlier about thinking smarter. What we are advising customers to do is take a more complete view of the resources and assets that go into delivering business services to the company.

It's not just the applications and the portfolio, which Ian has spoken of, and the infrastructure from a server, storage, and networking perspective. It's the data center facilities themselves and how they are optimized for this purpose -- both from a data center perspective and from the facility-as-a-building perspective.

In considering them comprehensively in working with the facilities team, as well as the IT teams, you can actually deliver a lot of incremental value -- and a lot of significant savings to the organization.

Gardner: Let's move on to our next major category -- data center infrastructure best practices. Again, this is related to these issues of virtualizing and finding the right modernization approaches. Are there ongoing ways in which business as usual in the data center does not work to our advantage when we consider energy? Let's start with you, Ian.

Jagger: As we talked about earlier in terms of best practices, it doesn't necessarily follow that a given best practice returns the best results. I think there has to be an openness on the part of the company itself about what actions it should take with respect to driving down energy costs and ensuring solid ROI on any capital expenditure that's required to do that.

Just for example, I mentioned earlier that shutting off CRAC units would be one of the best practices, and turning the temperature up produces certain results.

Payback opportunity

I am thinking of one particular customer, where we suggested that they shut down three CRAC units. Now, that would give them a certain saving, but the cost of the work that would have to be done equaled the amount of saving for the first year. So, there is a one-year payback there, and of course the rest is all payback after that point.

Yet, with the same customer, we looked at it and advised that if you use chillers with variable-speed compressors, instead of constant-speed compressors, there is certainly a capital requirement. In the case of this customer, it was about $300,000. But the return on that was $360,000 in one year.

That investment created a larger return on payback than simply shutting down the three CRAC units or indeed the correct placement of floor grilles within the data center.

That was a case not of best practice, but of something having a higher impact than best practice itself. It's not easy for customers to get into the detail of this. This is where expertise comes in. We need to go beyond the typical list of best practices into areas of expertise, so that expertise can highlight specific areas of payback and ROI, where the business or IT can actually justify the cost of doing the work.
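
The two measures Ian contrasts reduce to a simple payback comparison. The sketch below uses the chiller figures from the discussion (about $300,000 invested, about $360,000 returned in year one); the CRAC-shutdown numbers are hypothetical placeholders standing in for the one-year-payback case he mentions.

# Simple-payback comparison of two energy measures.
# Chiller figures come from the discussion; the CRAC numbers are
# assumed values illustrating the one-year-payback case.

def simple_payback_years(capital_cost, annual_saving):
    """Years needed for annual savings to recover the upfront spend."""
    return capital_cost / annual_saving

measures = {
    "Shut down 3 CRAC units":  {"capital": 40_000,  "annual_saving": 40_000},
    "Variable-speed chillers": {"capital": 300_000, "annual_saving": 360_000},
}

for name, m in measures.items():
    payback = simple_payback_years(m["capital"], m["annual_saving"])
    print(f"{name}: ${m['annual_saving']:,}/year saving, payback {payback:.2f} years")

On those assumptions, the chiller investment pays back in under a year and returns far more in absolute terms, which is the point Ian makes about impact versus a generic best-practice list.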

Gardner: John Bennett, when it comes to leveraging expertise in order to bring about these efficiencies and make the right choices on how to invest on this ongoing best practices continuum, how does HP enter into this?

What are some ways in which the expertise that you've developed as a company working with many customers over many years, come to bear on some of these new customers or new instances of requirements around energy?

Bennett: We can bring it to bear in a number of ways. For customers who are very explicitly concerned about energy and how to reduce their energy cost and energy consumption, we have an Energy Analysis Assessment service. It's a great way to get started to determine which of the best practices will have the highest impact on you personally, and to allow you to do the cherry-picking that we talked about earlier. We'll ask Ian perhaps to talk a little more about that service in a moment.

For customers who are looking at things a little more comprehensively, energy analysis and energy efficiency are two aspects of a data-center transformation process. We have a data center transformation workshop, again, not necessarily to “do it for a customer”, but to work with the customer in defining what their personal roadmap would look like.

One element that is considered is the facilities and the data centers themselves. It may very well end up saying, "You need a data-center strategy project. You need to have an analysis done of the applications portfolio against business services to understand how many data centers you have, where they should be, what kinds they should be, and what you should do with the data centers you have." Or, it may be that the data centers are not an issue for that particular customer.

Gardner: Another big area where cost plays into these operational budgets, the ongoing budgets, is labor. Is there a relationship between labor in IT operations and energy? Is there some way for these two very large line items within the IT budget -- labor and energy -- to play off of one another in some productive manner?

More correlative than causative

Bennett: Well, there is a strong relationship, especially on the infrastructure best practices that impact labor. I would treat it as correlative rather than causative, but as you ruthlessly simplify and standardize your environment, as you move to a common shared infrastructure, you actually can significantly reduce your management costs and begin the process of shifting your IT budget away from management and maintenance.

We see most customers spending 70 percent plus of their operational budget on management and maintenance; the opportunity is to flip that around to where they spend 70 percent of their operational budget on business projects. So, there is a strong set of benefits that come on the people side along with the energy side.

Now, for organizations that have green strategies in addition to having strategies for energy efficiency, one can use IT to help the organization be greener. Some very simple things are to make use of things like HP's Halo rooms for video conferencing and effective meetings without travel and to set up remote access with the corresponding security, so that people can work from home offices or work remotely. A lot of things can be done with green benefits as well as energy benefits.

Gardner: John, just briefly for our listeners, how do you distinguish green from energy conservation? What's the breakdown between them?

Bennett: Well, I am not sure how to characterize the breakdown, but energy is very typically focused either on reducing direct energy cost or reducing energy consumption.

The broader green benefits will tend to look at areas like sustainability, or having what some people refer to as a neutral carbon footprint. So, if you look at your supply chain backwards and out to your customers, you're not consuming as much of the earth's resources in producing your goods and services, and you are helping your people not consume resources needlessly in delivering the business services that they provide to their customers.

It's about recycling practices, using recycled goods, packaging efficiency, cutting out paper consumption, changing business processes, and using digitization. There are a lot of things one can do that are more than just “pure energy savings”. It often comes back to energy, but the whole idea of sustainability is a little bit of a different concept.

Gardner: Ian, I have heard many times the issue around cable management come up in best practices as well. What's the relationship between energy and cable management in a complex data center environment?

Jagger: Cable management, as you say, is one of those best-practice areas. There are a couple of ways you can look at that. One is the original plant design, with respect to cable ducting, and being accurate in that design.

Continuous operation

The second part is running an operation continuously. That operation is dynamic, so it's never going to stand still. Poor practice starts to take over after a while, and what was once well-designed and perhaps tidy is no longer the case. The cables are going to run here and there, you move this and you move that, and so on. So, that best practice isn't sustained.

You can simply just move back in and just take a fresh look at that and say, "Am I doing what I need to be doing with respect to cabling?" It can have a significant impact, because cabling does interrupt the airflows and air pressures that are running underneath the raised floor.

It's simply a case of getting back to the best practice in terms of how it was originally designed with respect to cable management. There are products in there that we ourselves sell, not just from a design perspective, but racking products that enable that to happen.

Gardner: On the topic of good design, let's move to our fourth major area -- data center building and facility planning. This is for those folks who might not want to, but need to, build a whole new data center. Or, if they've got an issue where they want to consolidate numerous data centers into a single facility, they might think about moving one or replacing it. A lot of different scenarios can lead to this.

How about starting with you, John Bennett? What do you need to consider when you are going to a whole new facility? I would think the first thing would be where to put the thing -- where to locate it.

Bennett: Actually, before you get to choosing the location, the real first question is, "What type of facility do you need?" Ian talked earlier about the hybrid data center concept, but the first questions are how big it needs to be and what it has to be to meet and support the needs of the business. That's the first driver.

Then, you can get into questions of location. One of the interesting things about location is that there is no right answer, and there is no right answer because of the qualitative aspects of a customer's decision making that come into play.

There are a lot of customers, for example, who have, and run, data centers downtown in cities like New York, Tokyo and London -- very expensive real estate, but it's important to the business to have their data centers near their corporate offices.

There are companies that run their data centers in remote locations. I know a major bank on the West Coast that runs their primary data centers in Iowa. You can have strategies for having regional data centers. I think that the Oracle data center strategy is to have data centers around the world, in three locations.

HP has six data centers, in three pairs, located in different parts of the United States, providing worldwide services.

Environmental benefits

You can choose to locate them at places that have environmental benefits, like geothermal energy. We have a new data center that we are opening up in the UK, which is incredibly energy efficient -- perhaps Ian can talk briefly about that -- taking advantage of local winds. You can take advantage of natural resources from a power point of view.

Gardner: The common philosophy here is to be highly inclusive, bringing in as many of the aspects that impact the decision and long-term efficiency as possible. This is what needs to take place top-down.

Bennett: There are a lot of factors at play. The priorities and weightings of those for individual customers will vary quite significantly. So all of those need to be taken into consideration.

If you are doing a new data center project, chances are this is something that is not just going to your CFO for approval, but probably to the board of directors. It's something that not only is going to have to have a business case in its own right, but have to meet the corporate hurdle rates and be viewed as an opportunity cost for the organization. These are very fundamental business decisions for many customers.

Gardner: Ian Jagger, when we look to these new facilities, factoring in a much lower energy footprint than may have been the case with older facilities might help make that decision and might prompt the board to move sooner rather than later.

Jagger: Right. Going to the point of actually where to locate it, some companies do have preferences for a data center to be located adjacent to where they are actually conducting business. That doesn't necessarily follow for everyone.

But the play of climate on a data center and energy efficiency is truly significant. We have a model within our Energy Efficiency Analysis that will model for our customers the impact of where a data center could be based, based on climate zone and the relative impact of that.

The statistics are out there in terms of breaking up climate zones into eight regions -- One being the hottest and Eight, the coldest -- and then applying humidity metrics on top of that as well. Just going from one to the other can double or even triple the power usage effectiveness (PUE) rating, which is the ratio of the total energy coming into the data center to the energy that actually powers the IT equipment. Siting the data center can have an enormous impact on cost and efficiency.
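
As a minimal illustrative sketch (not an HP tool or model), the PUE arithmetic looks like this in Python:

    # Illustrative only: PUE = total facility energy / IT equipment energy.
    # A PUE of 1.0 is the theoretical ideal; higher numbers mean more power is
    # going to cooling, fans, lighting, and other non-IT loads.
    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        return total_facility_kwh / it_equipment_kwh

    print(pue(2000, 1000))  # 2.0 -- half the power reaches the IT equipment
    print(pue(1400, 1000))  # 1.4 -- roughly 71 percent reaches the IT equipment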

Gardner: I imagine that your thoughts earlier about the PODs and the differentiation within the data center, based on certain new high-level requirements, could also now be brought to bear, along with cabling, when you are planning a new facility -- something that you might not have been able to retrofit into an older one.

Rates of return

Jagger: It's easier for sure to design that into a new facility than it is to retrofit it to an old one, but that doesn't exclude applying the principle to old ones. You would just get to a point where you have a diminishing rate of return in terms of the amount of work that you need to do within an older data center, but certainly you can apply that.

The premise here is to understand possible savings or the possible efficiency available to you through forensic analysis and modeling. That has got to be the starting point, and then understanding the costs of building that efficiency.

Then, you need a plan that shows those costs and savings and the priorities in terms of structure and infrastructure, that works in a converged way with IT, and, of course, the payback on the investment that's required to build it in the first place.

Gardner: I wonder if there are any political implications around taxation, carbon footprint, and cap-and-trade types of legislation. Any thoughts about factoring location and new data centers in with some of those issues that also relate to energy?

Bennett: Certainly, there are. The UK, for example, already has regulations in place for new buildings that would impact a new data center design project. There is a Data Center Code of Conduct standard in the European Union. It's not regulation yet, but many people think that these will be common in countries around the world -- sooner rather than later.

Gardner: So, yet another indication that getting a full comprehensive perspective when considering these energy issues is very important.

Let's go back to examples. Do we have some instances where people have created entirely new data centers, done the due diligence, looked at these varieties of perspectives from an energy point of view, and what's been the result? Are there some metrics of success to look at?

Jagger: I think John spoke earlier about a data center we recently built in the UK. The specific site was on the Northeast coast of the UK. I know the area well.

Bennett: It sounds like you might, Ian.

Jagger: The highly chilled air coming off the sea has a significant part to play in the cooling efficiency of the data center, because we have simply taken that air and are using it to chill the data center. There are enormous efficiencies there.

We've designed data centers using geothermal activity. Iceland is a classic. Iceland sets itself up as, "Come to us. Bring your data center to us, because we can take advantage of the geothermal resources that are in place."

Examining all factors

To slightly argue against that, there are a number of data centers being sited in locations like Arizona, where you would consider the cost of cooling the data center to be much greater. Well, the humidity factor plays into that, because there is relatively low humidity there.

The other factor that comes into that is how you work with the utility company and what the utility rates are -- how much you are paying per kilowatt-hour for energy. Still other factors come into play, like general security with respect to the data center.

There are lots of instances where siting the data center is determined by the political considerations that you've talked about. It could be in terms of taking advantage of natural resources. It could be in terms of whether incentives are greater. There are many, many reasons. This would be part of any study, and the modeling that I talked about should take it all into account.

Gardner: So, clearly, there are many, many variables and a great deal of complexity in having a global perspective, and a great deal of experience certainly proves valuable when moving into this.

Jagger: Just to give you a specific example, we recently ran an analysis for a company based in Arizona. They were interested in understanding what the peer comparison would be for other companies in a similar climate zone -- how efficient were they in comparison to peers that they could correctly compare themselves to?

You can look at energy efficiency, but part of that game is in understanding your relative efficiency compared to others. What is it that you consider efficient? A data center with a PUE of 2 may be incredibly efficient compared to a data center with a PUE of 1.4, based on climate location. In other words, the one with a PUE of 2 can actually be more efficient than the one with 1.4, because of the influence of climate. A comparison against true peers in the same climate zone would reflect that.
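
To make that peer comparison concrete, here is a hypothetical sketch of a climate-normalized PUE comparison in Python; the per-zone adjustment factors are invented for illustration and are not HP's actual model:

    # Hypothetical climate-zone adjustment factors (zone 1 = hottest, zone 8 = coldest).
    # These numbers are made up for illustration; a real analysis would use measured data.
    CLIMATE_FACTOR = {1: 1.35, 2: 1.25, 3: 1.15, 4: 1.10, 5: 1.05, 6: 1.00, 7: 0.95, 8: 0.90}

    def normalized_pue(measured_pue: float, climate_zone: int) -> float:
        # Divide out the assumed climate penalty so sites in different zones can be compared.
        return measured_pue / CLIMATE_FACTOR[climate_zone]

    print(round(normalized_pue(2.0, 1), 2))  # ~1.48 for a site in the hottest zone
    print(round(normalized_pue(1.4, 8), 2))  # ~1.56 for a site in the coldest zone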

Gardner: How does an organization begin? We've talked about new data centers, modernization, virtualization, and refining and tuning best practices. Any thoughts on how to get started and where some valuable resources might reside?

Do you have a plan?

Jagger: To me, the only question would be whether you're improving efficiency according to a plan. Do you know the business benefit and the ROI of each improvement that you would consider? If you don't start at that point, you're going to get lost. So, what is the plan that you are looking to execute, and what is the business benefit that would follow that plan?

Bennett: That plan derives from having a data center strategy, in the positive sense of the word, which is understanding the business strategy and its plans going forward. It's understanding how the business services provided by IT contribute to that business strategy and then aligning the data centers as one of many assets that come into play in delivering those business services.

We see a lot of customers who have either very aged data center strategies or don't have formal data center strategies, and, as a result, aren't able to maximize the value that they deliver to the organization.

Jagger: You may have noticed a theme throughout this podcast from John and me, one of convergence or synchronization between IT and the facilities. I think that's apparent.

Don't necessarily focus on IT as a starting point. At the end of the day, most of the power consumed even by an average data center is actually not going to the servers, but to cooling, fans, and lighting -- the non-IT productive elements. Less than half would be going to the servers.

So, look at some of the other areas beyond IT itself. Those generally would be infrastructure areas.

You've also got to consider how you're going to measure this. How do you look at measuring your efficiency? Some level of energy automation and discovery, to measure energy, should be built in.

Gardner: So, that falls back into the realm of IT financial management.

Jagger: Right.

Gardner: We have been discussing ways in which you can begin realistically reducing energy consumption across data centers -- old data centers and new data centers -- and applying good practices, regardless of their age or location.

Helping us understand how to move toward more conservative use of energy, we have been joined by John Bennett, worldwide director for Data Center Transformation Solutions at HP. Thank you, John.

Bennett: My pleasure, Dana. Thank you.

Gardner: We've also been joined by Ian Jagger, worldwide marketing manager for Data Center Services. Thank you, Ian.

Jagger: You are very welcome, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Gain more insights into data center transformation best practices by downloading free whitepapers at http://www.hp.com/go/dctpodcastwhitepapers.

Transcript of a sponsored BriefingsDirect podcast on strategies for achieving IT energy efficiency. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.