Monday, October 05, 2009

HP Roadmap Dramatically Reduces Energy Consumption Across Data Centers

Transcript of a sponsored BriefingsDirect podcast on strategies for achieving IT energy efficiency.

Listen to the podcast. Find it on iTunes/iPod and Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on significantly reducing energy consumption across data centers. Producing meaningful, long-term energy savings in IT operations depends on a strategic planning and execution process.

The goal is to seek out long-term gains from prudent, short-term investments, whenever possible. It makes little sense to invest piecemeal in areas that offer poor returns, when a careful cost-benefit analysis for each specific enterprise can identify the true wellsprings of IT energy conservation.

In this discussion, we'll examine four major areas that result in the most energy policy bang for the buck -- virtualization, application modernization, data-center infrastructure best practices, and properly planning and building out new data-center facilities.

By focusing on these major areas, but with a strict appreciation of the current and preceding IT patterns and specific requirements for each data center, real energy savings -- and productivity gains -- are in the offing.

To help us learn more about significantly reducing energy consumption across data centers, we are joined by two experts from HP. Please welcome John Bennett, worldwide director, Data Center Transformation Solutions at HP. Thanks for joining, John.

John Bennett: Delighted to be here with you today, Dana. Thanks.

Gardner: We are also joined by Ian Jagger, worldwide marketing manager for Data Center Services at HP. Good to have you with us, Ian.

Ian Jagger: And, equally happy to be here, Dana.

Gardner: John Bennett, let's start with you, if you don't mind. Just upfront, are there certain mistakes that energy-minded planners often make, or are there perhaps some common misconceptions that trip up those who are beginning this energy journey?

Bennett: I don't know if there are things that I would characterize as missteps or misconceptions.

We, as an industry, are full of advice around best practices for what people should be taking a look at. We provide these wonderful lists of things that they should pay attention to -- things like hot and cold aisles, running your data center hotter, and modernizing your infrastructure, consolidating it, virtualizing it, and things of that ilk.

The mistake that customers do make is that they take this laundry list and, without any further insight into what will matter the most to them, start implementing these things.

The real opportunity is to take a step back and assess the return from any one of these individual best practices. Which one should I do first and why? What's the technology case and what's the business case for them? That's an area that people seem to really struggle with.

Gardner: So, there needs to be some sort of rationalization for how you approach this -- not necessarily in a linear order, or in whatever comes to mind first, but in a way that adds that strategic benefit.

Cherry picking quick wins

Bennett: I am not even sure I'd characterize it as strategic yet. It's just understanding the business value and cherry picking the quick wins and the highest return ones first.

Gardner: Let's go and do some cherry picking. What are some of the top, must-do items that won't vary very much from data center to data center? Are there certain universals that one needs to consider?

Bennett: We know very well that modern infrastructure, modern servers, modern storage, and modern networking items are much more energy efficient than their predecessors from even two or three years ago.

So, consolidation and modernization, which reduces the number of units you have, and then multiplying that with virtualization, can result in significant decreases in server and storage-unit counts, which goes a long way toward affecting energy consumption from an infrastructure point of view.

That can be augmented, by the way, by doing application modernization, so you can eliminate legacy systems and infrastructure and move some of those services to a shared infrastructure as well.

On the facility side, and we are probably better off asking Ian to go through this list, running a data center hotter is one of the most obvious ones. I saw a survey just the other day on the Web. It highlighted the fact that people are running their data centers too cold. You should sweat in a data center.

A lot of techniques, like hot and cold aisles and looking at how you provide power to the racks and the infrastructure, are all things that can be done, but the list is well understood.

Because he is more insightful in this and experienced in this than I am, I'll ask Ian to identify some of the top best practices from the facilities and the infrastructure side, as well.

Jagger: Going back to the original point that John made, we have had the tendency in the past to look at cooling or energy efficiency coming from the technology side of the business and the industry. More recently, thankfully, we are tending to look at that in a more converged view between IT technology, the facility itself, and the interplay between the two.

But, you're right. There is a well-published list of best practices, and managers, be it in IT or facilities, are responsible for implementing them. Starting with the easy ones first -- such as hot and cold aisles, blanking panels, and being tidy with respect to cabling, having cabling run under the floor, and items like that -- doesn't, as you alluded to, necessarily provide the best return on investment (ROI), simply because it's a best practice.

Areas of focus

When we undertake energy analysis for our customers, we tend to find the areas of focus would be around air management and environmental control -- very much to the point you mentioned about turning up the heat with respect to handling units -- and also recommendations around electrical systems and uninterruptable power supply (UPS).

Those are the areas of primary focus, and it can drill down from there on a case-by-case basis as to what works for each particular customer.

Gardner: Ian, what causes the variability from site to site? Clearly, there are some common things here that we have talked about, but what is it specifically that differentiates organizations, and they need to be mindful that they can't just follow a routine and expect to get the same results?

Jagger: Each customer has a different situation from the next, depending on how the infrastructure is laid out, the age of the data center, and even the climatic location of the data center. All of these have enormous impact on the customer's individual situation.

But there are instances where, for example, we could say to a customer, "Shut down some of your computer-room air conditioners (CRACs)," and we would identify which ones should be shut down and how many of them. That clearly would create some significant savings. It doesn't cost anything to do that. Clearly, the ROI is much higher, because there is no capital expenditure required to shut down CRACs. That would be one good example.

Another example is placing floor grilles correctly, which would be on anybody's best practice list, and can have a significant impact in the scheme of things. So case-by-case would be the answer, Dana.

Gardner: Given that we have some best practices and some variability from organization to organization, let's look at these four basic areas and then drill down into each one. John Bennett, virtualization. What are the big implications for this? Why is this so important when we think about the total energy picture?

Bennett: If we look at the total energy picture and the infrastructure itself -- in particular, the server and storage environment -- one of the fundamental objectives for virtualization is to dramatically increase the utilization of the assets you have.

High utilization

This is especially a factor for industry standard servers. Historically, whether it's mainframes, HP-UX systems, or HP Integrity NonStop systems, customers are very accustomed to running those at very high utilization rates -- 70, 80, 90 percent plus.

With x86 servers, we see utilization rates typically in the 10 percent range. So, while there are a lot of interesting benefits that come from virtualization from an energy-efficiency point of view, we're basically eliminating the need for a lot of server units by making much better use of a smaller number of units.

This can be further improved, as I mentioned earlier, by taking a look at the applications portfolio and doing application modernization, which has two benefits from an energy point of view.

One of them is that it allows the new applications to run on a modern infrastructure environment, so it can participate in the shared environment. Secondly, it allows you to eliminate legacy systems, sometimes very old systems, where very old is anywhere from 5 to 10 years in age or more, and eliminate the power consumption that those systems require.

Those are the benefits of virtualization, and very clearly anyone dealing with either energy cost issues or energy constraint issues or with a green mandate needs to be looking very seriously at virtualization.

Gardner: What sorts of paybacks are typical with virtualization? Is this a rounding error, a significant change, or is there some significant variability in terms of how it pans out?

Bennett: No, it's significant. It's not a rounding error. We're talking about collapsing infrastructure requirements by factors of 5, 6, or 10. You're going from 10 or 20 old servers to perhaps a couple of servers running much more efficiently. And, with modernization at play, you can actually increase that multiplication.

These savings are very significant from a server point of view. On the storage side, you're eliminating the need for sparsely used, dedicated storage and moving to a shared, or virtualized, storage environment, with the same kind of cost-saving ratios at play. So, it's a profound impact on the infrastructure environment.
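
Bennett's consolidation ratios can be sketched as a back-of-the-envelope model. The wattage and electricity-price figures below are illustrative assumptions, not numbers from the discussion:

```python
# Rough model of the energy saving from server consolidation.
# Assumed figures (not from the discussion): legacy servers at
# ~400 W each, replaced 10:1 by virtualization hosts at ~600 W.

HOURS_PER_YEAR = 8760

def annual_kwh(watts: float, count: int) -> float:
    """Annual energy draw for `count` machines at `watts` each."""
    return watts * count * HOURS_PER_YEAR / 1000.0

def consolidation_saving(old_count: int, old_watts: float,
                         ratio: int, new_watts: float,
                         price_per_kwh: float) -> float:
    """Annual cost saving from collapsing servers at the given ratio."""
    new_count = -(-old_count // ratio)  # ceiling division
    return (annual_kwh(old_watts, old_count)
            - annual_kwh(new_watts, new_count)) * price_per_kwh

# 200 legacy servers consolidated 10:1 at $0.10/kWh:
print(round(consolidation_saving(200, 400, 10, 600, 0.10)))  # prints 59568
```

Even with deliberately modest assumptions, the model shows why Bennett calls the effect significant rather than a rounding error; plugging in a real inventory and utility rate is the kind of exercise an assessment service performs.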

Gardner: Correct me if I am wrong, John, but virtualization helps when we want to whittle down the number of servers while we increase utilization. Doesn't virtualization also help you to expand and scale out as your demands increase, but at a level commensurate with the demand, rather than in large chunks, which may have been the case without virtualization?

Rapid provisioning

Bennett: Oh, yes. I could talk for the rest of this podcast just about virtualization benefits, so don't let me get started. But, very clearly, we see benefits in areas like flexibility and agility, to use the marketing terms, but also the ability to provision resources very quickly. We see customers moving from operational models, where it would take them weeks or months to deploy a new business service, to where they are able to do it in hours.

We see them able to shift resources to where they are needed, when they are needed, in a much more dynamic fashion.

We see improvements in quality of service as a result of those things. We actually see availability and business-continuity benefits from these. So virtualization is -- in my mind, and I have said this before -- as fundamental a data center technology as servers, storage, and networking are.

Gardner: It seems that virtualization is the gift that keeps on giving. Not only do you get a significant reduction in energy cost when you replace older systems and bring in virtualization to increase utilization, but, as you point out, over time, your energy consumption, based on demand, would be low given this ability to provision so effectively and given the ability to get more out of existing systems.

Bennett: Yes, absolutely.

Gardner: Do you have any examples? Do you have a specific customer or someone that HP has worked with who has instituted virtualization and then has come back with an energy result?

Bennett: We have a number of examples. I'll just share one example here.

The First American Corporation, America's largest provider of business information, had the requirement of being able to better align their resources to business growth in a number of business services, and was also looking to reduce energy costs -- two very simple focuses. They implemented a consolidation and virtualization solution built around HP BladeSystems.

They are projecting that, on an annual basis, they're saving $714,000 in energy costs in the data center, and an additional $12,000 a year in endpoint power consumption outside of the data center.

Gardner: So that spells ROI pretty swiftly?

Bennett: Oh, yes, absolutely.

Gardner: Ian Jagger, let's go to you now on this next major topic -- application modernization. I've also heard this referred to as "cash for clunkers." What do we mean by that?

Investment opportunity

Jagger: There is a parallel that can be drawn there, in the sense of trading in those clunkers for cash that can be invested in modernization projects.

John has done a great job talking about virtualization and its parallel, application modernization. I'd like to pull those two together in a certain way. If we're looking, for example, at the situation where a customer needs a new data center, then it makes sense for that customer to look at all the cases put together -- application modernization, virtualization, and also data center design itself.

I mentioned the word “converged” earlier. Here is where it all stands to converge from an energy perspective. Data centers are expensive things to build, without doubt. Everyone recognizes that and everybody looks at ways not to build a new data center. But, the point is that a data center is there to run applications that drive business value for the company itself.

What we don't do a good job of is understanding those applications in the application catalog and the relative importance of each in terms of priority and availability. What we tend to do is treat them all with the same level of availability. That is just inherent in terms of how the industry has grown up in the last 20-30 years or so. Availability is king. Well, energy has challenged that kingship if you like, and so it is open to question.

Now, you could look at designing a facility where you have specific PODs (groups of compute resources) that would be designed according to the application catalog's availability and priority requirements, tone down the cooling infrastructure that is responsible for those particular areas, and retain just the specific PODs for those applications that do require the highest levels of availability.

Just by doing that -- converging the facility design with application modernization -- you take millions and millions of dollars out of data center construction costs, and of course out of the ongoing operating costs derived from burning energy to cool it at the end of the day.

Gardner: It sounds that with these PODs that are somewhat functionally specific we are almost mapping a service-oriented architecture (SOA) to the data center facility. Is that a fair comparison?

Jagger: Yeah. It's a case of understanding the application catalog, mapping that availability and prioritization requirement, allowing for growth, and allowing for certain levels of redundancy that ultimately you can then build a POD structure within your data center.

You don't need UPS, for example, for everything. You don't need 2N redundancy, or twice the redundancy, for all applications. They are not all that critical, and therefore why should we treat them as all being critical?

Gardner: A big part of being energy wise is really just being smart about how you understand your requirements and then apply the resources -- not too much, not too little -- sort of the Goldilocks approach -- just right.

Talk to your utility

Jagger: One of the smartest things you can actually do as a business, as an IT manager, is to actually go and talk to your utility company and ask them what rebates are available for energy savings. They typically will offer you ways of addressing how you can improve your energy efficiency within the data center.

That is a great starting point, where your energy becomes measurable. Taking action on reducing your energy not only cuts your operating cost, but actually allows you to get rebates from your energy company at the same time. It's a no-brainer.

Gardner: Perhaps to reverse engineer from the energy source itself and find the best ways to work with that.

Jagger: Right.

Gardner: John Bennett, is there anything that you would like to add to the topic of application modernization for energy conservation?

Bennett: I'd like to comment a bit about the point made earlier about thinking smarter. What we are advising customers to do is take a more complete view of the resources and assets that go into delivering business services to the company.

It's not just the applications and the portfolio, which Ian has spoken of, and the infrastructure from a server, storage, and networking perspective. It's the data center facilities themselves and how they are optimized for this purpose -- both from a data center perspective and from the facility-as-a-building perspective.

In considering them comprehensively in working with the facilities team, as well as the IT teams, you can actually deliver a lot of incremental value -- and a lot of significant savings to the organization.

Gardner: Let's move on to our next major category -- data center infrastructure best practices. Again, this is related to these issues of virtualizing and finding the right modernization approaches. Are there ongoing ways in which business as usual in the data center does not work to our advantage when we consider energy? Let's start with you, Ian.

Jagger: As we talked about earlier in terms of best practices, it doesn't necessarily follow that a given best practice returns the best results. I think there has to be an openness on behalf of the company itself on what actions it should take, with respect to driving down energy costs and ensuring solid ROI on any capital expenditure that's required to do that.

Just for example, I mentioned earlier that shutting off CRAC units would be one of the best practices, and turning the temperature up produces certain results.

Payback opportunity

I am thinking of one particular customer, where we suggested that they shut down three CRAC units. Now, that would give them a certain saving, but the cost of some of the work that would have to be done with that equaled the amount of saving for the first year. So, there is a one-year payback there, and of course the rest is all payback after that point.

But, with the same customer, we also looked at it and advised, "Well, if you use chillers with variable-speed compressors, instead of constant-speed compressors, there is certainly a capital requirement there." In the case of this customer, it was about $300,000. But the return on that was $360,000 in one year.

That investment created a larger return on payback than simply shutting down the three CRAC units or indeed the correct placement of floor grilles within the data center.
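
As a sketch, the simple-payback arithmetic behind Jagger's comparison looks like this. The chiller figures are the ones he quotes; the function itself is just the standard capex-over-savings formula:

```python
def payback_months(capex: float, annual_saving: float) -> float:
    """Months until cumulative savings cover the capital outlay."""
    return 12.0 * capex / annual_saving

# Variable-speed chiller retrofit from the example:
# $300,000 of capital against $360,000 saved per year.
print(payback_months(300_000, 360_000))  # prints 10.0
```

A zero-capex measure, such as shutting down surplus CRAC units, has an immediate payback by the same formula, which is why it still belongs at the top of the list even when its absolute saving is smaller.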

That was a case not of best practice, but of something having higher impact than best practice itself. It's not easy for customers to get into the detail of this. This is where expertise comes into it. We need to go beyond the typical list of best practices to areas of expertise, and to how that expertise can highlight specific areas of payback and ROI, where the business or IT can actually justify the cost of doing the work.

Gardner: John Bennett, when it comes to leveraging expertise in order to bring about these efficiencies and make the right choices on how to invest on this ongoing best practices continuum, how does HP enter into this?

What are some ways in which the expertise that you've developed as a company working with many customers over many years, come to bear on some of these new customers or new instances of requirements around energy?

Bennett: We can bring it to bear in a number of ways. For customers who are very explicitly concerned about energy and how to reduce their energy cost and energy consumption, we have an Energy Analysis Assessment service. It's a great way to get started to determine which of the best practices will have the highest impact on you personally, and to allow you to do the cherry-picking that we talked about earlier. We'll ask Ian perhaps to talk a little more about that service in a moment.

For customers who are looking at things a little more comprehensively, energy analysis and energy efficiency are two aspects of a data-center transformation process. We have a data center transformation workshop, again, not necessarily to “do it for a customer”, but to work with the customer in defining what their personal roadmap would look like.

One element that is considered is the facilities and the data centers themselves. It may very well end up with us saying, "You need a data-center strategy project. You need to have an analysis done of the applications portfolio to business services to understand how many data centers you have, where they should be, what kinds they should be, and what you should do with the data centers you have." Or, it may be that the data centers are not an issue for that particular customer.

Gardner: Another big area where cost plays into these operational budgets, the ongoing budgets, is labor. Is there a relationship between labor in the IT operations and energy? Is there some way for these two very large line items within the IT budget; labor and energy, to play off of one another in some productive manner?

More correlative than causative

Bennett: Well, there is a strong relationship, especially on the infrastructure best practices that impact labor. I would treat it as correlative rather than causative, but as you ruthlessly simplify and standardize your environment, as you move to a common shared infrastructure, you actually can significantly reduce your management costs and begin the process of shifting your IT budget away from management and maintenance.

We see most customers spending 70 percent plus of their operational budget on management and maintenance; the opportunity is to flip that around, to where they spend 70 percent of their operational budget on business projects. So, there is a strong set of benefits that comes on the people side, along with the energy side.

Now, for organizations that have green strategies in addition to having strategies for energy efficiency, one can use IT to help the organization be greener. Some very simple things are to make use of things like HP's Halo rooms for video conferencing and effective meetings without travel and to set up remote access with the corresponding security, so that people can work from home offices or work remotely. A lot of things can be done with green benefits as well as energy benefits.

Gardner: John, just briefly for our listeners, how do you distinguish green from energy conservation, what's the breakdown between them?

Bennett: Well, I am not sure how to characterize the breakdown, but energy is very typically focused either on reducing direct energy cost or reducing energy consumption.

The broader green benefits will tend to look at areas like sustainability, or having what some people refer to as a neutral carbon footprint. So, if you look at your supply chain backwards and out to your customers, you're not consuming as much of the earth's resources in producing your goods and services, and you are helping your people not consume resources needlessly in delivering the business services that they provide to their customers.

It's about recycling practices, using recycled goods, packaging efficiency, cutting out paper consumption, changing business processes, and using digitization. There are a lot of things one can do that are more than just “pure energy savings.” It often falls back to energy, but the whole idea of sustainability is a little bit of a different concept.

Gardner: Ian, I have heard many times the issue around cable management come up in best practices as well. What's the relationship between energy and cable management in a complex data center environment?

Jagger: Cable management, as you say, is one of those best-practice areas. There are a couple of ways you can look at that. One is from the original plant design with respect to cable ducting and just being accurate with respect to the design of that.

Continuous operation

The second part is running an operation continuously. That operation is dynamic, and so it's never going to stand still. Poor practice starts to take over after a while, and what was once well-designed and perhaps tidy is no longer the case. The cables are going to run here and there, you move this and you move that, and so on. So, that best practice isn't sustained.

You can simply just move back in and just take a fresh look at that and say, "Am I doing what I need to be doing with respect to cabling?" It can have a significant impact, because cabling does interrupt the airflows and air pressures that are running underneath the raised floor.

It's simply a case of getting back to the best practice in terms of how it was originally designed with respect to cable management. There are products in there that we ourselves sell, not just from a design perspective, but racking products that enable that to happen.

Gardner: On the topic of good design, let's move to our fourth major area -- data center building and facility planning. This is for those folks who might not want to, but need to, build a whole new data center. Or, if they've got an issue where they want to consolidate numerous data centers into a single facility, they might think about moving one or replacing it. A lot of different scenarios can lead to this.

How about starting with you John Bennett? What do you need to consider, when you are going to this whole new facility? I would think the first thing would be where to put the thing -- where is the location.

Bennett: Actually, before you get to choosing the location, the real first question is, "What type of facility do you need?" Ian talked earlier about the hybrid data center concept, but the first questions are how big it needs to be and what it has to be in order to meet and support the needs of the business. That's the first driver.

Then, you can get into questions of location. One of the interesting things about location is that there is no right answer, and there is no right answer because of qualitative aspects of customer’s decision making that come into play.

There are a lot of customers, for example, who have, and run, data centers downtown in cities like New York, Tokyo and London -- very expensive real estate, but it's important to the business to have their data centers near their corporate offices.

There are companies that run their data centers in remote locations. I know a major bank on the West Coast that runs their primary data centers in Iowa. You can have strategies for having regional data centers. I think that the Oracle data center strategy is to have data centers around the world, in three locations.

HP has its data centers, six data centers, three pairs, located in different parts of the United States, providing worldwide services.

Environmental benefits

You can choose to locate them at places that have environmental benefits, like geothermal benefits. We have a new data center that we are opening up in the UK, which is incredibly energy efficient -- perhaps Ian can talk briefly about that -- taking advantage of local winds. You can take advantages of natural resources from a power point of view.

Gardner: The common philosophy here is to be highly inclusive, bringing in as many as possible of the aspects that impact the decision and long-term efficiency. This is what needs to take place, top-down.

Bennett: There are a lot of factors at play. The priorities and weightings of those for individual customers will vary quite significantly. So all of those need to be taken into consideration.

If you are doing a new data center project, chances are this is something that is not just going to your CFO for approval, but probably to the board of directors. It's something that not only is going to have to have a business case in its own right, but have to meet the corporate hurdle rates and be viewed as an opportunity cost for the organization. These are very fundamental business decisions for many customers.

Gardner: Ian Jagger, when we look to these new facilities, factoring in a much lower energy footprint than may have been the case with older facilities might help make that decision, and might prompt that board to move sooner rather than later.

Jagger: Right. Going to the point of where to locate it, some companies do have preferences for a data center to be located adjacent to where they are actually conducting business. That doesn't necessarily follow for everyone.

But the play of climate on a data center and energy efficiency is truly significant. We have a model within our Energy Efficiency Analysis that will model for our customers the impact of where a data center could be based, based on climate zone and the relative impact of that.

The statistics are out there in terms of breaking up climate zones into eight regions -- one being the hottest and eight the coldest -- and then applying humidity metrics on top of that as well. Just going from one zone to another can double or even triple the power usage effectiveness (PUE) rating, which is the ratio of the total energy coming into the data center to the energy used to power the IT equipment itself. Siting the data center can have an enormous impact on cost and efficiency.
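
For reference, PUE reduces to a one-line ratio. The load figures in the example are illustrative, not from the discussion:

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT energy.
    1.0 is the theoretical ideal; lower is better."""
    return total_facility_kwh / it_kwh

# A site that draws 1,800 kWh overall to deliver 1,000 kWh of IT load:
print(pue(1_800, 1_000))  # prints 1.8
```

Moving the same IT load to a hotter, more humid zone increases the numerator (cooling energy) while the denominator stays fixed, which is how siting alone can multiply the rating in the way Jagger describes.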

Gardner: I imagine that your thoughts earlier about the PODs, and the differentiation within the data center based on certain new high-level requirements, could also now be brought to bear, along with cabling, when you are planning a new facility -- something that you might not have been able to retrofit into an older one.

Rates of return

Jagger: It's easier for sure to design that into a new facility than it is to retrofit it to an old one, but that doesn't exclude applying the principle to old ones. You would just get to a point where you have a diminishing rate of return in terms of the amount of work that you need to do within an older data center, but certainly you can apply that.

The premise here is to understand possible savings or the possible efficiency available to you through forensic analysis and modeling. That has got to be the starting point, and then understanding the costs of building that efficiency.

Then, you need a plan that shows those costs and savings and the priorities in terms of structure and infrastructure, have that work in a converged way with IT, and of course the payback on the investment that's required to build it in the first place.

Gardner: I wonder if there are any political implications around taxation, carbon footprint, and cap-and-trade types of legislation. Any thoughts about factoring location and new data centers in with some of those issues that also relate to energy?

Bennett: Certainly, there are. The UK, for example, already has regulations in place for new buildings that would impact a new data center design project. There is a Data Center Code of Conduct standard in the European Union. It's not regulation yet, but many people think that these will be common in countries around the world -- sooner rather than later.

Gardner: So, yet another indication that getting a full comprehensive perspective when considering these energy issues is very important.

Let's go back to examples. Do we have some instances where people have created entirely new data centers, done the due diligence, looked at these varieties of perspectives from an energy point of view, and what's been the result? Are there some metrics of success to look at?

Jagger: I think John spoke earlier about a data center we recently built in the UK. The specific site was on the Northeast coast of the UK. I know the area well.

Bennett: It sounds like you might, Ian.

Jagger: The highly chilled air coming off the sea has a significant part to play in the cooling efficiency of the data center, because we have simply taken that air and are using it to chill the data center. There are enormous efficiencies there.

We've designed data centers using geothermal activity. Iceland is a classic example. Iceland sets itself up as, "Come to us. Bring your data center to us, because we can take advantage of the geothermal resources that are in place."

Examining all factors

To slightly argue against that, there are a number of data centers being sited in locations like Arizona, where you would consider the cost of cooling the data center to be much greater. Well, the humidity factor plays into that, because there is relatively low humidity there.

The other factor that comes into that is how you work with the utility company and what the utility rates are -- how much you are paying per kilowatt-hour for energy. Still other factors come into play, like general security with respect to the data center.

There are lots of instances where siting the data center is determined by the political considerations that you've talked about. It could be in terms of taking advantage of a natural resource. It could be in terms of where incentives are greater. There are many, many reasons. This would be part of any study, and the modeling that I talked about should take it all into account.

Gardner: So, clearly, there are many, many variables, a great deal of complexity of having a global perspective, and a great deal of experience certainly would come to be very productive when moving into this.

Jagger: Just to give you a specific example, we recently ran an analysis for a company based in Arizona. They were interested in understanding the peer comparison with other companies in a similar climate zone -- how efficient they were in comparison to peers that they could correctly compare themselves to.

You can look at energy efficiency, but part of that game is in understanding your relative efficiency compared to others. What is it that you consider efficient? A data center with a PUE of 2 may be incredibly efficient, compared to a data center with a PUE of 1.4, based on climate location. In other words, the one with a PUE of 2 is actually more efficient than the one with 1.4, because of the influence of climate. A true peer-to-peer comparison would reflect that.
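Jagger's point about climate-adjusted comparisons can be sketched in a few lines. The per-zone baseline PUEs below are entirely hypothetical placeholders for illustration, not the peer data HP's Energy Efficiency Analysis actually uses:

```python
# Hypothetical baseline PUEs by climate zone (1 = hottest, 8 = coldest),
# invented purely for illustration.
BASELINE_PUE = {1: 2.4, 4: 1.9, 8: 1.5}

def relative_efficiency(pue: float, climate_zone: int) -> float:
    """A ratio below 1.0 means better than the assumed peer baseline
    for that climate zone."""
    return pue / BASELINE_PUE[climate_zone]

# A PUE of 2.0 in hot zone 1 sits further below its peer baseline than
# a PUE of 1.4 does in cold zone 8.
print(round(relative_efficiency(2.0, 1), 2))  # 0.83
print(round(relative_efficiency(1.4, 8), 2))  # 0.93
```

Under these assumed baselines, the "worse" raw PUE is actually the better performer relative to its climate peers, which is exactly the comparison the analysis is meant to surface.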

Gardner: How does an organization begin? We've talked about new data centers, modernization, virtualization, and refining and tuning best practices. Any thoughts on how to get started and where some valuable resources might reside?

Do you have a plan?

Jagger: To me, the first question would be whether you're improving efficiency according to a plan. Do you know the business benefit and the ROI of each improvement that you're considering? If you don't start at that point, you're going to get lost. So what is the plan that you are looking to follow, and what is the business benefit that would follow from that plan?

Bennett: That plan derives from having a data center strategy, in the positive sense of the word, which is understanding the business strategy and its plans going forward. It's understanding how the business services provided by IT contribute to that business strategy and then aligning the data centers as one of many assets that come into play in delivering those business services.

We see a lot of customers who have either very aged data center strategies or don't have formal data center strategies, and, as a result, aren't able to maximize the value that they deliver to the organization.

Jagger: You may have noticed a recurring theme throughout this podcast from John and me, one of convergence or synchronization between IT and the facilities. I think that's apparent.

Don't necessarily focus on IT as a starting point. At the end of the day, typically, most of the power consumed by even an average data center is actually not going to the servers, but to cooling, fans, and lighting -- the non-IT productive elements. Less than half would be going to the servers.

So, look at some of the other areas beyond IT itself. Those generally would be infrastructure areas.

You've also got to consider how you're going to measure this. How do you look at measuring your efficiency? Some level of energy automation and discovery, for measuring energy use, should be built in.

Gardner: So, that falls back into the realm of IT financial management.

Jagger: Right.

Gardner: We have been discussing ways in which you can begin realistically reducing energy consumption across data centers -- old data centers and new data centers -- and applying good practices, regardless of their age or location.

Helping us understand how to move toward the conservative use of energy, we have been joined by John Bennett, worldwide director for Data Center Transformation Solutions at HP. Thank you, John.

Bennett: My pleasure, Dana. Thank you.

Gardner: We've also been joined by Ian Jagger, worldwide marketing manager for Data Center Services at HP. Thank you, Ian.

Jagger: You are very welcome, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Gain more insights into data center transformation best practices by downloading free whitepapers at

Transcript of a sponsored BriefingsDirect podcast on strategies for achieving IT energy efficiency. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Monday, November 10, 2008

Solving IT Energy Use Issues Requires Holistic Approach to Efficiency Planning and Management

Transcript of a BriefingsDirect podcast with HP’s Ian Jagger and Andrew Fisher on the role of energy efficiency in the data center.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions and you’re listening to BriefingsDirect. Today we present a sponsored podcast discussion on the critical and global problem of energy management for IT operations and data centers.

We will take a look at energy demand, supply, costs, and ways to develop a complete management perspective across the entire IT energy equation. The goal is to find innovative means to conservation so that existing facilities don't need to be expanded or replaced.

Good energy management is not as simple as just less hardware or better cooling, but it really requires an enterprise-by-enterprise examination of the "many sins" of energy and resources misuse.

In order to put into practice longer-term benefits, behaviors, and measurements, the whole picture needs to be taken into consideration. The goal, of course, is to promote a low-risk matching of energy supply and cost with the lowest IT energy demand possible.

To help us examine these important topics we’re joined by Ian Jagger. He is the Worldwide Data Center Services marketing manager in Hewlett-Packard's (HP) Technology Solutions Group. Welcome to our podcast, Ian.

Ian Jagger: Thank you, happy to be here.

Gardner: We’re also joined by Andrew Fisher. He is the manager of technology strategy in the Industry Standard Services group at HP. Welcome, Andrew.

Andrew Fisher: Thank you, very much.

Gardner: Let's take a look first at the broad picture of larger trends in this whole energy equation. As I say, it's not simple. There are a lot of moving parts, and there are a lot of megatrends and external factors involved as well.

I suppose the first thing to look at is capacity. I’d like to direct this to Ian. How critical is the situation now where large enterprises with vast data centers are actually facing an energy crisis?

Jagger: I think it's quite critical, Dana. Data centers typically were not designed for the computing loads that are available to us today, and they have been caught out. Enterprise customers are having to consider strategically what they need to do with respect to their facilities and their capability to bring in enough power to supply the future capacity needs coming from their IT infrastructure.

Gardner: Now, at the most general level, is this a case where there is not enough electricity available or that the growth and demand of electricity is just growing so quickly, or both?

Jagger: I think it's both, and there is also a third level, which is how adequate the cooling design is within the data center itself. So, it is a question of how much power is available, how much can be drawn into the data center, what the capacity of the data center is, and, as I said, how it is cooled.

Gardner: We are also, of course, involving green concerns. There are issues around carbon and pollution, and mandates around these issues. We are also faced with regulatory issues and compliance that are of a separate nature, and many organizations are behaving more like service bureaus, where they have service level agreements.

So there is not too much wiggle room in terms of what needs to be adhered to from compliance and/or service levels. What are the variables that companies need to first start focusing on in order to better execute their management of energy?

Fisher: That's a good question. One of the most important things to understand is how they have allocated power within that data center. There are new capabilities that are going to be coming online in the near future that allow greater control over the power consumption within the data center, so that precious capacity that's so expensive at the data center level can be more accurately allocated and used more effectively.

Gardner: This does vary from region to region, and HP being a global company, perhaps we should also take a look at the fact that in the United States, for example, there are limitations from the grid. The capacity of moving energy, even if it can be generated, is an issue, and in the U.K., apparently in the London area at least, there’s been somewhat of a lockdown in terms of use restrictions around the Olympics.

Ian, perhaps you could fill us in a little bit on some of the regional impacts and how this is supercritical perhaps in some areas more than others.

Jagger: I think you have just got it with the example you have used. It does vary region to region, depending on the capacity of the grid, the ability to distribute it along the grid and how that impacts customers geographically. It's not just about power distribution and generation, but it's also about the nascent situation with respect to compliance.

In Europe, we are now seeing countries, particularly the U.K., who have taken the lead in terms of carbon reduction. Legislation is coming on line, kicking in from 2010, but compliance requirements from 2009, where the top 5,000 companies or so, companies that use a given volume of energy or a value of energy, have to justify that usage in terms of purchasing carbon credits which are set against them.

Each of those companies -- and this includes HP U.K. -- needs to establish what its energy usage is and show a roadmap for how it can reduce that year over year toward the legislation that's in play there. It's just a matter of time before that's applied in the U.S. too.

Gardner: Now, we recognize that this is a large problem. Many components -- I have heard the phrase “many sins” -- are involved. I wonder if either of you, or perhaps both, could fill us in a little bit about what are the types of past behaviors, approaches, mentalities, and philosophies about energy that need to be reexamined in order to get closer to where we need to go.

Jagger: I think the contrast among the silos -- between facilities and real estate on one side and IT on the other -- is rooted in the tension between cost and availability. You mentioned service levels earlier. From an IT perspective, that's service-level agreements to the business in terms of availability, the uptime of equipment. But, from the real estate perspective, the facility perspective, it's about cost control and CAPEX and OPEX with respect to the facility itself.

They have tended to operate in independent silos, but now the general problem we have, which is overriding both of those departments, is the cost of energy. Typically the cost of energy is now approaching 10 percent of IT budgets and that's significant. It now becomes a common problem for both of these departments to address. If they don't address it themselves then I am sure a CEO or a CFO will help them along that path.

Gardner: How about it, Andy? What sort of sins unfortunately have people overlooked as a result of lower energy cost in the past, but that really can't be overlooked now?

Fisher: First of all, it's a complex system. When you look at the total process of delivering the energy from where it comes in from the utility feed, distributing it throughout the data center with UPS capability or backup power capability, through the actual IT equipment itself, and then finally with the cooling on the back end to remove the heat from the data center, there are a thousand points of opportunity to improve the overall efficiency.

To complicate it even further, there are lot of organizational or behavioral issues that Ian alluded to as well. Different organizations have different priorities in terms of what they are trying to achieve. So, there is rarely a single silver bullet to solve this complex problem.

You need to take a complete end-to-end solution that involves everything from analysis of your operational processes and behavioral issues, how you are configuring your data center, whether you have hot-aisle or cold-aisle configurations, these sorts of things, to trying to optimize the performance or the efficiency of the power delivery, making sure that you are getting the best performance per watt out of your IT equipment itself. Probably most importantly, you need to make sure that your cooling system is tuned and optimized to your real needs.

One of the biggest issues out there is that the industry, by and large, drastically overcools data centers. That reduces their cooling capacity and ends up wasting an incredible amount of money. So we have at HP a wide range of capabilities, including our EYP Mission Critical Facilities Services to help you analyze those operational issues as well as structural ones, and make recommendations, in addition to products that are more efficient as well.

Gardner: You raise a couple of interesting points. It's hard to fix something that you can't measure. What are the basic measurement guidelines for energy use?

I have heard of Data Center Infrastructure Efficiency (DCiE). There is also Power Usage Effectiveness (PUE). How does a large organization start to get a handle on this? As you say, or as has been mentioned, it's been a siloed problem in the past; now it needs to be tackled head-on.

Jagger: You have touched on the principal benchmarks used across the industry there -- the PUE and the infrastructure efficiency ratio, which is the inverse of the PUE. Put very simply, the PUE would be the total power coming into the data center over the amount of power required for computing purposes. So how efficient is that? How efficient is the data center in service of the overall power that is required for computing?

In other words, if you need one kilowatt for computing, and your PUE is two-and-a-half, then you need to be bringing 2.5 kilowatts to the wall to be able to run those computers.
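That arithmetic can be captured in a couple of lines. A minimal sketch, using the illustrative figures from the discussion (the function name is ours, not an HP tool):

```python
def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """PUE = total facility power / IT power, so the power you must
    bring to the wall is the IT load scaled by the PUE."""
    return it_load_kw * pue

# The example from the discussion: 1 kW of computing at a PUE of 2.5
# means bringing 2.5 kW to the wall.
print(facility_power_kw(1.0, 2.5))  # 2.5
```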

They are not perfect, and there are industry bodies that are looking to drive greater precision out of this. For example, PUE is a Green Grid rating system that is generally used, but The Green Grid itself is looking to migrate to the inverse ratio, the Data Center Infrastructure Efficiency ratio, and use that going forward before developing the next level.

The principal problem is that they tend to be snapshots in time and not necessarily a great view of what's actually going on in the data center. But, typically we can get beyond that and look over annualized values of energy usage and then take measurements from that point.

The best way of saving energy is, of course, to turn the computers off in the first place. Underutilized computing is not the greatest way to save energy.

Gardner: That dovetails, of course, with a number of other initiatives we have underway, such as virtualization, application modernization, winnowing out apps that aren't being used very much. Service-oriented architecture (SOA) encourages reuse and making sure that common services are supported efficiently.

There is also data center unification and modernization of hardware. All these things come together and ultimately increase utilization, which then changes the energy equation.

The question is how do we make these things work in concert? How is there some coordination between getting the right mix on energy along with some of these other initiatives? Why don't we start with Ian on that?

Jagger: They feed off each other. If you look at virtualizing the environment, then the facility design, or the cooling design, for that environment would be different. In a virtualized environment, suddenly you are designing something around 15-35 kilowatts per cabinet, as opposed to 10 kilowatts per cabinet. That requires completely different design criteria. You're using roughly one-and-a-half to three-and-a-half times the wattage in comparison. That, in turn, requires stricter floor management.

But having gotten that improved design around our floor management, you are then able to look at what improvements can be made from the IT infrastructure side as well. I guess Andy would have some thoughts there.

Fisher: There is a wide range of opportunities. Just the latest generation server technology is something like 325 percent more energy efficient in terms of performance-per-watt than older equipment. So, simply upgrading your single-core servers to the latest quad-core servers can lead to incredible improvements in energy efficiency, especially when combined with other technologies like virtualization.

Gardner: Once these organizations start hitting the wall on energy, it behooves them to look at some of these other initiatives, rather than just saying, “Wow, we need another data center at 10, 20, maybe 100 million dollars.” Is that more the philosophy here -- be smart not big?

Fisher: Absolutely. There is a substantial opportunity to extend the life of your data center, and I recommend that you give HP a call and talk to us here. We have a wide range of things that we can help with.

Ian can talk to the services here in a second, but from a product perspective, we’re bringing to market new capabilities in terms of efficiency of the platforms to help you reduce that total energy consumption of the IT equipment itself. We’re also working on unique ways of reclaiming existing capacity. Instead of having to build another 50 or 100-million-dollar data center, you can live longer in the data center that you have.

Gardner: I suppose one of the fundamental shifts recently, with the cost of energy going up considerably, is that the return on investment (ROI) equation shifts as well. If I were selling systems, I'd need to know, given the harsh economic climate, that I have a good ROI story -- that if you invest $10, you can save $15 over X amount of time. The energy factor now plays a much larger role in that.

Perhaps, Andy, you could tell us a little bit about how the cost of energy, instead of an afterthought, is now a forethought when it comes to deciding whether these modernization efforts are worth it.

Fisher: We look at it both from an OPEX, or your monthly cost of electricity -- and that’s rising rapidly, as the cost of energy goes up -- as well as from a CAPEX perspective, with your investment in your data center.

The first thing is to optimize your CAPEX investment, the money you have already sunk into your data center. You want to make sure that from an investment perspective you don't have to lay out another huge chunk of money to build another data center. So, number one, we want to optimize on the CAPEX side and make sure that you are using what you have most effectively.

But, from an operational cost perspective, it's really about reducing your total energy consumption. You can approach that initially from optimizing the energy use of your IT equipment itself, because that is core to the PUE calculation that we talked about.

If you are able to reduce the number of watts that you need for your IT equipment by buying more energy efficient equipment or by using virtualization and other technologies, then that has a multiplying effect on total energy. You no longer have to deliver power for that wattage that you have eliminated and you don't have to cool the heat that is no longer generated.

Otherwise, there are opportunities… We’ve introduced products that help you optimize your cooling, which typically can be up to 50 percent or more of your total energy budget. So by making sure that you fine tune your cooling to meet your actual demand of your IT you can make substantial reductions on your monthly electric bill.
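The multiplying effect Fisher describes follows directly from the PUE ratio. A rough sketch, with purely illustrative numbers:

```python
def facility_savings_kw(it_load_eliminated_kw: float, pue: float) -> float:
    """Cutting IT load also cuts the power-delivery and cooling overhead
    carried on top of it, so the facility-level saving is roughly the
    IT-level saving scaled by the PUE."""
    return it_load_eliminated_kw * pue

# Illustrative: eliminating 10 kW of IT load in a facility running at a
# PUE of 2.0 saves roughly 20 kW at the utility feed.
print(facility_savings_kw(10.0, 2.0))  # 20.0
```

This is why the transcript stresses starting with the IT equipment itself: every watt removed there is worth more than a watt at the meter.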

Gardner: Now, how does the Adaptive Infrastructure relate to this as well? It seems that would also be a factor in some of these equations?

Fisher: We are really talking about the Adaptive Infrastructure in action here. Everything that we are doing across our product delivery, software, and services is really an embodiment of the Adaptive Infrastructure at work in terms of increasing the efficiency of our customers' IT assets and making them more efficient.

Gardner: Let's go back to Ian. It seems that, as with many areas like manufacturing or application development, the history has been that you build it and then you throw it over the wall and someone has to put it into production or build it.

I expect that maybe data centers have had a similar effect when it comes to energy. We set up requirements. We build based on performance requirements. And then, oh, by the way, energy issues come as an afterthought.

Is that true, and is that the outmoded method? Are we now, in a sense, building for energy conservation from the get-go? Has it become more of a city- or town-planner mentality, rather than simply an architect approach? What's the mindset shift that's taking place?

Jagger: That's a good question. I think you have to address it at all the levels you talked about. At the company level, or the enterprise level, you are absolutely right. That has been the mentality or the approach: we need a data center, and we base it where we are. Nothing else matters. Base it adjacent to us.

Energy costs or supply have not been a consideration. Now they are. That's on the basis that you don't have any other complexities coming at you. But, if you are just looking at the strategy for your data centers in terms of business growth and your capacity, storage, and availability requirements that you have going forward, and you do the math, you can understand the size of the data center you need and how that works with respect to virtualization strategies and so on.

On top of that, we have the latest complexities, where you simply don't have the forward view on things. In just the last few days we’ve seen, for example, Wells Fargo buying Wachovia. I’m not sure how many data centers are within those two organizations, but you can bet they number in the scores. Suddenly, we have real estate and IT managers who are scratching their heads thinking, “How on earth do we bring all this together?” There are different approaches now being taken at the enterprise level.

At the architects’ level, it would be irresponsible for an architect today not to build energy efficiency into a greenfield building -- or any building, not just a data center. It’s pretty much been established that it just makes sense, if you are designing a new building, to build energy efficiency into it, because your operating costs will far outweigh the capital expenditure on those buildings rather quickly.

I’m not sure how a company like HP can influence at the planning level, but where we can influence is at the industry level and at the governmental level. We have experts within the company who sit on think tanks and governance boards. We advise bodies like the EPA. We sit with the leading organizations in energy building design, and discuss how governance with respect to green building design can be built and can be moved forward within the market.

That's how we can start to influence at the industry level in terms of having industry standards created, because if the industry doesn't create them itself, then governmental bodies will do it.

Gardner: It also seems that because it's so difficult to predict all the variables, that a need for modularity has emerged in the data center design, so that the end result can be amended and adjusted without all the other parts being interconnected and brittle. It’s similar to software, where you would want to have modularity in software, so you gain flexibility and it’s not too brittle. Can you explain more deeply how that relates to best energy management practice?

Jagger: The approach that we at HP are now taking is to move toward a new model, which we called the Hybrid Tiered Strategy, with respect to the data center. In other words, it’s a modular design, and you mix tiers according to need.

What has gone on in the past, and today, is that as an enterprise you may have a requirement for a Tier 4 level of structure with respect to the data center, which puts out 100 watts per square foot, for example. Let’s say, for the sake of argument, that's a 100,000-square-foot data center, but you don't need all of that data center infrastructure at a Tier 4 topology.

If you look at how you’re going to structure your virtualization program, you may only need 50 percent of it at Tier 4 for high density computing, and the rest of it can be at a Tier 2 level.

If that were the situation, you would be saving roughly 25 percent of your capital costs on building that data center. Just doing simple math, if you are looking at 100,000 square feet, that's in the region of $40 to $50 million. So, there are some clear consequences of moving to a hybrid tiered, or modular, model.
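The simple math Jagger refers to can be sketched as below. The per-tier costs are hypothetical figures of our own, chosen only so the totals land in the $40-50 million savings range he cites:

```python
# Hypothetical build costs per square foot by tier (not HP figures).
COST_PER_SQFT = {2: 900.0, 4: 1800.0}

def build_cost(sq_ft: float, tier_mix: dict) -> float:
    """tier_mix maps tier -> fraction of floor space built at that tier."""
    return sum(sq_ft * fraction * COST_PER_SQFT[tier]
               for tier, fraction in tier_mix.items())

all_tier4 = build_cost(100_000, {4: 1.0})       # $180M, all Tier 4
hybrid = build_cost(100_000, {4: 0.5, 2: 0.5})  # $135M, 50/50 mix
print(all_tier4 - hybrid)  # 45000000.0 -> roughly the 25%, $40-50M saving
```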

Gardner: Are there some examples out there that you can give us? It would be great if you could name some companies, or at least give us use-case scenarios where organizations have adjusted, adopted some of these practices, implemented some of these standards, used common measurement practices, and have resisted having to spend $40 million on CAPEX, but also perhaps utilizing their existing resources even better.

Jagger: I think HP is the biggest example. We are the biggest example of designing modularity into our own data centers.

Beyond HP, you could look at supercomputing centers and high-density computing -- the Internet service providers, the Googles of this world, and Microsoft themselves. The companies that require high levels of resilience, high density, and supercomputing typically are moving in this direction. We are pioneering this with our in-house capabilities. We are at the leading edge of this level of innovation.

Gardner: Let's take a look forward a little bit. What can we expect? Obviously, this makes more sense over time. Green issues are going to become more prevalent. Carbon is going to become more regulated. Costs are going to become prohibitive for waste, and the amount of data moving around increases all the time.

Perhaps you can explain the roadmap, the future, some of the concepts around optimizing data centers -- without pre-announcing things, but at least, give us a sense of what's coming.

Fisher: How about if I talk to that one first. One thing that was just announced is relevant to what Ian was just talking about. We announced recently the HP Performance-Optimized Data Center (POD), which is our container strategy for small data centers that can be deployed incrementally.

This is another choice that's available for customers. Some of the folks who are looking at it first are the big scale-out infrastructure Web-service companies and so forth. The idea here is you take one of these 40-foot shipping containers that you see on container ships all over the place and you retrofit it into a mini data center.

In the HP implementation, it's a very simple kind of layout. You just have a single row of 50U racks. I believe there’s something like 22 of them in this 40-foot container. There’s a single hot aisle and a single cold aisle, with overhead cooling that takes the exhaust hot air from the back, cools it, and delivers it to the front.

Using the HP POD, you can install any standard equipment into the 19-inch racks and build out a very efficient data center with a leading PUE from a cooling perspective. So that's yet another option on the HP side.
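PUE (Power Usage Effectiveness), as referenced here, is the ratio of total facility energy to the energy consumed by the IT equipment itself; a value approaching 1.0 means almost no overhead for cooling and power distribution. A minimal sketch of the calculation, using hypothetical figures rather than actual HP POD specifications:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    An ideal facility approaches 1.0; many legacy data centers run near 2.0."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: 600 kW of IT load, plus 180 kW for cooling
# and 60 kW of other facility overhead.
print(round(pue(600 + 180 + 60, 600), 2))  # -> 1.4
```

The same ratio can be tracked over time to measure the effect of cooling improvements like the hot-aisle/cold-aisle arrangement described above.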

From the product side of HP, one of the biggest things we're seeing is that power and cooling capacity is allocated by facilities teams in a very conservative manner. It's hard to know exactly how much energy each individual server or blade enclosure requires, so a sizable conservative reserve is typically allocated on top of what's actually being consumed.

In fact, when it's in the purview of the facilities team to allocate that power, they treat a server as they would any piece of electrical equipment and simply look at its maximum rated power requirement. What we're seeing is that this can overstate the actual power requirement by up to three times.

So, there’s an incredible opportunity to reclaim that reserve capacity, put it to good use, and continue to deploy new servers into your data center, without having to break ground on a new data center.

Very soon, you’re going to be hearing some exciting news from HP about how we’re going to provide the opportunity for fine-tuned control of exactly how much power the servers in the IT racks are going to actually use.

Gardner: So, not only are we moving toward modularity at a number of levels, we’re bringing more intelligence to bear on the problem?

Fisher: Yes. A key to addressing this problem is to have accurate measurement and the ability to have predictability and control of the actual power consumption of the core IT equipment that the whole infrastructure is supporting.

Gardner: Alright. How about a roadmap, from a strategic point of view, of methodologies and best practices? Ian, what new innovations can we expect along those lines?

Jagger: For all this complexity, it's a relatively simple path to follow. It all starts with discovery: Where are we today? Given what we know about business direction, where do we need to get to? What do we need to be capable of, from a business-technology perspective that takes a holistic view of the facilities and IT departments combined? What do they need to deliver to support the business going forward?

Then, you have a gap. The next question is how do we fill that gap, how do we get there? Various strategies can accrue from that, depending on what your needs are.

We would look at that with customers. We would sit down with them and ask some pretty basic questions. Do you need to be where you are today? If you're in Phoenix, does the data center need to be in Phoenix, or could it be in Washington state, where it's cooler and you therefore don't have the energy costs you would in Phoenix? So, let's have a look at that.

What is your position from a corporate social-responsibility perspective with respect to the environment? How visible are you in addressing that in comparison to your industry peers? What are the pressures on you to do that? So, let's have a look at alternate energy sources with respect to your data center.

For example, we have just announced our San Diego facility, which is now powered by solar panels. We are involved quite heavily right now in Iceland, providing geothermal technologies for data centers. So, a question there would be, can you be in Iceland? One issue there would be the question of latency. There are several questions that you would ask in terms of direction and how to get there.

Having answered those, you would move into the planning and design phases, and we would address those questions there too. Into the operation of any new or retrofitted site, we would build the processes for service management across both the facility and the IT structures. Service management is now not only about IT, but about the facility as well, and about how the two are brought together in one motion.

So, it's pretty much a simple lifecycle approach within a complex field, and it will get you there. Along the way, we can give rough order-of-magnitude costs and typical ROI, based on the strategies you're looking to undertake.

Gardner: It certainly sounds like being efficient and getting this larger management capability over energy and facilities and resources is becoming a core competency and not an option. Is that fair to say?

Jagger: Yes. The spin on that, going back to the example I just used of Wells Fargo and Wachovia, is: who do you turn to who can help you with that? You don't face this every day, either within facilities or within IT, and you need help. You need to reach out to where the help is.

Traditionally, in our industry, as we have been discussing, this has tended to be siloed into real estate and into IT. What's now required is a holistic view of infrastructure, meaning both the physical infrastructure and the IT infrastructure. Customers need to reach out to firms they feel comfortable reaching out to.

I think it was Andy who actually conducted this survey, so correct me if I'm wrong, Andy. We recently undertook a survey of enterprise customers in each of our worldwide regions. The finding was that the more pressing a customer's environmental and energy issues were, the more likely they were to come to HP as their vendor of choice.

Fisher: That's correct.

Gardner: Well, clearly if you don't have the holistic view you are going to have to learn how to get one, right?

Fisher: Right.

Gardner: Ian, let me direct this to you. I suppose there is some thought around environmental benefits and green IT, in which people believe that this is an additional cost or an expense. It seems to me, though, from what we have been discussing, that moving towards good environmental practices is actually moving towards good energy management practices too.

Jagger: That's absolutely right. It is not a choice of one or the other. The business outcomes that come from energy management are also environmental outcomes, but there are apparent barriers to implementing environmental solutions, which, as you just said, are really energy-management solutions. Primarily, those barriers revolve around the lack of an identifiable ROI or payback period for any green improvement, and around measuring the improvement itself.

More recently, we’ve been able to show customers the typical examples of how they can move through that environmental curve or that energy management curve going back to the industry standard benchmarks of PUE.

By showing them a rough order-of-magnitude cost to move, grade by grade, through the energy-efficiency ranking system, we show them what that cost would be, what the return would be in terms of carbon savings and dollar savings, and what the payback period would be based on those dollar savings.

So, we can have a very strategic, yet tactical, view on how to approach this. A customer can take a larger view in terms of how far they want to go with their environmental approach and balance that with their energy-management approach.

There is obviously a curve here: the larger the investment in improving energy management, the greater the return, but at some point the returns diminish relative to the additional investment you have put in. So, there is a curve, and we can show you how to get to any point along it.
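That diminishing-returns curve can be made concrete with a simple, undiscounted payback calculation. The tiers and prices below are hypothetical illustrations, not HP benchmarks:

```python
def payback_years(capex: float, annual_kwh_saved: float,
                  price_per_kwh: float = 0.10) -> float:
    """Simple payback period for an efficiency investment (no discounting)."""
    annual_savings = annual_kwh_saved * price_per_kwh
    return capex / annual_savings

# Hypothetical efficiency grades: each successive tier costs more
# and saves proportionally less -- the curve described above.
tiers = [(50_000, 1_000_000), (150_000, 1_500_000), (400_000, 1_700_000)]
for capex, kwh in tiers:
    print(round(payback_years(capex, kwh), 1))  # -> 0.5, 1.0, 2.4
```

The first grade pays back in months; the last takes years, which is why the right stopping point on the curve depends on each customer's environmental and financial goals.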

Gardner: Excellent! We have been discussing the large global problem of energy management and how it has become critical for IT operations -- energy not as an afterthought, but as the forethought and an overriding strategy for how to conduct business in IT.

I want to thank our guests today. We have been joined by Ian Jagger. He is the Worldwide Data Center Services marketing manager in HP's Technology Solutions Group. Appreciate your input, Ian.

Jagger: You’re very welcome, Dana, I am happy to have taken part.

Gardner: Andrew Fisher, the manager of technology strategy in the Industry Standard Services group at HP. Thank you, Andy.

Fisher: You are welcome.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

For more information on energy-efficiency in the data center, read the whitepaper.

For more information about HP Energy Efficiency Services.

For more information on HP Thermal Logic technology.

For more information on HP Adaptive Infrastructure.


Transcript of a BriefingsDirect podcast with HP’s Ian Jagger and Andrew Fisher on the role of energy efficiency in the data center. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.