
Friday, December 18, 2009

Careful Advance Planning Averts Costly Snafus in Data Center Migration Projects

Transcript of a sponsored BriefingsDirect podcast on proper planning for data-center transformation and migration.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the crucial migration phase when moving or modernizing data centers. So much planning and expensive effort goes into building new data centers or conducting major improvements to existing ones, but too often short shrift is given to the actual "throwing of the switch" -- the moving and migrating of existing applications and data.

But, as new data center transformations pick up -- due to the financial pressures to boost overall IT efficiency -- so too should the early-and-often planning and thoughtful execution of the migration itself get proper attention. Therefore, our podcast at hand examines the best practices, risk mitigation tools, and requirements for conducting data center migrations properly, in ways that ensure successful overall data center improvement.

To help pave the way to making data center migrations come off nearly without a hitch, we're joined by three thought leaders from Hewlett-Packard (HP). Please join me in welcoming Peter Gilis, data center transformation architect for HP Technology Services. Welcome to the show, Peter.

Peter Gilis: Thank you. Hello, everyone.

Gardner: We're also joined by John Bennett, worldwide director, Data Center Transformation Solutions at HP. Welcome back, John.

John Bennett: Thank you very much, Dana. It's a delight to be here.

Gardner: Arnie McKinnis, worldwide product marketing manager for Data Center Modernization at HP Enterprise Services. Thanks for joining us, Arnie.

Arnie McKinnis: Thank you for including me, Dana. I appreciate it.

Gardner: John, tell me why migration -- the process around the actual throwing of the switch -- and the planning that leads up to it are so essential nowadays?

New data centers

Bennett: Let's start by taking a look at why this has arisen as an issue; the reasons are almost self-evident. We see a great deal of activity in the marketplace right now of people designing and building new data centers. Of course, everyone who has successfully built a new data center now has this wonderful new showcase site -- and has to move into it.

The reasons for this growth, the reasons for moving to other data centers, are fueled by a lot of different activities. Oftentimes, multiple factors come into play at the same organization.

In many cases it's related to growth. The organization and the business have been growing. The current facilities were inadequate for purpose, because of space or energy capacity reasons or because they were built 30 years ago, and so the organization decides that it has to either build a new data center or perhaps make use of a hosted data center. As a result, they are going to have to move into it.

It might be that they're engaged in a data-center strategy project as part of a data-center transformation, where they might have had too many data centers -- that was the case at Hewlett-Packard -- and consciously decided that they wanted to have fewer data centers built for the purposes of the organization. Once that strategy is put into place and executed, then, of course, they have to move into it.

We see in many cases that customers are looking at new data centers -- either ones they've built or are hosted and managed by others -- because of green strategy and green initiatives. They see that as a more cost-effective way for them to meet their green initiatives than to build their own data centers.

There are, of course, cost reductions. In many cases, people are investing in these types of activities on the premise that they will save substantial CAPEX and OPEX cost over time by having invested in new data centers or in data center moves.

Whether they're moving to a data center they own, moving to a data center owned and managed by someone else, or outsourcing their data center to a vendor like HP, in all cases you have to physically move the assets of the data center from one location to another.

The stakes in doing that well are awfully high. If you don't do it well, you're going to impact the services provided by IT to the business. You're very likely, if you don't do it well, to impact your service level agreements (SLAs). And, should you have something really terrible happen, you may very well put your own job at risk.

So, the objective here is not only to take advantage of the new facilities or the new hosted site, but also to do so in a way that ensures the right continuity of business services. That ensures that service levels continue to be met, so that the business, the government, or the organization continues to operate without disruption, while this takes place. You might think of it, as our colleagues in Enterprise Services have put it, as changing the engine in the aircraft while it's flying.

Gardner: Peter, tell me, when is the right time to begin planning for this migration?

Migration is the last phase

Gilis: The planning starts, when you do a data-center transformation, and migration is actually the last phase of that data center transformation. The first thing that you do is a discovery, making sure that you know all about the current environment, not only the servers, the storage, and the network, but the applications and how they interact. Based on that, you decide how the new data center should look.

John, here is something where I do not completely agree with you. Most of the migrations today are not migration of the servers, the assets, but actually migration of the data. You start building a next-generation data center, most of the time with completely new assets that better fit what your company wants to achieve. This is not always possible, when your current environment is something like four or five years old, or sometimes even much older than that.

Gardner: Peter, how do you actually pull this off? How do you get that engine changed on the plane while keeping it flying? Obviously, most companies can't afford to go down for a week while this takes place.

Gilis: You should look at it in different ways. If you have a disaster strategy, then you have multiple days to recover. Actually, if you plan for a disaster in a good fashion, then it will be easy to migrate.

On the other side, if you build your new engine, your new data center, and you have all the new equipment inside, the only thing that you need to do is migrate the data. There are a lot of techniques to migrate data online, or at least synchronize current data in the current data centers with the new data center.

So, the moment you switch off the computer in the first data center, you can immediately switch it on in the new data center. It may not be changing the engines online, but at least near-online.

Gardner: Arnie, tell me about some past disasters that have given us insights into how this should go properly? Are there any stories that come to mind about how not to do this properly?

McKinnis: There are all sorts of stories around not doing it properly. In most cases, you start by decomposing what went wrong during a project. Usually, what you find out is that you did not do a good enough job of assessing the current situation, whether that was the assessment of a hardware platform, server platform, or the assessment of a facility.

It may even be as simple as looking at a changeover process that is currently in place and seeing how that affects what is going to be the new changeover process. Potentially, there is some confusion. But it usually all goes back to not doing a proper assessment of the current mode of operations, or the current mode of that operating platform as it exists today.

Gardner: Now, Arnie, this must provide to you a unique opportunity -- as organizations are going to be moving from one data center to another -- to take a hard look at what they have got. I'm going to assume that not everything is going to go to the new data center.

Perhaps you're going to take an opportunity to sunset some apps, replace some with commodity services, or outsource others. So, this isn't just a one-directional migration. We're probably talking about a multi-headed dragon going in multiple directions. Is that the case?

Thinking it through

McKinnis: It's always the case. That's why, from Enterprise Services' standpoint, we look at who is going to manage it, if the client hasn't completely thought that out. In other words, potentially they haven't thought out the full future mode of what they want their operating environment to look like.

We're not necessarily talking about starting from a complete greenfield, but people have come to us in the past and said, "We want to outsource our data centers." Our next logical question is, "What do you mean by that?"

So, you start the dialog that goes down that path. And, on that path you may find out that what they really want to do is outsource to you, maybe not only their mission-critical applications, but also the backup and the disaster recovery of those applications.

When they first thought about it, maybe they didn't think through all of that. From an outsourcing perspective, companies don't always do 100 percent outsourcing of that data-center environment or that shared computing environment. It may be part of it. Part of it they keep in-house. Part of it they host with another service provider.

What becomes important is how to manage all the multiple moving parts and the multiple service providers that are going to be involved in that future mode of operation. It's accessing what we currently have, but it's also designing what that future mode needs to look like.

Gardner: Back to you, Peter. You mentioned the importance of data, and I imagine that when we go from traditional storage to new modes of storage, storage area networks (SANs) for example, we've got a lot of configuration and connection issues with how storage and data are used in conjunction with applications and processes. How do you manage that sort of connection and transformation of configuration issues?

Gilis: Well, there's not that much difference between local storage, SAN storage, or network attached storage (NAS) and what you designed. The only thing that you design or architect today is that basically every server or every single machine, virtual or physical, gets connected to a shared storage, and that shared storage should be replicated to a disaster recovery site.

That's basically the way you transfer the data from the current data centers to the new data centers, where you make sure that you build in disaster recovery capabilities from the moment you do the architecture of the new data center.

Gardner: Again, this must come back to a function of proper planning to do that well?

Know where you're going

Gilis: That's correct. If you don't do the planning, if you don't know where you're starting from and where you're going to, then it's like being on the ocean. Going in any direction will lead you anywhere, but it's probably not giving you the path to where you want to go. If you don't know where to go to, then don't start the journey.

Gardner: John Bennett, another tricky issue here is that when you transition from one organizational facility to another, or one sourcing set to another larger set, we're also dealing here with ownership trust. I guess that boils down to politics -- who controls what. We're not just managing technology, but we're managing people. How do we get a handle on that to make that move smoothly?

Bennett: Politics, in this case, is just the interaction and the interrelationship between the organizations and the enterprise. They're a fact of life. Of course, they would have already come into play, because getting approval to execute a project of this nature would almost of necessity involve senior executive reviews, if not board of director approval, especially if you're building your own data center.

But, the elements of trust come in, whether you're building a new data center or outsourcing, because people want to know that, after the event takes place, things will be better. "Better" can be defined as: a lot cheaper, better quality of service, and better meeting the needs of the organization.

This has to be addressed in the same way any other substantial effort is addressed -- in the personal relationships of the CIO and his or her senior staff with the other executives in the organization, and with a business case. You need measurement before and afterward in order to demonstrate success. Of course, good, if not flawless, execution of the data center strategy and transformation are in play here.

The ownership issue may be affected in other ways. In many organizations it's not unusual for individual business units to have ownership of individual assets in the data center. If modernization is at play in the data center strategy, there may be some hand-holding necessary to work with the business units in making that happen. This happens whether you are doing modernization and virtualization in the context of existing data centers or in a migration. By the way, it's not different.

Be aware of where people view their ownership rights and make sure you are working hand-in-hand with them instead of stepping over them. It's not rocket science, but it can be very painful sometimes.

Gardner: Again, it makes sense to be doing that early rather than later in the process.

Bennett: Oh, you have to do a lot of this before you even get approval to execute the project. By the time you get to the migration, if you don't have that in hand, people have to pray for it to go flawlessly.

Gardner: People don't like these sorts of surprises when it comes to their near and dear responsibilities?

Bennett: We can ask both Peter and Arnie to talk to this. Organizational engagement is very much a key part of our planning process in these activities.

Gardner: Arnie, tell us a little bit more about that process. The planning has to be inclusive, as we have discussed. We're talking about physical assets. We're talking about data, applications, organizational issues, people, and process. We haven’t talked about virtualization, but moving from physical to virtualized instances is also there. Give us a bit of a rundown of what HP brings to the table in trying to manage such a complex process.

It's an element of time

McKinnis: First of all, we have to realize that one of the big factors in this whole process is time. A client, at least when they start working with us from an outsourcing perspective, has come to the conclusion that a service provider can probably do it more efficiently and effectively, and at a better price point, than they can internally.

There are all sorts of decisions that go around that from a client perspective to get to that decision. In many cases, if you look at it from a technology standpoint, the point of decision is something around getting to an end of life on a platform or an application. Or, there is a new licensing cycle, either from a support standpoint or an operating system standpoint.

There is usually something that happens from a technology standpoint that says, "Hey look, we've got to make a big decision anyway. Do we want to invest going this way, that we have gone previously, or do we want to try a new direction?"

Once they make the decision to look at outside providers, it can take anywhere from 12 to 18 months to go through the full cycle of working through all the proposals and all the due diligence to build that trust between the service provider and the client. Then, you get to the point where you can actually make the decision of, "Yes, this is what we are going to do. This is the contract we are going to put in place." At that point, we start all the plans to get it done.

As you can see, it's not a trivial deal. We've seen some of these deals get half way through the process, and then the client decides, perhaps through personnel changes on the client side, or the service providers may decide that this isn't going quite the way that they feel it can be most successful. So, there are times when deals just fall apart, sometimes in the middle, and they never even get to the contracting phase.

There are lots of moving parts, and these things are usually very large. That's why, even though outsourcing contracts have changed, they are still large, are still multi-year, and there are still lots of moving parts.

When we look at the data center world, it's just one of those things where all of us take steps to make sure that we're looking not only at the best case but at the real case. We're always building toward what can happen and trying not to get too far ahead of ourselves.

This is a little bit different from when you're just doing consulting and pure transformation and building toward that future environment. You can be a little bit more greenfield in your environment and the way you do things.

Gardner: I suppose the tendency is to get caught up in planning all about where you're ending up, your destination, and not focusing as much as you should on that all-important interim journey of getting there?

Keeping it together

McKinnis: From an outsourcing perspective, our organization takes it mostly from that state, probably more so than you could do in that future mode. For us, it's all about making sure that things do not fall apart while we are moving you forward. There are a lot of dual systems that get put in place. There are a lot of things that have to be kept running, while we are actually building that next environment.

Gilis: But, Arnie, that's exactly the same case when you don't do outsourcing. When you work with your client, and that's what it all comes down to, it should be a real partnership. If you don't work together, you will never do a good migration, whether it's outsourcing or non-outsourcing. At the end, the new data center must receive all of the assets or all of the data -- and it must work.

Most of the time, the people that know best how it used to work are the customers. If you don't work with and don't partner directly with the customer, then migration will be very, very difficult. Then, you'll hit the difficult parts that people know will fail, and if they don't inform you, you will have to solve the problem.

Gardner: Peter, as an architect, you must see that these customers you're dealing with are not all equal. There are going to be some in a position to do this better than others. I wonder whether there's something that they've done or put in place. Is it governance, change management, portfolio management, or configuration databases with a common repository of record? Are there certain things that help this naturally?

Gilis: As you said, there are different customers. You have small migration and huge migrations. The best thing is to cut things into small projects that you can handle easily. As we say, "Cut the elephant in pieces, because otherwise you can't swallow it."

Gardner: But, even the elephant itself might differ. How about you, John Bennett? Do you see some issues where there is some tendency toward some customers to have adopted certain practices, maybe ITIL, maybe service-oriented architecture (SOA), that make migration a bit smoother?

Bennett: There are many ways to approach this. Cutting up the elephant so you can eat it is a more interesting way of advising customers to build out their own roadmap of projects and activities and, in the end, implement their own transformation.

In an ideal data center project, because it's such a significant effort, it's always very useful to take into consideration other modernization and technology initiatives, before and during, in order to make the migration effective.

For example, if you're going to do modernization of the infrastructure, have the new infrastructure housed in the new data center, and now you are just migrating data and applications instead of physical devices, then you have much better odds of it happening successfully.

Cleaning up internally

If you can do work with your applications or your business processes before you initiate the move, what you are doing is cleaning up the operations internally. Along the way, it's a discovery process, which Peter articulated as the very first step in the migration project. But, you're making the discovery process easier, because there are other activities you have to do.

Gardner: A lot of attention is being given to cloud computing at almost abstract level, but not too far-fetched. Taking advantage of cloud computing means being able to migrate a data center; large chunks of that elephant moving around. Is this something people are going to be doing more often?

Bennett: It's certainly a possibility. Adopting a cloud strategy for specific business services would let you take advantage of that, but in many of these environments today cloud isn't a practical solution yet for the broad diversity of business services they're providing.

We see that for many customers it's the move from dedicated islands of infrastructure, to a shared infrastructure model, a converged infrastructure, or an adaptive infrastructure. Those are significant steps forward with a great deal of value for them, even without getting all the way to cloud, but cloud is definitely on the horizon.

Gardner: Can we safely say, though, that we're seeing more frequent migrations and perhaps larger migrations?

McKinnis: In general, what we've seen is the hockey-stick growth that's getting ready to happen with shared compute -- I'll just use that term for what this stuff in the data centers is, a kind of shared-compute environment. What we're moving toward, if done properly, is a breaking off, especially in the enterprise, of the security and compliance issues around data.

There is this breaking off of what can be done, what should be done at the desktop or user level, what should be kept locally, and then what should be kept at a shared compute or a shared-services level.

Gardner: Perhaps we're moving toward an inflection point, where we're going to see a dramatic uptake in the need for doing migration activities?

McKinnis: I think we will. Cloud has put things back in people's heads around what can be put out there in that shared environment. I don't know that we've quite gotten through the process of whether it should be at a service provider location, my location, or within a very secure location at an outsourced environment.

Where to hold data

I don't think they've gotten to that at the enterprise level. But, they're not quite so convinced about giving users the ability to retain data, do that processing, and have that application right there, held within the confines of the laptop, or whatever it happens to be that they are interacting with. They're starting to see that it potentially should be held someplace else, so that the risk of that data isn't held at the local level. Do you understand where I am going with that?

Gardner: I do. I think we are seeing greater responsibility now being driven toward the data center, which is going to then force the re-architecting and the capacity issues, which will ultimately then require choices about sourcing, which will then of course require a variety of different migration activities.

McKinnis: Right. It's not just about a new server or a new application. Sometimes it's as much about, "How do I stay within compliance? Am I a public company or am I am a large government entity? How do I stay within my compliance and my regulations? How do I hold data? How do I have to process it?"

Even in the world of global service delivery, there are a lot of rules and regulations around where data can be stored. In that leveraged environment that a service provider provides, potentially storage is somewhere in Eastern Europe, India, or South America. There are plenty of compliance issues around where data can actually be held within certain governmental regulations, depending on where you are -- in country or out of country.

Gardner: Let's move to Peter. Tell me a bit about some examples. Moving back to the migration itself, can you give us a sense of how this is done well, and if there are some metrics of success, when it is done well?

Gilis: As we already said in the beginning, it all depends on planning. Planning is key -- not only planning the migration itself, but also having a "plan B" -- what if it doesn't work -- because then you have to go back to the old environment as soon as possible and within the time frame given.

First, you need to plan: "Is my application suitable for a migration?" Sometimes, if you migrate your data centers from place A to place B -- as we've done in EMEA, from the Czech Republic to Austria -- the distance of 350 kilometers adds extra latency. If your programs -- and we have tested them for the customer -- already have performance problems, that little extra latency can just kill your program when you migrate.

One of the things we have done in that case is to test it using a network simulator on a real-life machine. We found that the application -- or the server -- was not suited for migration. If you know this beforehand, then you remove a risk by migrating it on its own.
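As a rough back-of-the-envelope illustration (the figures here are common approximations, not from the transcript): the minimum extra latency that distance alone adds can be estimated from the propagation speed of light in optical fiber, usually taken as about 200,000 km/s. A sketch:

```python
# Estimate the minimum extra network latency introduced by moving a
# data center 350 km away, assuming signal propagation in optical fiber
# at roughly 200,000 km/s (about two-thirds the speed of light in vacuum).

def added_round_trip_ms(distance_km: float,
                        fiber_speed_km_per_s: float = 200_000.0) -> float:
    """Return the minimum extra round-trip latency in milliseconds."""
    one_way_s = distance_km / fiber_speed_km_per_s
    return 2 * one_way_s * 1000.0  # round trip, converted to ms

extra_ms = added_round_trip_ms(350)
print(f"Extra round-trip latency for 350 km: {extra_ms:.1f} ms")  # 3.5 ms

# A chatty application making 10,000 sequential round trips per
# transaction would gain about 35 seconds per transaction -- enough to
# "kill" a program that already has performance problems.
print(f"Added per 10,000 round trips: {extra_ms * 10_000 / 1000:.0f} s")
```

This is only the physical floor; queuing, routing hops, and protocol overhead add more, which is why Gilis tests with a network simulator rather than relying on the arithmetic alone.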

At another customer, I saw that people had divided the whole migration process into multiple streams, but there was a lack of coordination between them. If a shared application related to more than one stream, the planning of one stream could be totally in conflict with the planning of another. Applications and data moved without the other streams being informed, causing huge delays in real life, because the other applications were no longer synchronized in the way they used to be -- assuming they were synchronized before.

So, if you don't plan and work together, you will definitely have failures.

Gardner: You mentioned something that was interesting about trying to do this on a test basis. I suppose that for that application development process, you'd want to have a test and dev and use some sort of a testbed, something that's up before you go into full production. Perhaps we also want to put some of these servers, data sets, and applications through some sort of a test to see if they are migration ready. Is that an important and essential part of this overall process?

Directly to the site

Gilis: If you can do it, it's excellent, but sometimes we still see in real life that not all customers have a complete test and dev environment, or not even an acceptance environment. Then, the only way to do it is to move the real-life machine directly to the new site.

I've actually seen it. It wasn't really a migration, but an upgrade of an SAP machine. Because of performance problems, the customer needed to migrate to a new, larger server. And, because of the pressure of the business, they didn't have time to move from test and dev, to acceptance, and to production. They started immediately with production.

At two o'clock in the morning we found that there was a bug in the new version and we had to roll back the whole migration and the whole upgrade. That's not the best time in the middle of the weekend.

Gardner: John Bennett, we've heard again and again today about how important it is to do this planning, to get it done upfront, and to get that cooperation as early as possible. So the big question for me now is how do you get started?

Bennett: How you get started depends on what your own capabilities and expertise are. If these are projects that you've undertaken before, there's no reason not to implement them in a similar manner. If they are not, it starts with the identification of the business services and the sequencing of how you want them to be moved into the new data center and provisioned over there.

In order to plan that level of detail, you need to have, as Peter highlighted earlier, a really good understanding of everything you have. You need to fully build out a model of the assets you have, what they are doing, and what they are connected to, in order to figure out the right way to move them. You can do this manually, or you can make use of software like HP's Discovery and Dependency Mapping software.

If the size of this project is a little daunting to you, then of course the next step is to take advantage of someone like HP. We have Discovery Services, and, of course, we have a full suite of migration services available, with people trained and experienced in doing this to help customers move and migrate data centers, whether it's to their own or to an outsourced data center.

Peter talked about planning this with a disaster in mind to understand what downtime you can plan for. We have successfully undertaken customer data center migration projects, which had minimal or zero operational disruption, by making clever use of short-term leases to ensure that business services continue to run, while they are transitioned to a new data center. So, you can realize that too.

But, I'd also ask both Peter and Arnie here, who are much more experienced in this, to highlight the next level of detail. Just what goes into that effective planning, and how do you get started?

Gardner: I'd also like to hear that, Peter. In the future, I expect that, as always, new technologies will be developed to help on these complex issues. Looking forward, are there some hopeful signs that there is going to be a more automated way to undertake this?

Migration factory

Gilis: If you do a lot of migrations -- and that's actually what most of the service companies like HP are doing -- you know how to do migrations and how to treat the applications being migrated as part of a "migration factory."

We actually built something like a migration factory, where teams are doing the same over and over all the time. So, if we have to move Oracle, we know exactly how to do this. If we have to move SAP, we know exactly how to do this.

That's like building a car in a factory. It's the same thing day in and day out, every day. That's why customers are coming to service providers. Whether or not you are outsourcing, you should use a service provider that builds new data centers, transforms data centers, and migrates data centers nearly every day.
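The factory analogy can be made concrete with a small sketch. Nothing here reflects an actual HP runbook -- the step names and workload are invented -- but it shows the pattern: one fixed, rehearsed sequence of steps, parameterized only by the workload:

```python
# One fixed runbook applied identically to every workload -- the essence of
# a "migration factory." Step names here are illustrative, not an actual
# HP service catalog.
RUNBOOK = ["snapshot", "replicate", "verify", "cutover", "decommission-source"]

def migrate(workload, execute):
    """Run every runbook step against one workload, in order."""
    log = []
    for step in RUNBOOK:
        execute(step, workload)  # each step is a rehearsed, repeatable procedure
        log.append(f"{workload}:{step}")
    return log

# In a real factory, `execute` would dispatch to Oracle- or SAP-specific
# procedures; here it is a no-op stand-in.
log = migrate("oracle-db", lambda step, workload: None)
print(log[0], log[-1])  # -> oracle-db:snapshot oracle-db:decommission-source
```

Because the sequence never varies, only the per-workload procedures do, the factory team gets the same repeatability a production line does.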

Gardner: I'm afraid we're just about out of time and we're going to have to leave it there. I want to thank our guests for an insightful set of discussion points around data center migration.

As we said earlier, major setups and changes with data-center facilities often involve a lot of planning and expense, but sometimes not quite enough planning goes into the migration itself. Here to help us better understand and look towards better solutions around data center migration, we have been joined by Peter Gilis, data center transformation architect for HP Technology Services. Thanks so much, Peter.

Gilis: Thank you.

Gardner: Also John Bennett, worldwide director, Data Center Transformation Solutions at HP. Thanks, John.

Bennett: You're most welcome, Dana.

Gardner: And lastly, Arnie McKinnis, worldwide product marketing manager for Data Center Modernization in HP Enterprise Services. Thanks for your input, Arnie.

McKinnis: Thank you, Dana. I've enjoyed being included here.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Transcript of a sponsored BriefingsDirect podcast on proper planning for data-center transformation and migration. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Wednesday, October 07, 2009

Long-Overdue Network Transformation Must Support Successful Data Center Modernization

Transcript of a BriefingsDirect Podcast examining how data-center transformation requires a new and convergent look at enterprise network architecture.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Special Offer: Gain insight into best practices for transforming your data center by downloading three new data center transformation whitepapers from HP at www.hp.com/go/dctpodcastwhitepapers.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on reevaluating network architectures in light of newer and evolving demands. Most enterprise networks are the result of a patchwork effect of bringing in equipment as needed over the years to fight the fire of the day, with little emphasis on strategy and the anticipation of future requirements.

Nowadays, we see that network requirements have, and are, shifting, as IT departments adopt improvements such as virtualization, software as a service (SaaS), cloud computing, and service-oriented architecture (SOA).

The network loads and demands continue to shift under the weight of Web-facing applications and services, security and regulatory compliance, governance, ever-greater data sets, and global area service distribution and related performance management.

It doesn't make sense to embark upon a data-center transformation journey without a strong emphasis on network transformation as well. Indeed, the two ought to be brought together, converging to an increasing degree over time.

Here to help explain the evolving role of network transformation, and to rationalize the strategic approach to planning and specifying present and future enterprise networks, is Lin Nease, director of Emerging Technologies, HP ProCurve. Welcome to the show, Lin.

Lin Nease: Thank you.

Gardner: We're also joined by John Bennett, worldwide director, Data Center Transformation Solutions at HP. Hello, John.

John Bennett: Hi, Dana.

Gardner: And, Mike Thessen, practice principal, Network Infrastructure Solutions Practice in the HP Network Solutions Group. Welcome to the show, Mike.

Mike Thessen: Thank you. Hello, everyone.

Gardner: John Bennett, let's start with you. Tell me a little bit about the typical enterprise network as it’s evolved, and how does that affect data-center transformation? [Related podcast: See how energy conservation factors into data center transformation.]

Helping customers

Bennett: Let's start by reminding people what data-center transformation is about. Data-center transformation is really about helping customers build out a next-generation data center, an adaptive infrastructure, that is designed to not only meet the current business needs, but to lay the foundation for the plans and strategies of the organization going forward.

In many cases, the IT infrastructure, including the facilities, the servers, the network, and storage environments can actually be a hindrance to investing more in business services and having the agility and flexibility that people want to have, and will need to have, in increasingly competitive environments.

When we talk about that, very typically we talk a lot about facilities, servers, and storage. For many people, the networking environment is ubiquitous. It's there. But, what we discover, when we lift the covers, is that you have an environment that may be taking lots of resources to manage and keep up-to-date.

You may have an environment that is not capable of moving network connections as quickly as servers, applications, and storage devices need and you want them to in order to meet your agility objectives.

We also find an environment that can be a cabling nightmare, because it has grown organically over time. So, in looking at data-center strategy and data-center transformation, we have to make sure that the whole data-center architecture -- including both the network infrastructure inside the data center and the services it provides to the organization -- is really aligned to meet those goals and objectives.

This becomes increasingly important as we continue to experience incredible explosions in storage and data volumes, with new types of information and with the historical information that must be maintained.

The networking infrastructure becomes key, as an integration fabric, not just between users in business services, but also between the infrastructure devices in the data center itself.

That's why we need to look at network transformation to make sure that the networking environment itself is aligned to the strategies of the data center, that the data center infrastructure is architected to support those goals, and that you transform what you have and what you have grown historically over decades into what hopefully will be a "lean, mean, fighting machine."

Gardner: Lin Nease, from the perspective of an architect, is there a lag, or perhaps a disconnect, between the trajectory and evolution of networks and where the entire data center has been moving toward?

Multiple constituencies

Nease: Absolutely. The network has basically evolved as a result of the emergence of the Internet and all forms of communications that share the network as a system. The server side of the network, where applications are hosted, is only one dimension that tugs at the network design in terms of requirements.

You find that the needs of any particular corner of the enterprise network can easily be lost on the network, because the network, as a whole, is designed for multiple constituencies, and those constituencies have created a lot of situations and requirements that are in themselves special cases.

In the data center, in particular, we've seen the emergence of a formalized virtualization layer now coming about and many, many server connections that are no longer physical. The history of networking says that I can take advantage of the fact that I have this concept of a link or a port that is one-to-one with a particular service.

That is no longer the case. What we're seeing with virtualization is challenging the current design of the network, and it is one of the requirements provoking a change in overall enterprise network design.

Gardner: Mike Thessen, from a systems integrator problem set, have there been different constituencies, perhaps even entirely different agendas, at work in how networks are put together and then how the data center requirements are coming around?

Thessen: Sure. From the integrator perspective, we get involved with clients' real problems and real requirements. People listen to the press and they are trying to do the right thing. What we are finding is that, many times, they get distracted by that and lose sight of the fact that what they're really trying to do is provide access to applications within their data center to their user base.

We try to bring them back around, when we work with them on a consulting basis, to not so much focus on products, but on what they are trying to achieve overall at a very high level -- just to get things started.

Gardner: Is there a new philosophy perhaps that needs to be brought to the table, Mike, around planning for network, data center, and storage requirements in tandem? This seems to have been something that happened on a fairly linear basis in the past. How do we get that to be simultaneous, or is that the right way to go?

Thessen: In my mind, you are really talking about collaboration. Data-center networking certainly cannot happen in a vacuum. In years past, you were effectively just providing local area network (LAN) and wide area network (WAN) connectivity. Servers were on the network, and they got facilities from the network to transport their data over to the users.

Now, everything is becoming converged over this network -- "everything" being data, storage, and telephony. So, it's requiring more towers inside of corporate IT to come together to truly understand how this system is going to work together.

Gardner: Lin Nease, is this about services orientation? Do some of the same methods and best practices and architectural approaches from the application side come to bear on the network features and functions as well?

The only way out

Nease: Absolutely. In fact, that's the only way out. With the new complexity that has emerged, and the fact that traditional designs can no longer rely on physical barriers to implement policies, we have reached a point, where we need an architecture for the network that builds in explicit concepts of policy decisions and policy enforcement. They're not always in the same place, and it's not always intuitive where a policy should be enforced or decided upon.

As a result of that, the only way out is to regard the network itself as a service that provides connectivity between stations -- call them logical servers, call them users, or call them applications. In fact, that very layering alone has forced us to think through the concept of offering the network as a service.

So, service orientation is crucial. It will allow those who build infrastructure to no longer be forced into an ad-hoc situation, where they build infrastructure in an extremely fluid manner with respect to how applications are being designed, but to move instead to a much more formal presentation of what they are offering as a service. That presentation becomes the design target for both sides, and, as I said, that's probably the only way out, given the complexity that's emerged.

Just to give you an example, look at a virtual server today and some of the new technologies being proposed, like single-root I/O virtualization (SR-IOV) combined with virtual switching, edge blade switches, and standalone switches. You could have seven or eight queues separating an application from the core of the network. That's far more than in the past. That complexity is going to cause applications to break. So, this mentality is probably the only way out.

Gardner: I'd like to drill down, if we could, on virtualization. There are several different layers and levels of this course. We're starting to even hear more about desktop virtualization, PC-over-IP, and different approaches to bring the essence of an operating system environment to the user, but without them really having the actual compute power locally.

Let's go to back to John Bennett. John, tell us a little bit about the different dimensions of virtualization, and how that has an impact on this complexity issue in the network?

Bennett: Virtualization is a major theme. Ann Livermore in her keynote at VMworld challenged people to think about moving from virtualizing servers to virtualizing their infrastructure, and even go beyond that. Server virtualization has just been the starting point for people moving to a shared infrastructure.

In parallel with that, we see an increasing drive and demand for virtualizing storage to have it both be more efficiently and effectively used inside the data center environment, but also to service and support the virtualized business services running in virtualized servers. That, in turn, carries into the networking fabric of making sure that you can manage the network connections on the fly, as Lin talked about.

Then, it reaches outside of the data center. Desktop virtualization is seen as a major opportunity to not only provide better control of desktop devices, but also better security and better protection of end-user data. You're taking an environment that used to run locally in an office, with just data connections back to the data center, and turning it into an environment that depends on the data center for all of the services provisioned on a virtualized desktop. So, you have that complexity taking place as well.

Virtualization is not only becoming pervasive, but clearly the networking fabric itself is going to be key to delivering high quality business services in that environment.

Gardner: Back to Mike Thessen. Tell me a bit about this from the integration perspective. How do these virtualization complexities need to be considered as folks move towards a network transformation of some sort?

Understanding requirements

Thessen: Virtualization, from the network perspective, really centers on several aspects. First, from the system and application perspective, we have to understand the requirements of how blade-server interconnectivity is going to be achieved, how things like dynamic movement of virtual machines between hypervisors will be managed, and basically how much Layer 2 adjacency is required in the network.

While Layer 2 is expanding in the data center, it really needs to be contained such that it's limited to what is required within a pod, cell, module, or whatever the term is that a client may use to define a span of the data center.

We don't want to allow Layer 2 domains to expand across the entire data center or be unlimited between data centers. We want to contain this Layer 2 environment. While it's getting bigger we don't want to have the attitude that we'll just allow it to go everywhere. There will still be issues with that large span of Layer 2 Ethernet connectivity, and from a manageability perspective it gets very complex.
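A minimal sketch of how that containment rule might be checked, assuming a naming convention that encodes the pod in each switch name (the VLAN IDs and switch names here are invented for illustration):

```python
# Flag any Layer 2 domain (VLAN) that spans more than one pod/cell,
# per the containment rule described above. Data is illustrative.
vlan_switch_map = {
    110: ["pod1-sw1", "pod1-sw2"],
    120: ["pod1-sw2", "pod2-sw1"],  # spans two pods -- a violation
}

def pod_of(switch_name):
    """Assumes switch names encode their pod as the first dash-separated field."""
    return switch_name.split("-")[0]

def oversized_vlans(vlan_map):
    return [vlan for vlan, switches in vlan_map.items()
            if len({pod_of(s) for s in switches}) > 1]

print(oversized_vlans(vlan_switch_map))  # -> [120]
```

An audit like this, run against a real switch inventory, is one way to keep a growing Layer 2 environment from quietly sprawling across the whole data center.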

Second, there is a trend to utilize network device virtualization to eliminate the need for things like spanning tree, the redundant default gateway mechanisms, and so forth. Those are different ways to use technology to expand the Layer 2 domains, but limit the risk associated with that.

Third, there is a trend to utilize device and routing control-plane virtualization for logical separation of external-facing applications, especially in industries like financial services.

The test, development, and QA environments are extremely important, especially as things become more virtualized. Things really have to be tested. We really promote having our clients spend the extra money to have that lab always available for testing. Then, naturally, the question is how to do that less expensively.

You can use certain virtualization techniques in the networking hardware to separate those environments in a logical manner, as opposed to having to buy completely separate networks to do your testing.

The fourth thing is that networks need to be prepared for the convergence of the communication paths for data and storage connectivity inside the data center. That's the whole convergence story -- Converged Enhanced Ethernet and Fibre Channel over Ethernet (FCoE). That's the newest leg of the virtualization aspect of the data center.

Gardner: Of course, nowadays, the IT and other departments, and telecom folks are all under pressure to cut cost. So, if we are going to transform networks, not only do we have to look at complexity and bringing up the support of additional requirements, but folks are looking for efficiencies as well.

Lin Nease, what about network transformation? It has to meet new requirements, but can it, at the same time, deliver efficiencies, higher utilization, and lower overall cost?

Accounting for use cases

Nease: It's important to account for all the use cases that are critical to the enterprise, and it's possible to design networks that have what I will call a most common denominator. What we've found with HP's own huge data center consolidation was that ruthless standardization was the key to cutting cost.

The way I cut cost is I don't have an artificial metric like server utilization, CPU utilization or network utilization. I have a simple metric of budget. And from budget, comes all other mechanisms for optimization.

A most common denominator network is another way of saying, "I can build a substrate of the data center network, at least for this point in time -- call it a pod, call it a cell, call it a unit of modularity. I can put it in place and it will solve every use case I care about -- everything from the high-bandwidth, low-latency requirements of middleware clustering or database clustering all the way down to the mundane."

I can cross server tiers, for example, with relatively low bandwidth requirements, but I can do it all from a very high-performance substrate. If I have one design, I have only one thing I need to manage. Operationally, it solves multiple problems at once. Rather than being purpose-built for each application, the network is now built once, as a standard. And because it solves all the problems, I can change the nature of what my change review boards look like, for example.

If I want to go in and put in new servers, I don't have to worry about including someone from a particular department in the decision, because I know the network works for all the use cases I care about.

Gardner: Lin, as we try to get that overall perspective of a solution approach, do we also find ourselves able to cut energy use? That seems to be an important part of a lot of transformation initiatives as well.

Nease: Oddly enough, on one side of the coin, I just talked about ruthless standardization. The simplest way to drive down the cost of my process is to overkill. On the other hand, now we have the concept that overkill is bad, because it consumes a lot of energy. Well, here's our way out. Here's the degree of freedom we have.

Thermal management is probably the biggest hitter in terms of energy savings and consumption, going forward. The key for networking in particular is, number one, to enable higher utilization of servers. That's the most direct way of saving on energy. Then, secondly, to make sure that the thermal design of the data center is optimized.

It's quite possible to have a completely independent architectural approach to logical topology versus the approach to physics. In this case, when I say physics, I mean how I support the hot aisle, cold aisle, the ventilation, and the air-pressure drops.

If I can address the cooling aspect of the data center -- and it turns out that air conditioning and cooling account for more than half the energy consumption in a typical data center -- then by optimizing on the thermal front, I can still have a very simple network and separate that concern from the topology architecture.
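A quick back-of-the-envelope calculation, with illustrative figures rather than measured data, shows why the thermal front is such a big lever:

```python
# Illustrative arithmetic only: if cooling is more than half of total
# consumption, trimming cooling load moves total energy use more than
# any other single lever. These are invented figures, not HP data.
it_load_kw = 1000
cooling_kw = 1100           # "more than half" of the 2100 kW total
total_kw = it_load_kw + cooling_kw

pue = total_kw / it_load_kw  # power usage effectiveness: total / IT load
print(round(pue, 2))         # -> 2.1

# A 20% cooling improvement cuts total consumption by roughly 10%.
saved_kw = 0.20 * cooling_kw
print(round(saved_kw / total_kw, 2))  # -> 0.1
```

The same arithmetic explains why thermal optimization and higher server utilization dominate most data-center energy programs.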

Gardner: We hear the term "convergence" batted around a lot, and just from our call today, I can tell that convergence really has multiple aspects and perhaps even multiple levels of convergence.

Back to you, John Bennett. In talking about our network transformation philosophy, are we converging storage and data with applications? Is it data center and network? Is it on-premises and off-premises, or cloud? How can we set a taxonomy or get a handle on what we mean by convergence nowadays?

Better integration

Bennett: Fundamentally, convergence is about better integration across the technology stacks that help deliver business services. We're saying that we no longer need the dedicated connections between servers for high availability to be separate from the connections to the storage devices -- which carry both high-volume and high-frequency data access for business services -- or from the connections between the network devices that form the topology of the networking environment.

Rather, we're saying that today we can have one environment capable of supporting all of these needs, architected properly for a particular customer's needs, and we can even bring the previously separate communications infrastructure for voice into that same environment.

So, we're really establishing, in effect, a common nervous system. Think about the data center and the organization as the human body. We're really building up the nervous system, connecting everything in the body effectively, both for high-volume needs and for high-frequency access needs.

Gardner: Mike Thessen, we've heard here about the need for convergence and managing complexity. We're hearing about a lot more services coming into play. It now sounds as if we are taking what people refer to as a utility or grid approach for data and applications and we are applying that now to networks. Is that how we should be thinking about this, instead of getting bogged down in convergence? Is this really more of a cloud or fabric approach that includes network services?

Thessen: As someone said a few minutes ago, at some level the network's primary aspect needs to be utility. When you're talking about clouds, they don't have to be what people think of as clouds from Amazon or wherever; they can be clouds inside a client's own IT environment. So, it's possible to do something like replace the way clients typically provide external access to their data. These cases come to mind especially in the financial industry.

The most important thing is really still the brutal standardization, as Lin said -- network modularity, logical separation, utilizing those virtualization techniques that I talked about a few minutes ago, and very well-defined communications flows for those applications. When things may not go right, when things break, or when there are performance issues, there is documentation there that defines who is talking to what.

Additionally, you need those communication flows especially in these SaaS or cloud-computing, or convergence environments to truly secure those environments appropriately. Without understanding who is talking to whom, how applications communicate, and how applications get access to other IT services, such as directory services and so forth, it's really difficult to secure them appropriately.

We haven't talked about WAN very much. We've been focused on the data center. But, data centers are more or less useless, without people being able to access them through some sort of wide-area facility.

We focus a lot on determining how these new applications are going to communicate over the WAN, by doing dependency mapping of the applications and by doing transaction profiling of the applications from the network perspective. We identify not only how much bandwidth is required per transaction and how many users will be hitting it at any given point in time, but also how latency is going to affect the end-user experience.

If you move everything into the cloud, which implies virtual centralization, the users are now more separated from that. So, you really have to pay close attention to how latency is going to affect these new environments.
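A hedged, back-of-the-envelope model of that point -- the formula and numbers are illustrative only -- shows why round-trip latency, multiplied by the number of round trips per transaction, can dominate the user experience once users sit across a WAN:

```python
# Illustrative model: per-transaction time is round trips x latency plus
# transfer time. All figures are invented for the comparison.
def transaction_time_ms(round_trips, rtt_ms, payload_kb, bandwidth_mbps):
    transfer_ms = payload_kb * 8 / (bandwidth_mbps * 1000) * 1000
    return round_trips * rtt_ms + transfer_ms

# Same application, same bandwidth: LAN user vs. user across a 40 ms WAN link.
lan = transaction_time_ms(round_trips=20, rtt_ms=1, payload_kb=200, bandwidth_mbps=10)
wan = transaction_time_ms(round_trips=20, rtt_ms=40, payload_kb=200, bandwidth_mbps=10)
print(round(lan), round(wan))  # -> 180 960
```

Bandwidth is identical in both cases; the chatty protocol's 20 round trips against a 40 ms round-trip time is what makes the centralized experience five times slower.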

Gardner: It certainly seems to be an awful lot to bite off, chew, and factor in when we move toward network transformation. I wonder what some of the common mistakes are that people make as they approach a certain path -- a crawl-walk-run, a methodological, or an architectural overview. What might they do that prevents them from getting where they want to be? Let's start with you, Lin. What are some common mistakes people make as they start to move toward network transformation?

The most common mistake

Nease: This one is near and dear to my heart, being an evangelist for the networking business. Too often people are compelled by a technology approach to rethink how they are doing networking. IT professionals will hear the overtures of various vendors saying, "This is the next greatest technology. It will maybe enable you to do all sorts of new things." Then, people waste a lot of time focusing on the technology enablement, without actually starting with what the heck they're trying to enable in the first place.

Unfortunately, I think this is the most common mistake by far. I'll give you an example of how the inevitability of some technology trend will probably lead people down a path of far less optimization than they are expecting, and this is the convergence of storage traffic with data traffic.

There is a big difference between storage traffic and voice traffic. Voice traffic is limited by the perceptions of human beings. It will never require more bandwidth for a phone call, and probably less than it does today, as technology evolves. It's very easy to incorporate that into a common plumbing.

Storage, on the other hand, is directly tied to server performance. Storage is going to continuously grow in terms of requirements. There is so much focus on replacing the technologies that we don't like that we forget about what we're trying to enable from an application perspective.

How do I have applications that are deployed on infrastructure that follow the potential energy of business requirement changes, rather than first focusing on how the plumbing works? That's the biggest mistake people make. What they'll find is that they'll look at a lot of these technologies. Two years from now, you'll see networks that look quite a bit like they do today, because the focus has not been on enabling what it is that people are trying to actually accomplish in their business.

Gardner: Mike Thessen, the same question. Are there some common mistakes, from your vantage point, as a systems integrator, that folks fall prey to as they move into network transformation mode?

Thessen: Lin focused on picking a technology. If you take that a step farther, lots of times our clients are hell-bent on picking a specific product or a specific vendor, prior to actually defining the requirements.

I have a couple of sayings: a bill of materials is not a design, and it takes a lot of effort to turn a PowerPoint presentation into something you can actually implement. We often see clients come to us with a bill of materials already in hand. Then, they want you to back into an architecture, a design, and an implementation strategy.

As I said earlier, we take a different approach. We prefer to get in earlier and really strategize with the client: what are we trying to do, where are we coming from, and where are we trying to get to?

Testing is required

I mentioned this earlier. Sufficient testing of new technology in a dedicated lab environment is absolutely required, whether you're talking about how the applications are going to work or just making sure that you can get the network components working together properly, with all the new features and functions you might want to implement. It's absolutely key, especially for data-center environments, to have that test environment.

Sometimes, we also see that the need for one-gigabit, or lower, speed transports is forgotten. Everybody is all wrapped up around 10 gig, 10 gig, 10 gig -- they've got to have it everywhere.

We typically recommend a mix of 1 gig and 10 gig, based on the requirements of the existing servers. Ten gig coming from blades is absolutely right on target, but do we really need 10 gig for every server on the network? Probably not, at this time.

What we need to look at is the real performance of these servers and when the next technology-refresh cycle for them is going to occur. Possibly, if we do our modularity and standardization process right, we can rev servers and network gear simultaneously within those modules and really keep our costs down, as opposed to piling 10 gig on everything right now.
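One way to picture that sizing discipline is a simple rule that assigns a 10-gig port only where measured demand justifies it. The thresholds, headroom factor, and traffic figures below are invented for illustration:

```python
# Illustrative port-sizing rule: upgrade to 10 Gb only when measured peak
# demand, with headroom, exceeds what a 1 Gb port can carry. Data invented.
peak_mbps = {"blade-esx-01": 6500, "web-03": 180, "file-01": 950}

def port_speed(mbps, headroom=2.0):
    """Pick 10G when peak demand times a growth-headroom factor exceeds 1 Gb."""
    return "10G" if mbps * headroom > 1000 else "1G"

for host, mbps in sorted(peak_mbps.items()):
    print(host, port_speed(mbps))
# -> blade-esx-01 10G
#    file-01 10G
#    web-03 1G
```

Driving the decision from measured peaks, rather than from "10 gig everywhere," is exactly the kind of requirements-first step the speakers are recommending.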

A lot of times, our clients also forget that many of the management interfaces on some equipment don't even support one gig yet. So, there is a mix of technologies and products that need to come together, and they really need to be thought through before you go out and buy the bill of materials, prior to having all your requirements in check.

Gardner: Lin Nease, how about some examples of where folks in the field have embarked on a network transformation, and taken into consideration some of these issues that we have been discussing today? What have they found? What are some of the paybacks? Any examples of success or perhaps things to avoid?

Nease: I'll start with the biggest success, and it's very close to home -- Hewlett-Packard's own data center consolidation: 85 data centers down to six. The thinking on the network, consistent with the thinking overall, was simplicity. If you were desperate and had to operate on a low budget, what actions would you take? If you don't take those actions, you should justify the benefits of doing something different from what you would do if you were desperate.

It wasn't desperation, but rather sheer cost savings, incredible cost savings, that HP got out of IT. They actually deployed 5,000 of our ProCurve devices, for example, in a network that was deliberately kept extraordinarily simple. As we talked about earlier, we ruthlessly standardized the whole VLAN topology. The approach to layering was kept very, very simple.

The law of large numbers said that they didn't actually have to build an extremely complex network to get big gains, and there is a lot more behind that than you might think. In the process of doing that in the network, HP saved well over $90 million in the deployment of just six data centers on network gear alone.

Gardner: Mike Thessen, do any examples come to mind in terms of how this should be done in the field?

Right on track

Thessen: We're working in all kinds of environments all the time, from financials, to manufacturing, to retail. The things we are covering here today are all right on track with what our clients are asking for: "How do we implement virtualization? I want to consolidate my voice and my data infrastructures. I want to be prepared for any level of convergence within the data center from a storage and data perspective, but I want to do it in the right way."

We had several instances where we put in some of the converged infrastructures just from a future-proofing perspective, because the client's timeline was right. In other cases, we talked the client out of going down that path and toward what I typically call a discrete, non-converged topology -- separate data and Fibre Channel networks, rather than Fibre Channel over Ethernet -- just because they were standardizing more on blades.

They had already more or less achieved a lot of their cabling reductions, because of the mechanisms they were using within the blades environment. So, they were able to leverage existing cabling infrastructures and didn't have to add any more. Still, they took advantage of a lot of the cost-reduction features simply by implementing a different computing platform, as opposed to going all the way to a fully converged data center network and storage environment.

It all depends on what the requirements of the client are. As an integrator, that's our first step -- what are the requirements -- and then matching technology and products to it.

Gardner: John Bennett, we are working towards network transformation. We're certainly seeing a lot of data-center transformation, bringing the two together in a cohesive, organized, perhaps harmonious way -- maybe that's wishful thinking. How do you get started on that? How do you bring these things together, and what are some of the initial steps you expect to see from people to do this successfully?

Bennett: Harmonious is actually something you can expect, and Lin has made note of the HP example here several times. That's definitely a nice outcome to have, and a possible outcome to have.

How do you get started? If you have the capabilities in-house to really do your own networking, architectures, and business process analysis, then take a step back and revisit what you've done in the past. Take a hard look at your networking environment and look at where you need the business and the organization to be running in three to five years. Basically, do the analysis to architect and build it out yourself. Then, make use of current generation tools and capabilities, as you do that.

Clearly, if you are virtualizing your infrastructure and moving in those directions, and if you want to automate a great deal of the data center environment and the business services you are running, you need to have a clean networking infrastructure as much as you have a clean storage and server environment. The ruthless standardization is foundational for doing that.

If you don't have that experience, if I were a customer, I'd be calling for help, because networking is one of the areas that is most challenging for me personally. Take advantage of a system integrator like HP and their capabilities.

Mapping your strategy

We have people who can come in and not tell you what to do, but work with you to map your business strategy to your IT and your data center strategies, and then look at what you should do over time, in order to change from what you have been to what you can be.

So, if you are self-capable, take a strategic look at it. If you want to take advantage of the experience -- and we have been doing networking since it was created -- take advantage of the experts, someone like HP, and really take that fresh look, and then the implementation and plans after that all follow, but focus on the strategy and the architecture first.

Gardner: Mike Thessen, any rules of thumb that you fall back to, when folks come to you and ask how to get started? What are the first things we need to start doing or thinking about?

Thessen: What we focus on is really developing a good strategy first. Then, we define the requirements that go along with business strategy, perform analysis work against the current situation and the future state requirements, and then develop the solutions specific for the client's particular situation, utilizing perhaps a mix of products and technologies.

One thing to note here is that HP makes networking products. We make great blade products. We have more or less everything a client would need, if it fits their solution. From our perspective in the Network Solutions Group, we know that HP solutions aren't going to fit in every case. So, we are still one of the largest Cisco worldwide gold partners. We have a vast array of other partnerships in the network space to bring together the right solution for our clients.

Gardner: The last word today goes to you, Lin Nease. Tell me a bit about what your opening salvo is when folks come to you and say, "Wow, an awful lot to think about. How do we put this into a chunk that we can get started on?"

Nease: The advice to the network architect is to look at the portfolio of applications that you are trying to enable. Don't look at the data-center network the same way you look at the enterprise network. It is different. It is specialized. Consider strongly those unique special cases that you handle today as exceptions. Think of how you would handle them more as a mainstream provider.

Be honest with which applications are going in what ways and what demands will be on the network in the future. Also, look to simplify. Always assume that your first step as an architect is to figure out how to simplify what you are trying to accomplish in your data center network design.

Gardner: Very good. I want to thank you all for joining us. We have been on a sponsored podcast discussion today on transforming network architectures in anticipation of evolving demand.

I want to thank our panel. We've been joined by Lin Nease, director of Emerging Technologies, HP ProCurve. Thank you, sir.

Nease: Thank you very much.

Gardner: John Bennett, worldwide director, Data-Center Transformation Solutions at HP.

Bennett: Thank you, Dana.

Gardner: And, Mike Thessen, practice principal, Network Infrastructure Solutions, practice in the HP Network Solutions Group. Great to have you with us, Mike.

Thessen: Thank you, everyone.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Special Offer: Gain insight into best practices for transforming your data center by downloading three new data center transformation whitepapers from HP at www.hp.com/go/dctpodcastwhitepapers.

Transcript of a BriefingsDirect Podcast examining how data-center transformation requires a new and convergent look at enterprise network architecture. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Monday, October 05, 2009

HP Roadmap Dramatically Reduces Energy Consumption Across Data Centers

Transcript of a sponsored BriefingsDirect podcast on strategies for achieving IT energy efficiency.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Gain more insights into data center transformation best practices by downloading free whitepapers at http://www.hp.com/go/dctpodcastwhitepapers.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on significantly reducing energy consumption across data centers. Producing meaningful, long-term energy savings in IT operations depends on a strategic planning and execution process.

The goal is to seek out long-term gains from prudent, short-term investments, whenever possible. It makes little sense to invest piecemeal in areas that offer poor returns, when a careful cost-benefit analysis for each specific enterprise can identify the true wellsprings of IT energy conservation.

In this discussion, we'll examine four major areas that result in the most energy policy bang for the buck -- virtualization, application modernization, data-center infrastructure best practices, and properly planning and building out new data-center facilities.

By focusing on these major areas, but with a strict appreciation of the current and preceding IT patterns and specific requirements for each data center, real energy savings -- and productivity gains -- are in the offing.

To help us learn more about significantly reducing energy consumption across data centers, we are joined by two experts from HP. Please welcome John Bennett, worldwide director, Data Center Transformation Solutions at HP. Thanks for joining, John.

John Bennett: Delighted to be here with you today, Dana. Thanks.

Gardner: We are also joined by Ian Jagger, worldwide marketing manager for Data Center Services at HP. Good to have you with us, Ian.

Ian Jagger: And, equally happy to be here, Dana.

Gardner: John Bennett, let's start with you, if you don't mind. Just upfront, are there certain mistakes that energy-minded planners often make, or are there perhaps some common misconceptions that trip up those who are beginning this energy journey?

Bennett: I don't know if there are things that I would characterize as missteps or misconceptions.

We, as an industry, are full of advice around best practices for what people should be taking a look at. We provide these wonderful lists of things that they should pay attention to -- things like hot and cold aisles, running your data center hotter, and modernizing your infrastructure, consolidating it, virtualizing it, and things of that ilk.

The mistake that customers make is that they have this laundry list and, without any further insight into what will matter the most to them, they start implementing these things.

The real opportunity is to take a step back and assess the return from any one of these individual best practices. Which one should I do first and why? What's the technology case and what's the business case for them? That's an area that people seem to really struggle with.

Gardner: So, there needs to be some sort of a rationalization for how you approach this, not necessarily on a linear, or even what comes to mind first, but something that adds that strategic benefit.

Cherry picking quick wins

Bennett: I am not even sure I'd characterize it as strategic yet. It's just understanding the business value and cherry picking the quick wins and the highest return ones first.

Gardner: Let's go and do some cherry picking. What are some of the top, must-do items that won't vary very much from data center to data center? Are there certain universals that one needs to consider?

Bennett: We know very well that modern infrastructure, modern servers, modern storage, and modern networking items are much more energy efficient than their predecessors from even two or three years ago.

So, consolidation and modernization, which reduces the number of units you have, and then multiplying that with virtualization, can result in significant decreases in server and storage-unit counts, which goes a long way toward affecting energy consumption from an infrastructure point of view.

That can be augmented, by the way, by doing application modernization, so you can eliminate legacy systems and infrastructure and move some of those services to a shared infrastructure as well.

On the facility side, and we are probably better off asking Ian to go through this list, running a data center hotter is one of the most obvious ones. I saw a survey just the other day on the Web. It highlighted the fact that people are running their data centers too cold. You should sweat in a data center.

Lot of techniques like hot and cold aisles, looking at how you provide power to the racks and the infrastructure are all things that can be done, but the list is well understood.

Because he is more insightful in this and experienced in this than I am, I'll ask Ian to identify some of the top best practices from the facilities and the infrastructure side, as well.

Jagger: Going back to the original point that John made, we have had the tendency in the past to look at cooling or energy efficiency coming from the technology side of the business and the industry. More recently, thankfully, we are tending to look at that in a more converged view between IT technology, the facility itself, and the interplay between the two.

But, you're right. There has been this well-published list of best practices, and the responsible managers, be they IT or facilities, have a lot to implement from that list. Starting with the easy ones first -- hot and cold aisles, blanking panels, being tidy with respect to cabling by running it under the floor, and items like that -- doesn't, as you alluded to, necessarily provide the best return on investment (ROI), simply because it's a best practice.

Areas of focus

When we undertake energy analysis for our customers, we tend to find the areas of focus would be around air management and environmental control -- very much to the point you mentioned about turning up the heat with respect to handling units -- and also recommendations around electrical systems and uninterruptable power supply (UPS).

Those are the areas of primary focus, and it can drill down from there on a case-by-case basis as to what works for each particular customer.

Gardner: Ian, what causes the variability from site to site? Clearly, there are some common things here that we have talked about, but what is it specifically that differentiates organizations, and they need to be mindful that they can't just follow a routine and expect to get the same results?

Jagger: Each customer has a different situation from the next, depending on how the infrastructure is laid out, the age of the data center, and even the climatic location of the data center. All of these have enormous impact on the customer's individual situation.

But there are instances where, for example, we could say to a customer, "Shut down some of your computer-room air conditioners (CRACs)," and we would identify which ones should be shut down and how many of them. That clearly would create some significant savings, and it doesn't cost anything to do. The ROI is much higher, because there is no capital expenditure required to shut down CRACs. That would be one good example.

Another example is placing floor grilles correctly, which would be on anybody's best practice list, and can have a significant impact in the scheme of things. So case-by-case would be the answer, Dana.

Gardner: Given that we have some best practices and some variability from organization to organization, let's look at these four basic areas and then drill down into each one. John Bennett, virtualization. What are the big implications for this? Why is this so important when we think about the total energy picture?

Bennett: If we look at the total energy picture and the infrastructure itself -- in particular, the server and storage environment -- one of the fundamental objectives for virtualization is to dramatically increase the utilization of the assets you have.

High utilization

This is especially a factor for industry standard servers. Historically, whether it's mainframes, HP-UX systems, or HP Integrity NonStop systems, customers are very accustomed to running those at very high utilization rates -- 70, 80, 90 percent plus.

With x86 servers, we see utilization rates typically in the 10 percent range. So, while there are a lot of interesting benefits that come from virtualization from an energy efficiency point of view, we're basically eliminating the need for a lot of server units by making much better use of a smaller number of units.

This can be further improved, as I mentioned earlier, by taking a look at the applications portfolio and doing application modernization, which has two benefits from an energy point of view.

One of them is that it allows the new applications to run on a modern infrastructure environment, so it can participate in the shared environment. Secondly, it allows you to eliminate legacy systems, sometimes very old systems, where very old is anywhere from 5 to 10 years in age or more, and eliminate the power consumption that those systems require.

Those are the benefits of virtualization, and very clearly anyone dealing with either energy cost issues or energy constraint issues or with a green mandate needs to be looking very seriously at virtualization.

Gardner: What sorts of paybacks are typical with virtualization? Is this a rounding error, a significant change, or is there some significant variability in terms of how it pans out?

Bennett: No, it's significant. It's not a rounding error. We're talking about collapsing infrastructure requirements by factors of 5, 6, or 10. You're going from 10 or 20 old servers to perhaps a couple of servers running much more efficiently. And, with modernization at play, you can actually increase that multiplication.

These are very significant from a server point of view on the storage side. You're eliminating the need for sparsely used dedicated storage and moving to a shared, or virtualized storage environment, with the same kind of cost saving ratios at play here. So, it's a profound impact in the infrastructure environment.
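As a rough sketch of that consolidation arithmetic -- the numbers below are hypothetical, not drawn from any HP engagement -- repacking a lightly utilized fleet onto fewer, busier hosts looks like this:

```python
import math

def consolidation_estimate(old_servers, old_util, target_util,
                           old_watts, new_watts):
    """Estimate post-virtualization host count and power reduction.

    Workload is measured in "fully busy server" units: the aggregate
    load of the legacy fleet, repacked onto hosts run at target_util.
    """
    workload = old_servers * old_util
    new_servers = math.ceil(workload / target_util)
    watts_saved = old_servers * old_watts - new_servers * new_watts
    return new_servers, watts_saved

# 20 legacy x86 servers at ~10 percent utilization, consolidated onto
# modern hosts run at ~70 percent (wattages are illustrative)
hosts, saved = consolidation_estimate(20, 0.10, 0.70, 500, 350)
# hosts -> 3, saved -> 8950 watts of nameplate draw
```

Even this back-of-envelope model shows why the ratios land in the 5x-to-10x range: the legacy fleet's aggregate work fits on a handful of well-utilized hosts.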

Gardner: Correct me if I am wrong, John, but virtualization helps when we want to whittle down the number of servers while we increase utilization. Doesn't virtualization also help you to expand and scale out as your demands increase, but at a level commensurate with the demand, rather than in large chunks, which may have been the case without virtualization?

Rapid provisioning

Bennett: Oh, yes. I could talk for the rest of this podcast just about virtualization benefits, so don't let me get started. But, very clearly, we see benefits in areas like flexibility and agility, to use the marketing terms, but also the ability to provision resources very quickly. We see customers moving from operational models, where it would take them weeks or months to deploy a new business service, to where they are able to do it in hours.

We see them able to shift resources to where they are needed, when they are needed, in a much more dynamic fashion.

We see improvements in quality of service as a result of those things, and we actually see availability and business continuity benefits from these. So virtualization is -- in my mind, and I have said this before -- as fundamental a data center technology as servers, storage, and networking are.

Gardner: It seems that virtualization is the gift that keeps on giving. Not only do you get a significant reduction in energy cost when you replace older systems and bring in virtualization to increase utilization, but, as you point out, over time, your energy consumption, based on demand, would be low given this ability to provision so effectively and given the ability to get more out of existing systems.

Bennett: Yes, absolutely.

Gardner: Do you have any examples? Do you have a specific customers or someone that HP has worked with who has instituted virtualization and then has come back with an energy result?

Bennett: We have a number of examples. I'll just share one example here.

The First American Corporation, America's largest provider of business information, had the requirement of being able to better align their resources to business growth in a number of business services, and also was looking to reduce energy costs; two very simple focuses. They implemented a consolidation and virtualization solution built around HP BladeSystems.

They are projecting that, on an annual basis, they're saving $714,000 in energy costs in the data center, and an additional $12,000 a year in endpoint power consumption outside of the data center.

Gardner: So that spells ROI pretty swiftly?

Bennett: Oh, yes, absolutely.

Gardner: Ian Jagger, let's go to you now on this next major topic -- application modernization. I've also heard this referred to as "cash for clunkers." What do we mean by that?

Investment opportunity


Jagger: There is a parallel that can be drawn there, in the sense of trading in those clunkers for cash that can be invested in modernization projects.

John has done a great job talking about virtualization and its parallel, application modernization. I'd like to pull those two together in a certain way. If we're looking, for example, at the situation where a customer needs a new data center, then it makes sense for that customer to look at all the cases put together -- application modernization, virtualization, and also data center design itself.

I mentioned the word “converged” earlier. Here is where it all stands to converge from an energy perspective. Data centers are expensive things to build, without doubt. Everyone recognizes that and everybody looks at ways not to build a new data center. But, the point is that a data center is there to run applications that drive business value for the company itself.

What we don't do a good job of is understanding those applications in the application catalog and the relative importance of each in terms of priority and availability. What we tend to do is treat them all with the same level of availability. That is just inherent in terms of how the industry has grown up in the last 20-30 years or so. Availability is king. Well, energy has challenged that kingship if you like, and so it is open to question.

Now, you could look at designing a facility where you have within the facility specific PODs (groups of compute resources) that would be designed according to the application catalog's availability and priority requirements, tone down the cooling infrastructure that is responsible for those particular areas, and just retain specific PODs for those that do require the highest levels of availability.

Just by doing that, by converging the facility design with application modernization, you take millions and millions of dollars out of data center construction costs, and of course out of the ongoing operating costs derived from burning energy to cool the facility.

Gardner: It sounds that with these PODs that are somewhat functionally specific we are almost mapping a service-oriented architecture (SOA) to the data center facility. Is that a fair comparison?

Jagger: Yeah. It's a case of understanding the application catalog, mapping that availability and prioritization requirement, allowing for growth, and allowing for certain levels of redundancy that ultimately you can then build a POD structure within your data center.

You don't need UPS, for example, for everything. You don't need N+1 redundancy or 2N redundancy for all applications. They are not all that critical, so why should we treat them as all being critical?
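A minimal sketch of that POD mapping, assuming a hypothetical application catalog and made-up tier names, is simply a grouping of applications by required availability:

```python
from collections import defaultdict

# Hypothetical application catalog: application -> required availability tier
catalog = {
    "order-processing": "tier-1",  # full UPS, 2N cooling
    "payroll":          "tier-1",
    "reporting":        "tier-2",  # N+1 only
    "dev-test":         "tier-3",  # no UPS, relaxed temperature band
    "archive":          "tier-3",
}

pods = defaultdict(list)
for app, tier in catalog.items():
    pods[tier].append(app)

# Each POD is then built and cooled to its own tier, rather than building
# the entire facility to tier-1 for the sake of a few critical applications.
```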

Gardner: A big part of being energy wise is really just being smart about how you understand your requirements and then apply the resources -- not too much, not too little -- sort of the Goldilocks's approach -- just right.

Talk to your utility

Jagger: One of the smartest things you can actually do as a business, as an IT manager, is to actually go and talk to your utility company and ask them what rebates are available for energy savings. They typically will offer you ways of addressing how you can improve your energy efficiency within the data center.

That is a great starting point, where your energy becomes measurable. Taking action on reducing your energy use not only cuts your operating cost, but actually allows you to get rebates from your energy company at the same time. It's a no-brainer.

Gardner: Perhaps to reverse engineer from the energy source itself and find the best ways to work with that.

Jagger: Right.

Gardner: John Bennett, is there anything that you would like to add to the topic of application modernization for energy conservation?

Bennett: I'd like to comment a bit about the point made earlier about thinking smarter. What we are advising customers to do is take a more complete view of the resources and assets that go into delivering business services to the company.

It's not just the applications and the portfolio, which Ian has spoken of, and the infrastructure from a server, storage, and networking perspective. It's the data center facilities themselves and how they are optimized for this purpose -- both from a data center perspective and from the facility-as-a-building perspective.

In considering them comprehensively in working with the facilities team, as well as the IT teams, you can actually deliver a lot of incremental value -- and a lot of significant savings to the organization.

Gardner: Let's move on to our next major category -- data center infrastructure best practices. Again, this is related to these issues of virtualizing and finding the right modernization approaches. Are there ongoing ways in which business as usual in the data center does not work to our advantage when we consider energy? Let's start with you, Ian.

Jagger: As we talked about earlier in terms of best practices, it doesn't necessarily follow that a given best practice returns the best results. I think there has to be an openness on behalf of the company itself on what actions it should take, with respect to driving down energy costs and ensuring solid ROI on any capital expenditure that's required to do that.

Just for example, I mentioned earlier that shutting off CRAC units would be one of the best practices, and turning the temperature up produces certain results.

Payback opportunity

I am thinking of one particular customer, where we suggested that they shut down three CRAC units. Now, that would give them a certain saving, but the cost of some of the work that would have to be done equaled the amount of savings for the first year. So, there is a one-year payback there, and of course everything after that point is pure savings.

But yet, with the same customer, we looked at and advised to say, well, if you use chillers with variable speed compressors, instead of constant speed compressors, then there is certainly a capital requirement there. In the case of this customer, it was about $300,000. But the return on that was $360,000 in one year.

That investment created a larger return on payback than simply shutting down the three CRAC units or indeed the correct placement of floor grilles within the data center.

That was a case not of best practice, but of something having higher impact than best practice itself. It's not easy for customers to get into the detail of this. This is where expertise comes into it. We need to go beyond the typical list of best practices to areas of expertise, and to how that expertise can highlight specific areas of payback and ROI, where the business or IT can actually justify the cost of doing the work.
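The comparison reduces to simple-payback arithmetic. The chiller figures below come from Ian's example; the CRAC work cost is a placeholder, since the transcript says only that it equaled the first year's savings:

```python
def simple_payback_years(capex, annual_savings):
    """Years of annual savings needed to recover a one-time capital outlay."""
    return capex / annual_savings

# Variable-speed chiller retrofit: ~$300,000 capex, ~$360,000/year savings
chiller = simple_payback_years(300_000, 360_000)   # ~0.83 years, about 10 months

# CRAC shutdown: one-off work cost equaled the first year's savings,
# i.e., a one-year payback (equal figures here are illustrative)
crac = simple_payback_years(50_000, 50_000)        # 1.0 year
```

On this arithmetic the capital-intensive chiller retrofit pays back faster than the "free" best practice, which is Ian's point: impact has to be computed case by case, not read off a best-practice list.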

Gardner: John Bennett, when it comes to leveraging expertise in order to bring about these efficiencies and make the right choices on how to invest on this ongoing best practices continuum, how does HP enter into this?

What are some ways in which the expertise that you've developed as a company working with many customers over many years, come to bear on some of these new customers or new instances of requirements around energy?

Bennett: We can bring it to bear in a number of ways. For customers who are very explicitly concerned about energy and how to reduce their energy cost and energy consumption, we have an Energy Analysis Assessment service. It's a great way to get started to determine which of the best practices will have the highest impact on you personally, and to allow you to do the cherry-picking that we talked about earlier. We'll ask Ian perhaps to talk a little more about that service in a moment.

For customers who are looking at things a little more comprehensively, energy analysis and energy efficiency are two aspects of a data-center transformation process. We have a data center transformation workshop, again, not necessarily to “do it for a customer”, but to work with the customer in defining what their personal roadmap would look like.

One element that is considered is the facilities and the data centers themselves. It may very well end up saying, "You need a data-center strategy project. You need to have an analysis done of the applications portfolio to business services to understand how many data centers you have, where they should be, what kinds they should be, and what you should do with the data centers you have." Or, it may be that the data centers are not an issue for that particular customer.

Gardner: Another big area where cost plays into these operational budgets, the ongoing budgets, is labor. Is there a relationship between labor in IT operations and energy? Is there some way for these two very large line items within the IT budget -- labor and energy -- to play off of one another in some productive manner?

More correlative than causative

Bennett: Well, there is a strong relationship, especially on the infrastructure best practices that impact labor. I would treat it as correlative rather than causative, but as you ruthlessly simplify and standardize your environment, as you move to a common shared infrastructure, you actually can significantly reduce your management costs and begin the process of shifting your IT budget away from management and maintenance.

We see most customers spending 70 percent plus of their operational budget on management and maintenance; the opportunity is flipping that around to where they spend 70 percent of their operational budget on business projects. So, there is a strong set of benefits that come on the people side along with the energy side.

Now, for organizations that have green strategies in addition to strategies for energy efficiency, one can use IT to help the organization be greener. Some very simple steps are to make use of HP's Halo rooms for videoconferencing and effective meetings without travel, and to set up remote access with the corresponding security, so that people can work from home offices or work remotely. A lot of things can be done with green benefits as well as energy benefits.

Gardner: John, just briefly for our listeners, how do you distinguish green from energy conservation? What's the breakdown between them?

Bennett: Well, I am not sure how to characterize the breakdown, but energy is very typically focused either on reducing direct energy cost or reducing energy consumption.

The broader green benefits will tend to look at areas like sustainability, or having what some people refer to as a neutral carbon footprint. So, if you look at your supply chain backwards and out to your customers, you're not consuming as much of the earth's resources in producing your goods and services, and you are helping your people not consume resources needlessly in delivering the business services that they provide to their customers.

It's about recycling practices, using recycled goods, packaging efficiency, cutting out paper consumption, changing business processes, and using digitization. There are a lot of things one can do that are more than just "pure energy savings." It often falls back to energy, but the whole idea of sustainability is a little bit of a different concept.

Gardner: Ian, I have heard many times the issue around cable management come up in best practices as well. What's the relationship between energy and cable management in a complex data center environment?

Jagger: Cable management, as you say, is one of those best-practice areas. There are a couple of ways you can look at it. One is the original plant design, with respect to cable ducting, and simply being accurate in that design.

Continuous operation

The second part is running an operation continuously. That operation is dynamic, so it's never going to stand still. Poor practice starts to take over after a while, and what was once well-designed and perhaps tidy is no longer the case. Cables run here and there, you move this and you move that, and so on. So, that best practice isn't sustained.

You can simply just move back in and just take a fresh look at that and say, "Am I doing what I need to be doing with respect to cabling?" It can have a significant impact, because cabling does interrupt the airflows and air pressures that are running underneath the raised floor.

It's simply a case of getting back to the best practice in terms of how it was originally designed with respect to cable management. There are products that we ourselves sell to help here -- not just design services, but racking products that enable that to happen.

Gardner: On the topic of good design, let's move to our fourth major area -- data center building and facility planning. This is for those folks who might not want to, but need to, build a whole new data center. Or, if they want to consolidate numerous data centers into a single facility, they might think about moving one or replacing it. A lot of different scenarios can lead to this.

How about starting with you, John Bennett? What do you need to consider when you are moving to a whole new facility? I would think the first thing would be where to put it -- the location.

Bennett: Actually, before you get to choosing the location, the real first question is, "What type of facility do you need?" Ian talked earlier about the hybrid data center concept, but the first questions are how big does it need to be and what does it have to do to meet and support the needs of the business? That's the first driver.

Then, you can get into questions of location. One of the interesting things about location is that there is no right answer, and there is no right answer because qualitative aspects of a customer's decision-making come into play.

There are a lot of customers, for example, who have, and run, data centers downtown in cities like New York, Tokyo and London -- very expensive real estate, but it's important to the business to have their data centers near their corporate offices.

There are companies that run their data centers in remote locations. I know a major bank on the West Coast that runs their primary data centers in Iowa. You can have strategies for having regional data centers. I think that the Oracle data center strategy is to have data centers around the world, in three locations.

HP has six data centers -- three pairs -- located in different parts of the United States, providing worldwide services.

Environmental benefits

You can choose to locate them at places that have environmental benefits, like geothermal resources. We have a new data center that we are opening in the UK, which is incredibly energy efficient -- perhaps Ian can talk briefly about that -- taking advantage of local winds. You can take advantage of natural resources from a power point of view.

Gardner: The common philosophy here is to be highly inclusive, bringing in as many of the aspects that impact the decision and long-term efficiency as possible. This is what needs to take place top-down.

Bennett: There are a lot of factors at play. The priorities and weightings of those for individual customers will vary quite significantly. So all of those need to be taken into consideration.

If you are doing a new data center project, chances are this is something that is not just going to your CFO for approval, but probably to the board of directors. It's something that not only is going to have to have a business case in its own right, but have to meet the corporate hurdle rates and be viewed as an opportunity cost for the organization. These are very fundamental business decisions for many customers.

Gardner: Ian Jagger, when we look at these new facilities, factoring in a much lower energy footprint than was possible with older facilities might help make that decision and might prompt that board to move sooner rather than later.

Jagger: Right. Going to the point of where to locate it, some companies do have preferences for a data center to be located adjacent to where they are actually conducting business. That doesn't necessarily follow for everyone.

But the play of climate on a data center and energy efficiency is truly significant. We have a model within our Energy Efficiency Analysis that will model for our customers the impact of where a data center could be based, based on climate zone and the relative impact of that.

The statistics are out there in terms of breaking climate zones into eight regions -- One being the hottest and Eight the coldest -- and then applying humidity metrics on top of that as well. Just going from one to the other can double or even triple the power usage effectiveness (PUE) rating, which is the ratio of the total energy coming into the data center to the energy actually used to power the IT equipment. Siting the data center can have an enormous impact on cost and efficiency.
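PUE itself is a simple ratio, and a quick sketch makes the metric concrete. This is only an illustration of the arithmetic; the figures below are made up, not drawn from this discussion:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (every watt goes to IT). A PUE of 2.0
    means a full watt of overhead (cooling, fans, lighting, power
    distribution) for every watt of IT load.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical facilities, each carrying a 1,000 kW IT load:
legacy = pue(total_facility_kw=2000, it_equipment_kw=1000)      # 2.0
modern = pue(total_facility_kw=1400, it_equipment_kw=1000)      # 1.4
```

As Jagger notes, the raw number only means something relative to peers in a comparable climate zone, since cooling overhead dominates the numerator.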

Gardner: I imagine your thoughts earlier about the PODs, and differentiation within the data center based on certain new high-level requirements, could also now be brought to bear, along with cabling, when you are planning a new facility -- something that you might not have been able to retrofit into an older one.

Rates of return

Jagger: It's easier for sure to design that into a new facility than it is to retrofit it to an old one, but that doesn't exclude applying the principle to old ones. You would just get to a point where you have a diminishing rate of return in terms of the amount of work that you need to do within an older data center, but certainly you can apply that.

The premise here is to understand the possible savings, or the possible efficiency available to you, through forensic analysis and modeling. That has got to be the starting point, followed by understanding the costs of building in that efficiency.

Then, you need a plan that shows those costs and savings and the priorities in terms of structure and infrastructure, have that work in a converged way with IT, and of course the payback on the investment that's required to build it in the first place.

Gardner: I wonder if there are any political implications around taxation, carbon footprint, and cap-and-trade types of legislation. Any thoughts about factoring location and new data centers in with some of those issues that also relate to energy?

Bennett: Certainly, there are. The UK, for example, already has regulations in place for new buildings that would impact a new data center design project. There is a Data Center Code of Conduct standard in the European Union. It's not regulation yet, but many people think that these will be common in countries around the world -- sooner rather than later.

Gardner: So, yet another indication that getting a full comprehensive perspective when considering these energy issues is very important.

Let's go back to examples. Do we have some instances where people have created entirely new data centers, done the due diligence, and looked at this variety of perspectives from an energy point of view -- and what's been the result? Are there some metrics of success to look at?

Jagger: I think John spoke earlier about a data center we recently built in the UK. The specific site was on the Northeast coast of the UK. I know the area well.

Bennett: It sounds like you might, Ian.

Jagger: The highly chilled air coming off the sea has a significant part to play in the cooling efficiency of the data center, because we have simply taken that air and are using it to chill the data center. There are enormous efficiencies there.

We've designed data centers using geothermal activity. Iceland is a classic. Iceland sets itself up as, "Come to us. Bring your data center to us, because we can take advantage of the geothermal resources in place."

Examining all factors

To slightly argue against that, there are a number of data centers being sited in locations like Arizona, where you would consider the cost of cooling the data center to be much greater. Well, the humidity factor plays into that, because there is relatively low humidity there.

The other factor that comes into that is how you work with the utility company and what the utility rates are -- how much you are paying per kilowatt-hour for energy. Still other factors come into play, like general security with respect to the data center.

There are lots of instances where siting the data center is determined by the political considerations that you've talked about. It could be a matter of taking advantage of natural resources, or of where incentives are greater. There are many, many reasons. This would be part of any study, and the modeling that I talked about should take it all into account.

Gardner: So, clearly, there are many, many variables and a great deal of complexity. A global perspective and a great deal of experience certainly prove productive when moving into this.

Jagger: Just to give you a specific example, we recently ran an analysis for a company based in Arizona. They were interested in understanding what the peer comparison would be for other companies in a similar climate zone -- how efficient were they in comparison to peers that they could correctly compare themselves to?

You can look at energy efficiency, but part of the game is understanding your relative efficiency compared to others. What is it that you consider efficient? Because of the influence of climate, a data center with a PUE of 2 in one climate zone may actually be more efficient than a data center with a PUE of 1.4 in another. A true peer-to-peer comparison would reflect that.

Gardner: How does an organization begin? We've talked about new data centers, modernization, virtualization, and refining and tuning best practices. Any thoughts on how to get started and where some valuable resources might reside?

Do you have a plan?

Jagger: To me, the first question is whether you're improving efficiency according to a plan. Do you know the business benefit and the ROI of each improvement that you are considering? If you don't start at that point, you're going to get lost. So, what is the plan that you are looking to follow, and what is the business benefit that would follow from that plan?

Bennett: That plan derives from having a data center strategy, in the positive sense of the word, which is understanding the business strategy and its plans going forward. It's understanding how the business services provided by IT contribute to that business strategy and then aligning the data centers as one of many assets that come into play in delivering those business services.

We see a lot of customers who have either very aged data center strategies or don't have formal data center strategies, and, as a result, aren't able to maximize the value that they deliver to the organization.

Jagger: You may have noticed a recurring theme throughout this podcast from John and me: one of convergence, or synchronization, between IT and the facilities. I think that's apparent.

Don't necessarily focus on IT as a starting point. At the end of the day, in even an average data center, most of the power typically is not going to the servers, but to cooling, fans, and lighting -- the non-IT-productive elements. Less than half would be going to the servers.

So, look at some of the other areas beyond IT itself. Those generally would be infrastructure areas.

You've also got to consider how you're going to measure this. How do you measure your efficiency? Some level of automation for energy measurement and discovery should be built in.

Gardner: So, that falls back into the realm of IT financial management.

Jagger: Right.

Gardner: We have been discussing ways in which you can begin realistically reducing energy consumption across data centers -- old data centers and new data centers -- and applying good practices, regardless of their age or location.

Helping us understand how to move toward more conservative use of energy, we have been joined by John Bennett, worldwide director for Data Center Transformation Solutions at HP. Thank you, John.

Bennett: My pleasure, Dana. Thank you.

Gardner: We've also been joined by Ian Jagger, worldwide marketing manager for Data Center Services. Thank you, Ian.

Jagger: You are very welcome, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Gain more insights into data center transformation best practices by downloading free whitepapers at http://www.hp.com/go/dctpodcastwhitepapers.

Transcript of a sponsored BriefingsDirect podcast on strategies for achieving IT energy efficiency. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.