
Wednesday, April 07, 2010

Well-Planned Data Center Transformation Effort Delivers IT Efficiency Paybacks, Green IT Boost for Valero Energy

Transcript of a BriefingsDirect podcast on how upgrading or building new data centers can address critical efficiency, capacity, power, and cooling requirements and concerns.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the huge drive for improvement around enterprise data centers. Many enterprises, if not nearly all, are involved nowadays with some level of data-center transformation either in the planning stages or in outright execution. The heightened activity runs the gamut from retrofitting and designing new data centers to then building and occupying them.

We're seeing many instances where numerous data centers are being consolidated into a powerful core few, and where completely green-field data centers -- with modern design and facilities -- are coming online.

These are, by no means, trivial projects. They often involve a tremendous amount of planning and affect IT, facilities, and energy planners, as well as the business leadership and line of business managers. The payoffs are potentially huge, as we'll see, from doing data center design properly, but the risks are also quite high, if things don't come out as planned.

The latest definition of the data center focuses on being what's called fit-for-purpose: using best practices and assessments of existing assets, and correctly projecting future requirements, to get that data center just right -- productive, flexible, efficient, well-understood, and well-managed.

The goal through these complex undertakings at these data centers is to radically improve how IT can deliver its services and be modern, efficient, and flexible.



Today, we're going to examine the lifecycle of data-center design and fulfillment through migration and learn about some of the payoffs when this goes as planned. We're going to learn more about a successful project at Valero Energy Corp.

We're here with two executives from Hewlett-Packard to look at proper planning and data center design, as well as build and migration. And we'll learn from an IT leader at Valero how they managed their project.

Please join me in welcoming our guests today. We're here with Cliff Moore, Americas PMO Lead for Critical Facilities Consulting at HP. Welcome to the show, Cliff.

Cliff Moore: Thanks, Dana.

Gardner: We're also here with John Bennett, Worldwide Director of Data Center Transformation Solutions at HP. Hello, John.

John Bennett: Hi, Dana.

Gardner: We're also here with John Vann, Vice President of Technical Infrastructure and Operations at Valero Energy Corp. Welcome to the show, John.

John Vann: Hello, Dana. Thanks a lot.

Gardner: Let's go to you, John Bennett. Tell us why data center transformation is at an inflection point, where data centers are in terms of their history, and what the new requirements are. It seems to be somewhat of a perfect storm: there's a need to move, and the status quo really is no longer acceptable?

Modern and efficient

Bennett: You're right on that front. I find it just fascinating that if you had spoken four years ago and dared to suggest that energy, power, cooling, facilities, and buildings were going to be a dominant topic with CIOs, you would have been laughed at. Yet, that's definitely the case today, and it goes back to the point you made about IT being modern and efficient.

Data-center transformation, as we've spoken about before, really is about not only significantly reducing cost to an organization, not only helping them shift their spending away from management and maintenance and into business projects and priorities, but also helping them address the rising cost of energy, the rising consumption of energy and the mandate to be green or sustainable.

The issues that organizations have in trying to address those mandates, of course, is that the legacy infrastructure and environments they have, the applications portfolio, the facilities, etc., all hinder their ability to execute on the things they would like to do.

Data-center transformation tries to take a step back, assess the data center strategy and the infrastructure strategy that's appropriate for a business, and then figure how to get from here to there. How do you go from where you are today to where you need to be?

It turns out that one of the things that gets in the way, both from a cost perspective and from a business-support perspective, is the data centers themselves. Customers can find themselves, as HP did, having a very large number of data centers. We had 85 around the world, because we grew through acquisition, we grew organically, and we had data centers for individual lines of business.

We had data centers for individual countries and regions. When you added it up, we had 85 facilities and innumerable server rooms, all of them requiring administrative staff, data center managers, and a lot of overhead. As part of our own IT transformation effort, we've brought that down to six.

You have organizations that discover that the data centers they have aren't capable of meeting their future needs. One wag has characterized this as the "$15 million server," where you keep needing to grow and support the business. All of a sudden, you discover that you're bursting at the seams.

Or, you can be in California or the U.K. The energy supply they have today is all they'll ever have in their data center. If they have to support business growth, they're going to have to deal with it by addressing their infrastructure strategies, but probably also by addressing their facilities. That's where facilities really come into the equation and have become a top-of-mind issue for CIOs and IT executives around the world.

Gardner: John, it also strikes me that the timing is good, given the economic cycle. The commercial market for land and facilities is a buyer's market, and that doesn’t always happen, especially if you have capacity issues. You don’t always get a chance to pick when you need to rebuild and then, of course, money is cheap nowadays too.

Bennett: If you can get to it.

Gardner: The capital markets are open for short intervals.

Signs of recovery

Bennett: We certainly see, and hope to see, signs of recovery here. Data center location is an interesting conversation, because of some of the factors you named. One of the things that is different today than even just 10 years ago is that the power and networking infrastructure available around the world is so phenomenal, there is no need to locate data centers close to corporate headquarters.

You may choose to do it, but you now have the option to locate data centers in places like Iceland, because you might be attracted to the natural heating of their environment. Of course, you might have volcano risk.

You have people who are attracted to very boring places, like the center of the United States, which don't have earthquakes, hurricanes, wildfires and things that might affect facilities themselves. Or, as I think you'll discover with John at Valero, you can choose to build the data center right near corporate headquarters, but you have a lot of flexibility in it.

The issue is not so much access to capital markets as it is that any facilities project is going to have to go through not just the senior executives of the company, but probably the board of directors. You'll need a strong business case, because you're going to have to justify it financially. You're going to have to justify it as an opportunity cost. You're going to have to justify it in terms of the returns on investment (ROI) expected by the business, as they make choices about how to manage and source funds as well.

So, it's a good time from the viewpoint of land being cheap, and it might be a good time in terms of business capital being available. But it might not be a good time in terms of investment funds being available, as many banks remain more reluctant to lend than it appears.

The majority of the existing data centers out there today were built 10 to 15 years ago, when power requirements and densities were a lot lower.



Gardner: The variables now for how you would consider, plan, and evaluate are quite different than even just a few years ago.

Bennett: It's certainly true, and I probably would look to Cliff to say more about that.

Gardner: Cliff Moore, what's this notion of fit-for-purpose, and why do you think the variables for deciding to move forward with data center transformation or redesign activities are different nowadays? Why are we in a different field, in terms of decisions around these issues?

Moore: Obviously, there's no such thing as a one-size-fits-all data center. It's just not that way. Every data center is different. The majority of the existing data centers out there today were built 10 to 15 years ago, when power requirements and densities were a lot lower.

No growth modeling

It's also estimated that, at today's energy cost, the cost of running a server from an energy perspective is going to exceed the cost of actually buying the server. So that's a major consideration. We're also finding that many customers have done no growth modeling whatsoever regarding their space, power, and cooling requirements for the next 5, 10, or 15 years -- and that's critical as well.
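[Editor's note: To make both of those points concrete, here is a minimal back-of-the-envelope sketch. Every figure in it -- server wattage, PUE, electricity tariff, service life, and the growth rate used for the 5-, 10-, and 15-year projections -- is an illustrative assumption, not a number from this discussion.]

```python
# Illustrative sketch only: all figures below are assumptions.

HOURS_PER_YEAR = 8760

def lifetime_energy_cost(watts, pue, dollars_per_kwh, years):
    """Energy cost of one server over its service life, with the
    facility overhead (cooling, distribution) captured by PUE."""
    kwh_per_year = watts / 1000.0 * HOURS_PER_YEAR * pue
    return kwh_per_year * dollars_per_kwh * years

# A 500 W server in a PUE-2.0 facility at $0.10/kWh over 4 years
# costs roughly as much in energy as a commodity server costs to buy.
energy = lifetime_energy_cost(watts=500, pue=2.0, dollars_per_kwh=0.10, years=4)
print(f"Lifetime energy cost: ${energy:,.0f}")  # ~$3,504

# Simple growth modeling: project a power envelope 5, 10, and 15 years out.
def project_power(kw_today, annual_growth, years):
    return kw_today * (1 + annual_growth) ** years

for horizon in (5, 10, 15):
    kw = project_power(kw_today=1000, annual_growth=0.11, years=horizon)
    print(f"{horizon}-year requirement: {kw:,.0f} kW")
```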

Gardner: We should explain the notion of fit for purpose upfront for those folks who might not be familiar with it.

Bennett: With fit for purpose, the question in mind is the strategic one: what data center strategy is right for a particular organization? If you think about the business services that are being provided by IT, it's not only what those business services are, but how they should be sourced. If they're being provided out of entity-owned data centers, how many and where? What's the business continuity strategy for those?

It needs to take into account, as Cliff has highlighted, not only what I need today, but that buildings typically have an economic life of 15 to 25 years. Technology life cycles for particular devices are two or three years, and we have ongoing significant revolutions in technology itself, for example, as we moved from traditional IT devices to fabric infrastructures like converged infrastructure.

You have these cycles upon cycles of change taking place. The business forecasts drive the strategy and part of that forecasting will be sizing and fit for purpose. Very simply, are the assets I have today capable of meeting my needs today, and in my planning horizon? If they are, they’re fit for purpose. If they’re not, they’re unfit for purpose, and I'd better do something about it.

Gardner: We're in a bit of a time warp, Cliff. It seems that, if many data centers were built 15 years ago and we still don't have a sense of where we'll be in 5 or 10 years, we're caught between a past that no longer fits and a future we can't quite know. How do you help people smooth that out?

When a customer is looking to spend $20 million, $50 million, or sometimes well over $100 million on a new facility, you’ve got to make sure that it fits within the strategic plan for the business.



Moore: Obviously, we’ve got to find out first off what they need -- what their space, power, and cooling requirements are. Then, based on the criticality of their systems and applications, we quickly determine what level of availability is required as well. This determines the Uptime Institute Tier Level for the facility. Then, we go about helping the client strategize on exactly what kinds of facilities will meet those needs, while also meeting the needs of the business that come down from the board.

When a customer is looking to spend $20 million, $50 million, or sometimes well over $100 million on a new facility, you’ve got to make sure that it fits within the strategic plan for the business. That's exactly what boards of directors are looking for, before they will commit to spending that kind of money.

Gardner: What does HP bring to the table? How do you start a process like this and make it a lifecycle, where that end goal and the reduce risk play out to get the big payoffs that those boards of directors are interested in?

Moore: Well, our group within Critical Facilities Services actually comes to the table with the company's executives and looks not only at their space, power, and cooling requirements, but at the strategies of the business. What are the criticality levels of the various mission-critical applications that they run? What are their plans for the future? What are their merger and acquisition plans, and so on? We help them collaboratively develop that 10- to 15-year strategy for the data center's future.

Gardner: It was pointed out earlier that one size doesn't fit all. From your experience, Cliff, what are the number one or two reasons that you’re seeing customers go after a new design for the data center, and spend that large sum of money?

Power and cooling

Moore: Probably the biggest reason we're seeing today is power and cooling. Of course, cooling goes along with power. We see more of that than anything else. People are simply running out of power in their data centers. The facilities that were built 5, 10, or 15 years ago just do not support the levels of density in power and cooling that clients are asking for going into the future, specifically for blades and higher levels of virtualization.

Gardner: So higher density requires more energy to run the servers and more energy to cool them, but you get higher efficiency, utilization, and productivity as the end result, in terms of delivering on the requirements. Is there a way of designing the data center that allows you to cut costs and increase capacity, or is that asking too much of this process?

Moore: There certainly are ways to do that. We look at all of those different ways with the client. One of the things we do, as part of the strategic plan, is help the client determine the best locations for their data centers, based, for instance, on the opportunity to draw free cooling from the environment. It was mentioned that Iceland might be a good location. You'd get a lot of free cooling there.

Gardner: What are some of the design factors? What are the leading factors that people need to look at? Perhaps, you could start to get us more familiar with Valero and what went on with them in the project that they completed not too long ago.

Moore: I'll defer to John for some of that, but the leading factors we're seeing, again, are space, power, and cooling, coupled with the tier-level requirement. What is the availability requirement for the facility itself? Those are the biggest factors we're seeing.

Some data centers we see out there use the equivalent of half of a nuclear power plant to run.



Marching right behind that is energy efficiency. As I mentioned before, the cost of energy is exorbitant, when it comes to running a data center. Some data centers we see out there use the equivalent of half of a nuclear power plant to run. It's very expensive, as I'm sure John would tell you. One of the things that Valero is accomplishing is lower energy costs, as a result of building its own facility.

Gardner: Before we go to Valero, I have one last question on the market and some of the drivers. What about globalization? Are we seeing emerging markets, where there is going to be many more people online and more IT requirements? Is that a factor as well?

Moore: Absolutely.

Bennett: There are a number of factors. First of all, you have increasing access to the Internet and the increasing generation of complex information types. People aren't just posting text anymore, but pictures and videos. And, they’re storing those things, which is feeding what we characterize as an information explosion. The forecast for storage consumption over the next 5 to 10 years is just phenomenal.

Perfect storm

On top of that, you have more and more organizations and businesses providing more of their business services through IT-based solutions. You talked about a perfect storm earlier with regard to the timing for data centers. Most organizations are in a perfect storm today of factors driving the need for ongoing investments and growth out of IT. The facilities have got to help them grow, not limit their growth.

Gardner: John Vann, you’re up. I'm sorry to have left you on the sidelines for so long. Tell us about Valero Energy Corp., and what drove you to bite off this big project of data-center transformation and redesign?

Vann: Thanks a lot, Dana. Just a little bit about Valero. Valero is a Fortune 500 company in San Antonio, Texas, and we're the largest independent refiner in North America. We produce fuel and other products from 15 refineries, and we have 10 ethanol plants.

We market products in 44 states with a large distribution network. We're also into alternative fuels and renewables, and we're one of the largest ethanol producers. We have a wind farm in northern Texas, around Amarillo, that generates enough power to run our McKee refinery.

So what drove us to build? We started looking at building in 2005. Valero grew through acquisitions. Our data center, as Cliff and John have mentioned, was no different than others. We began to run into power, space, and cooling issues.

Even though we were doing a lot of virtualization, we still couldn't keep up with the growth. We looked at remodeling and also expanding, but the disruption and risk to the business was just too great. So, we decided it was best to begin to look for another location.

Our existing data center is on the headquarters campus, which is not the best place for a data center, because it's inside one of our office complexes. Therefore, we have water and other potentially disruptive issues close to the data center -- and that was concerning, given where the data center is located.

We began to look for alternative places. We were also really fortunate in the timing of our data center review. HP was just beginning its build of the six big facilities that it ended up building or remodeling, so we were able to get good HP internal expertise to help us as we began our design-and-build decisions for our data center.

The problem with collocation back in those days of 2006, 2007, and 2008, was that there was a premium for space.



So, we really were fortunate to have experts give us some advice and counsel. We did look at collocation. We also looked at other buildings, and we even looked at building another data center on our campus.

The problem with collocation back in those days of 2006, 2007, and 2008, was that there was a premium for space. As we did our economics, it was just better for us to be able to build our own facility. We were able to find land northwest of San Antonio, where several data centers have been built. We began our own process of design and build for 20,000 square feet of raised floor and began our consolidation process.

Gardner: What, in your opinion, was more impactful -- the planning, the execution, the migration? I guess the question should be, what ended up being more challenging than you expected initially? Where do you think, in hindsight, you’d put more energy and more planning, if you had to do it all again?

Solid approach

Vann: I think our approach was solid. We had a joint team of HP and the Valero Program Management Office. It went really well the way that was managed. We had design teams. We had people from networking architecture, networking strategy and server and storage, from both HP and Valero, and that went really well. Our construction went well. Fortunately, we didn’t have any bad weather or anything to slow us down; we were right on time and on budget.

Probably the most complex was the migration, and we had special migration plans. We got help from the migration team at HP. That was successful, but it took a lot of extra work.

If we had one thing to do over again, we would probably change the way we did our IP renumbering. That was a very complex exercise, and we didn’t start that soon enough. That was very difficult.

Probably we'd put more project managers on managing the project, rather than using technical people to manage the project. Technical folks are really good at putting the technology in place, but they really struggle at putting good solid plans in place. But overall, I'd just say that migration is probably the most complex.

Power and cooling are just becoming an enormous problem.



Gardner: Thank you for sharing that. How old was the data center that you wanted to replace?

Vann: It's about seven years old and had been remodeled once. You have to realize Valero was in a growth mode and acquiring refineries. We now have 15 refineries. We were consolidating quite a bit of equipment and applications back into San Antonio, and we just outgrew it.

We were having a hard time keeping it redundant and keeping it cool. It was built with one foot of raised floor and, with all the mechanical equipment inside the data center, we lost square footage.

Gardner: Do you agree, John, that some of the variables or factors that we discussed earlier in the podcast have changed, say, from just six or seven years ago?

Vann: Absolutely. Power and cooling are just becoming an enormous problem, and most of this is because virtualization, blades, and other technologies that you put in a data center just run a little hotter and take up extra power. It's pretty complex to balance your data center with cooling and power, along with UPSs, generators, and things like that. It just becomes really complex. So, building a new one really put us in the forefront.

Gardner: Can you give us some sense of the metrics now that this has gone through and been completed? Are there some numbers that you can apply to this in terms of the payback and/or the efficiency and productivity?

Potential problems

Vann: Not yet. We've seen some recent things happen here on campus to our old data center, because of weather and just some failures within the building. We’ve had some water leaks that have actually run onto the data center floor. So that's a huge problem that would have flooded our production data center.

You can see the aging data center beginning to fail. We've had some air-conditioner failures and some coolant leaking. I think our timing was just right. Even though we had been maintaining the old data center, things were just beginning to fail.

Gardner: So, certainly, there are some initial business continuity benefits there.

Vann: Exactly.

Gardner: Going back to Cliff Moore. Does anything you hear from John Vann light any bulbs about what other people should be considering as they step up to the plate on these data center issues?

Moore: They certainly should consult John's crystal ball regarding the issues he's had in his old data center, and move quickly. Don’t put it off. I tell people that these things do happen, and they can be extremely costly when you look at the cost of downtime to the business.

You’ve got to know precisely what you are going to move, exactly what it's going to look like half a year or a year from now when you actually move it, and focus very heavily on the dependencies between all of the applications.



Gardner: Getting started, we talked about migration. It turns out that we did another podcast that focused specifically on data-center migration, and we can refer folks to that easily. What is it about planning and getting started, as you say, when people recognize that time might not be on their side? What are some of the initial steps, and how might they look to HP for some guidance?

Moore: We focus entirely on discovery early on. You’ve got to know precisely what you are going to move, exactly what it's going to look like half a year or a year from now when you actually move it, and focus very heavily on the dependencies between all of the applications, especially the mission-critical applications.

Typically, a move like John’s requires multiple of what we call move groups. John’s company had five or six, I believe. You simply cannot divide your servers up into these move groups without knowing what you might break by dividing them up. Those dependencies are critical, and that's probably the most common failing point.

Vann: We had five move groups. Knowing what applications go with what is a real chore in making sure that you have the right set of servers you can move on a particular weekend. We also balanced it with downtime from the end customers, so we’re going to make sure that we were not in the middle of a refinery turnaround or a major closing. Being able to balance those weekends, so we had enough time to be able to make the migration work was quite a challenge.
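[Editor's note: As a concrete illustration of the dependency problem Cliff and John describe, the sketch below treats shared-application dependencies as an undirected graph and derives candidate move groups as its connected components -- servers joined by a dependency must travel on the same weekend. The server names and dependencies are hypothetical, not Valero's actual inventory.]

```python
# Hypothetical inventory: servers paired here share an application,
# so they must migrate in the same move group.
from collections import defaultdict

dependencies = [
    ("erp-db", "erp-app"),
    ("erp-app", "erp-web"),
    ("hist-01", "hist-02"),   # refinery historians
    ("mail-01", "mail-02"),
]

graph = defaultdict(set)
for a, b in dependencies:
    graph[a].add(b)
    graph[b].add(a)

def move_groups(graph):
    """Connected components of the dependency graph: every server
    reachable through a shared application lands in one group."""
    seen, groups = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:
            n = stack.pop()
            if n in group:
                continue
            group.add(n)
            stack.extend(graph[n] - group)
        seen |= group
        groups.append(sorted(group))
    return groups

for i, group in enumerate(move_groups(graph), 1):
    print(f"Move group {i}: {group}")
```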

Gardner: John Vann, did you take the opportunity to not only redesign and upgrade your data center facilities, but at the same time, did you modernize your infrastructure or your architecture? You said you did quite a bit with virtualization already, was this a double whammy in terms of the facilities as well as the architecture?

Using opportunities

Vann: Yes. We took the opportunity to upgrade the network architecture. We also took the opportunity to go further with our consolidation. We recently finished moving servers from refineries into San Antonio. We took the opportunity to do more consolidation and more virtualization, upgrade our blade farm, and just do a lot more work around improving the overall infrastructure for applications.

Gardner: I'd like to take that back to John Bennett. I imagine you're seeing that one of the ways you can rationalize the cost is that you're not just repaving a cow path, as it were. You're actually re-architecting and therefore getting a lot greater efficiency, not only from the new facility, but from the actual reconstruction of your architecture, or the modernization and transformation of your architecture.

Bennett: There are several parts to that, and getting your hands around it can really extend the benefits you get from these kinds of projects, especially if you are making the kind of investment we are talking about in new data center facilities. Modernizing your infrastructure brings energy benefits in its own right, and it enhances the benefits of your virtualization and consolidation activities.

It can be a big step forward in terms of standardizing your IT environment, which is recommended by many industry analysts now in terms of preparing for automation or to reduce management and maintenance cost. You can go further and bring in application modernization and rationalization to take a hard look at your apps portfolio. So, you can really get these combined benefits and advantages that come from doing this.

We certainly recommend that people take a look at doing these things. If you do some of these things, while you're doing the data center design and build, it can actually make your migration experience easier. You can host your new systems in the new data center and be moving software and processes, as opposed to having to stage and move servers and storage. It's a great opportunity.

It's a great chance to start off with a clean networking architecture, which also helps both with continuity and availability of services, as well as cost.



John talked about dealing with the IP addresses, but the physical networking infrastructure in a lot of old data centers is a real hodgepodge that's grown organically over years. I guess you can blame some of our companies for having invented Ethernet a long time ago. But, it's a great chance to start off with a clean networking architecture, which also helps both with continuity and availability of services, as well as cost. They all come in there.

I actually have a question for John Vann as well. Because they had a pretty strong focus on governance, especially in handling change requests, I'm hoping he might talk a little bit about that process during the design-and-build project.

Vann: Our goal was to hold scope creep to a minimum. We had an approval process, where there had to be a pretty good reason for a change or for a server not to move. We fundamentally used the word "no" as much as we could, to make sure we got the right applications in the right place. Any kind of approval had to go through me. If I disagreed, and they still wanted to escalate it, we went to my boss. Escalation was rarely used. We had a pretty strong change management process.

Gardner: I can see where that would be important right along the way, not something you want to think about later or adding onto the process, but something to set up right from the beginning.

We’ve had a very interesting discussion about the movement in enterprise data centers, where folks are doing a lot more transformation -- moving and relocating their data centers, modernizing them, and finding ways to eke out efficiencies -- but also trying to reduce the risk of moving in the future and looking at those all-important power and energy consumption issues as well.

I want to thank our guests. We've been joined today by Cliff Moore, Americas PMO Lead for Critical Facilities Consulting at HP. Thank you, Cliff.

Moore: Thanks, Dana. Thanks, everybody.

Gardner: John Bennett, Worldwide Director, Data Center Transformation Solutions at HP. Thank you, John.

Bennett: Thank you, Dana.

Gardner: And lastly, John Vann, Vice President, Technical Infrastructure and Operations at Valero Energy. John, I really appreciate your frankness in sharing your experience, and I certainly wish you well in all of that.

Vann: Thank you very much, Dana. I appreciate it.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on how upgrading or building new data centers can address critical efficiency, capacity, power, and cooling requirements and concerns. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


Thursday, February 11, 2010

Smart Grid for Data Centers Better Manages Electricity to Slash IT Energy Spending, Frees Up Wasted Capacity

Transcript of a BriefingsDirect podcast on implementing energy efficiency using smart grids in enterprise data centers to slash costs and gain added capacity.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on gaining control over energy use and misuse in enterprise data centers. More often than not, very little energy capacity analysis and planning is being done on data centers that are five years old or older. Even newer data centers don’t always gather and analyze the available energy data being created amid all of the components.

Nowadays, smarter, more comprehensive energy planning tools and processes are being directed at this problem. It’s a lifecycle approach, taking data centers from analysis through to full automation benefits. Automation software for capacity planning and monitoring has been designed and improved to best match long-term energy needs and resources in ways that cut total cost, while gaining the truly available capacity from old and new data centers.

These so-called smart grid solutions jointly cut data center energy costs, reduce carbon emissions, and can dramatically free up capacity from overburdened or inefficient infrastructure.

Such data gathering, analysis, and planning can break the inefficiency cycle that plagues many data centers, where hotspots are mismatched with cooling, and underused, unneeded servers burn energy needlessly. Done well, solutions such as Hewlett-Packard's (HP) Smart Grid for Data Center can increase capacity by 30 to 50 percent just by gaining control over energy use and misuse.

We're here today with two executives from HP to delve more deeply into the notion of Smart Grid for Data Center. Please join me in welcoming Doug Oathout, Vice President of Green IT, Enterprise Servers and Storage at HP. Welcome, Doug.

. . . The drivers behind data center transformation are customers who are trying to reduce their overall IT spending . . .



Doug Oathout: Thank you, Dana.

Gardner: We're also here with John Bennett, Worldwide Director of Data Center Transformation Solutions at HP. Welcome back to the show, John.

John Bennett: Thank you very much, Dana. Glad to be here.

Gardner: John, let me start with you, if you don’t mind. Let’s set up a little bit of the context for this whole energy lifecycle approach. It’s not isolated. It’s part of a larger set of trends that we loosely call data center transformation (DCT). What’s going on with DCT, and how important a role do these energy conservation approaches play?

Bennett: DCT, as we’ve discussed before, Dana, is focused on three core concepts, and behind them, energy is another key focus for that work. But the drivers behind data center transformation are customers who are trying to reduce their overall IT spending, either flowing it to the bottom line or, in most cases, trying to shift that spending away from management and maintenance and onto business projects, business priorities, and innovation in support of the business and business growth.

We also see increasing mandates to improve sustainability. It might be expressed as energy efficiency, as handling energy costs more effectively, or as addressing green IT. The issue that customers have in executing on this, of course, is that the facilities, their people, their infrastructure and applications, everything they are spending and doing today -- if they don’t change it -- can get in the way of realizing these objectives.

Data center strategy

So, DCT is really about helping customers build out a data center strategy and an infrastructure strategy that are aligned to their business plans, goals, and objectives. That infrastructure might be a traditional shared infrastructure model. It might be a fabric infrastructure model, of which HP’s converged infrastructure is probably the best and most complete example in the marketplace today. And, it may indeed be moving to private cloud or, as I believe, some combination of the above for a lot of customers.

The secret is doing so through an integrated roadmap of data-center projects, like consolidation, business continuity, energy, and such technology initiatives as virtualization and automation.

Energy has definitely been a major issue for data-center customers over the past several years. The increased computing capability and demand has increased the power needed in the data center. Many data centers today weren’t designed for modern energy consumption requirements. Even data centers that were designed even five years ago are running out of power, as they move to these dense infrastructures. Of course, older facilities are even further challenged. So, customers can address energy by looking at their facilities.

More recently, we in the industry have been focused on the infrastructure and layout of the data center. Increasingly, we're finding that we need to look at management -- managing the infrastructure and managing the facilities in order to address the energy cost issues and the increasing role of regulation and to manage energy related risk in the data center.

That brings us not only to energy as a key initiative in DCT, but on Smart Grid for Data Center as a key way of managing it effectively and dynamically.

I think the best control of energy is probably better described as built-in and not layered on.



Gardner: You know, John, it’s interesting. When I hear you describe this, it often sounds as if you are describing security. I know that sounds odd, but security has some of the same characteristics. You can’t look at it individually. It needs to be taken in as a comprehensive view that there are risks associated, and that it becomes management-intensive. Maybe we can learn from the way in which people approach security. Perhaps, they should also be thinking along similar lines when they approach energy as a problem?

Bennett: That’s an interesting analogy, and the point I would add to that, Dana, is that the best security is built-in, not layered on. I think the best control of energy is probably better described as built-in and not layered on.

Gardner: Let’s go to Doug. Doug, tell me what the problem is out there. What are folks facing, and how inefficient are their data centers really? What kind of inefficiency is common now?

Oathout: Dana, what we're really talking about is a problem around energy capacity in a data center. Most IT professionals or IT managers never see an energy bill from the utility. It's usually handled by the facility. They never really concentrate on solving the energy consumption problem.

Problem area

Where problems have arisen in the past is when a facility person says that they can’t deploy the next server or storage unit, because they're out of capacity to build that new infrastructure to support a line of business. They have to build a new data center. What we're seeing now is customers starting to peel the onion back a little bit, trying to find out where the energy is going, so they can increase the life of their data center.

To date, very few clients have deployed comprehensive software strategies or facility strategies to corral this energy consumption problem. Customers are turning their focus to how much energy is being absorbed by what and, then, how they get the capacity of the data center increased, so they can support the new workloads.

The way to do that is to get the envelope cleared up so we know how much is left. What we're seeing today is that software, hardware, and people need to come together in a process that John described in DCT, an energy audit, or energy management.

All those things need to come together, so that customers can start taking apart their data center, from an analysis perspective, to find out where they are either over-provisioned or under-provisioned from a capacity standpoint, and where all the energy is going. Then, they can take some steps to get more capability out of their current solution or out of their installed equipment by measuring and monitoring the whole environment.

Gardner: John, we’ve already done a podcast on converged infrastructure, and I don’t want to belabor that point too much, but it strikes me that going about this data center energy exercise in alignment with a converged-infrastructure approach would make a lot of sense. We're starting to see commonality in ways we hadn’t seen before.

Bennett: There’s very strong commonality there, and I’ll ask Doug to address that in a minute. When I described the best energy solution as being built-in, that really captured the essence of what we're doing with converged infrastructure. It’s not only integrating the elements of the data center, but better instrumenting them from a management and automation perspective. One of the key drivers for making management and automation decisions about provisioning and workload locations will be energy cost and consumption. Doug?

Oathout: Converged infrastructure is really about deploying IT in the optimal way to support a workload. We talk about energy and energy management. You're talking about doing the same thing. You want to deploy that workload to a server and storage networking environment that will do the amount of work you need with the least amount of energy.

The concept of converged infrastructure applies to data center energy management. You can deploy a particular workload onto an IT infrastructure that is optimally designed to run efficiently, and to continue running efficiently, so that you know you're getting the most productive work from the least energy, with the most energy-efficient equipment and infrastructure sitting underneath it.

An example of this is identifying what type of application you want to run on your infrastructure and then deploying the right amount of resources to run that application. You're not deploying more and not deploying less, but deploying the optimal amount of resources, so that you know you're getting the best productivity for the energy budget that you have.

Adding resources

As that workload grows over time, you have the capability built into the software and into the monitoring, so that you can add more resources to that pool to run that application. You're not over-provisioning from the start and you're not under-provisioning, but you're getting the optimal settings over time. That's what's really important for energy, as well as efficiency, as well as operating within a data center environment.

You want to keep it optimal over time. You don’t want to set up silos to start. You don’t want to set up over-provisioning to start. You want to be able to optimally run your infrastructure long-term. Therefore, you must have tools, software, and hardware that is not only efficient, but can be optimized and run in an optimized way over a long period of time.
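[Editor's note: Here is a minimal sketch of the "living and breathing" pool Doug describes -- capacity grows and shrinks with demand instead of being over-provisioned up front. The utilization thresholds and server figures are assumptions for illustration, not HP product behavior.]

```python
# Illustrative only: thresholds and wattages are assumed figures.

class ResourcePool:
    """Keep a pool of identical servers sized to its workload,
    adding capacity when hot and returning it when idle."""

    def __init__(self, servers=4, watts_per_server=400, units_per_server=100):
        self.servers = servers
        self.watts_per_server = watts_per_server
        self.units_per_server = units_per_server

    def rebalance(self, demand_units):
        utilization = demand_units / (self.servers * self.units_per_server)
        if utilization > 0.80:                         # running hot: grow
            self.servers += 1
        elif utilization < 0.40 and self.servers > 1:  # idle: shrink
            self.servers -= 1
        return self.servers * self.watts_per_server    # current power draw

pool = ResourcePool()
for demand in (250, 330, 410, 300, 150):               # workload units over time
    watts = pool.rebalance(demand)
    print(f"demand={demand}: servers={pool.servers}, draw={watts} W")
```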

Gardner: Another trend in the data center nowadays is moving toward shared-services approaches, viewing yourself as a service provider, and billing based on these workloads and on the actual demand. It seems to me that energy needs to fit into that as well. Perhaps, as we think about private cloud, where we’ve got elasticity of resources, energy needs to be elastic, along with the workload allocation. So, quickly, John, what about the notion of shared services and how energy plays into that as well as this private cloud business?

Bennett: It definitely plays, as both you and Doug have highlighted. As one moves into a private cloud model, it accentuates the need to have a better real-time perspective on energy consumption and on what devices consume and are capable of, in order to manage the assets of the private cloud efficiently and effectively. Whether you have a private cloud or are providing a broader set of services, you clearly want to minimize your own cost structures. That's going to call for good energy management, as well as other things. Doug?

Oathout: Yeah. With a private cloud implementation supported by a converged infrastructure, you want to bring online the amount of resources you need for an application, but you also want to have resources available to run a separate set of applications and bring those online as well.

The living and breathing of a data center is really what we're talking about with a private-cloud infrastructure on a converged infrastructure.



You're managing a group of resources as a pool, so that over time you can manage up resources to run a particular application and then manage them down and put the resources back into pool, so they can be deployed for another application.

The living and breathing of a data center is really what we're talking about with a private-cloud infrastructure on a converged infrastructure. That living and breathing capability is built within the processes and within the infrastructure, so that you can run applications in an optimal way.

Gardner: It's my understanding that some of the public-cloud providers nowadays have built their infrastructure with conservation in mind, because every penny counts when you're in a lower-margin shared service and providing services business. They can track every watt. They know where it's all going. They’ve built for that.

Now, what about some of these older organizations, five years plus? What can be done to retrofit what's out there to be more energy efficient? How does this work for these older sites?

Oathout: The key to that, Dana, is to understand where the power is going. One of the first things we recommend to a client is to look at how much power is being brought into a data center and then where is it going. You can easily do that through a facility survey or a facility workshop, but the other thing you want to look at is your IT. As you’re upgrading your IT, all the new IT equipment -- whether it be servers or storage or networking -- has power management built into it and has reporting built into it.

Collect information

What you want to do is start collecting that information through software to find out how much power is being absorbed by the different pieces of IT equipment and associate that with the workloads that are running on them. Then, you have a better view of what you're doing and how much energy you're using.

Then, you can do some analysis and use some applications like HP SiteScope to do some performance analysis, to say, "Could I match that workload to some other platform in the infrastructure or am I running it in optimal way?"

Over time, you can migrate some of your older legacy workloads to more efficient, newer IT equipment, and thereby build up a buffer in your data center, so that you can then deploy new workloads in that same data center.

It's really using a process or an assessment to figure out how much energy you're using and where it's going and then deploying to this newer equipment with all the instrumentation built in, along with software to understand where your energy is going.

It's the way to get started, but it's also the way to keep yourself optimizing over time in an automated way. You use that software to your benefit, so that you're freeing up capacity to support the new workloads that the business needs.
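[Editor's note: The sketch below illustrates the buffer-building arithmetic Doug describes: per-workload power readings, of the kind reported by the instrumentation in the equipment, show how much of the envelope a legacy-to-new migration would hand back. All numbers are hypothetical.]

```python
# Hypothetical readings; real figures would come from the power
# reporting built into the servers and storage, not from this table.
inventory = [
    # (workload, host generation, average draw in watts)
    ("payroll",  "legacy", 620),
    ("crm",      "legacy", 580),
    ("web-tier", "new",    310),
]

NEW_GEN_WATTS = 320   # assumed draw of an equivalent new-generation host

def freed_capacity(inventory):
    """Watts returned to the data center envelope if every legacy
    workload were migrated onto new-generation equipment."""
    return sum(watts - NEW_GEN_WATTS
               for _, generation, watts in inventory
               if generation == "legacy")

print(f"Headroom freed by migration: {freed_capacity(inventory)} W")  # 560 W
```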

The energy curve today is growing at about 11 percent annually, and that's the amount IT is spending on energy in a data center.



Bennett: That's really key, Doug, as a concept, because the more you do at the infrastructure level, the less you need to change the facilities themselves. Of course, the issue with facilities-related work is that it can affect quality of service, cause outages, and end up costing you a pretty penny, if you have to retrofit or design new data centers.

Gardner: As I understand it now, we're talking about an initial payback, which would be identifying waste, hotspots, and right cooling approaches, getting some added capacity as a result, while perhaps also cutting cost. But, over time, there's a separate distinct payback, which is that you can control your operational costs and keep them at a lower percentage of your total cost of IT spend. Does that sound about right?

Oathout: That is right, Dana. You can actually decrease the slope of the energy curve. The energy curve today is growing at about 11 percent annually, and that's the amount IT is spending on energy in a data center.

Over time, if you implement more efficient IT, you can decrease that slope to something much less than 11 percent growth. Also, as you increase the capacity in your data center within the same power envelope, you start getting a much more efficient infrastructure, so you're actually getting to run that new IT equipment on free energy, because you've freed up that energy from something else.

The idea of decreasing the slope, or decreasing your budget, is the start, but long-term you're going to get more workload for the same budget. You can say the same thing for the IT management budget as well. What you're trying to do is get more efficiency out of your IT and out of your energy budget to support future workloads.
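[Editor's note: A quick sketch of what "decreasing the slope" means in dollars. The 11 percent baseline comes from the discussion; the improved 4 percent slope and the $1 million starting budget are assumptions for illustration.]

```python
# The 11 percent baseline is from the discussion above; the 4 percent
# improved slope and $1M starting budget are assumed for illustration.
def project(budget, rate, years):
    return [budget * (1 + rate) ** y for y in range(years + 1)]

baseline = project(1_000_000, 0.11, 5)   # status-quo energy spend
improved = project(1_000_000, 0.04, 5)   # after efficiency work

for year, (b, i) in enumerate(zip(baseline, improved)):
    print(f"Year {year}: ${b:,.0f} vs ${i:,.0f} (saved ${b - i:,.0f})")
```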

Gardner: And, the insight that you gain from implementing these sensors and tracking and automation, the ability to employ capacity-planning software, can bring out some hard numbers that allow you to be more predictable in understanding what your energy requirements will be, regardless of whether you are growing, staying the same, or even if you need to downsize your company.

Those numbers, that visibility, can be applied to other asset allocations and important decisions in the enterprise, around such things as carbon taxes and caps, as well as facilities, and even thinking about alternative energy sources.

Different approaches

Oathout: There are a lot of different ways to use green IT. We’ve seen customers implement a consolidation of infrastructure. They took a number of servers, and the facilities associated with that server and storage environment, and minimized them down to a level that was very usable.

It gave the same service-level agreement (SLA) to their lines of business, and they received energy credits from governments. They could then use those energy credits for monetary reasons or for conservation reasons. We also see customers, as they make these environmental changes or policies, look for ways to better demonstrate to their clients that they are energy aware and energy efficient.

A lot of our clients use consolidation studies or energy efficiency studies as ways to show their clients that they are doing a very good job in their infrastructure and supporting them with the least possible environmental impact.

We see customers getting certificates, but also using energy consumption reductions as a way to show their clients that they’re being green or being environmentally friendly, just the same as you'd see a customer looking at a transportation company and how energy efficient they are in transporting goods. We see a lot of clients using energy efficiency in multiple ways.

Gardner: We've talked about Smart Grid for Data Centers several times. Now, let's drill down and describe exactly what it is. What are we talking about? What is HP offering in this category?

It's really about visualizing that data, so you can take action on it. Then, it's about setting up policies and automating those procedures to reduce the energy consumption or to manage energy consumption that you have in the data center.



Oathout: Smart Grid for Data Centers gives a CIO or a data-center manager a blueprint to manage the energy being consumed within their infrastructure. The first thing that we do with a Data Center Smart Grid is map out what is hooked up to electricity in the data center, everything from PDUs, UPSs, and air handlers to the IT equipment -- servers, networking, and storage. It's really understanding how that all works together and how the whole topology comes together.

The second thing we do is visualize all the data. It's very hard to say that this server, that server, or that piece of facilities equipment uses this much power and has this kind of capacity. You really need to see the holistic picture, so you know where the energy is being used and understand where the issues are within a data center.

It's really about visualizing that data, so you can take action on it. Then, it's about setting up policies and automating those procedures to reduce the energy consumption or to manage energy consumption that you have in the data center.

Today, our servers and our storage are much more efficient than the ones we had three or four years ago, but we also add the capability to power cap a lot of the IT equipment. Not only can you get an analysis that says, "Here is how much energy is being consumed," you can actually set caps on the IT equipment that says you can’t use more than this. Not only can you monitor and manage your power envelope, you can actually get a very predictable one by capping everything in your data center.

You know exactly how much the max power is going to be for all that equipment. Therefore, you can do much better planning. You get much more efficiency out of your data center, and you get more predictable results, which is one of the things that IT really strives for, from meeting an SLA to getting those predictable results, day in and day out.
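[Editor's note: The planning payoff of capping, sketched below: with caps enforced, the worst-case draw of a row is the sum of its caps rather than the sum of nameplate ratings, so the remaining headroom is guaranteed. The cap values and the 10 kW envelope are assumptions, not HP figures.]

```python
# Assumed caps and envelope, for illustration only.
rack_caps_watts = {
    "blade-enclosure-1": 4000,
    "blade-enclosure-2": 4000,
    "storage-array-1":   1500,
}

ENVELOPE_WATTS = 10_000   # power available to this row

worst_case = sum(rack_caps_watts.values())
headroom = ENVELOPE_WATTS - worst_case
print(f"Worst-case draw: {worst_case} W of {ENVELOPE_WATTS} W")
print(f"Guaranteed headroom for new equipment: {headroom} W")
```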

Mapping infrastructure

So, really, Data Center Smart Grid for the infrastructure is about mapping the infrastructure. It's about visualizing it to make decisions. Then, it's about automating and capping what you’ve got, so you have more predictable results and you're managing it, so that you're not having outages, you're not having problems in your data centers, and you're meeting your SLAs.

Gardner: John, I'm going to grasp for another analogy here. It sounds like, once again, we're up against governance. It's an important concept and topic when it comes to how to properly do IT, but now we're applying it to energy.

Bennett: That's just the reflection of the fact that for any organization looking to get the most value out of their IT organization, their infrastructure, and operations they need to address governance, as much as they need to address the business services they're providing, as much as they need to address the infrastructure with how they deliver it and how they manage things like energy and security in that environment. It's all connected then.

Gardner: I wonder if we have any examples of how this has worked in practice. Within HP, itself, I assume that you want to cut your energy bills as much as anyone else does, particularly in a down economy or when a growth pattern hasn’t quite kicked in fully. Are there any examples within HP or some customers or clients that you have worked with?

Oathout: In the HP example, our IT organization has gone from 85 data centers down to six. It has actually reduced the amount HP spends on IT from about 4 percent of our overall P&L down to about 2 percent. So, they've done a very good job consolidating and migrating the workload to a smaller set of facilities and a smaller set of infrastructure.

They're getting a huge floor saving capacity back, but are also getting a power saving of 66 percent, versus where they were two years ago.



They're now in the process of automating all that, so long-term we will have a much more predictable IT workload from an energy perspective. They're implementing the software to control the energy. They're implementing power capping. They're implementing a converged infrastructure, so they have the ability to share resources among applications. HP IT has really driven its costs down through this.

We have another example with the Sisters of Mercy Health System, which did a very similar convergence of infrastructure on a smaller scale. In their data center, they freed up 75 percent of their floor space by doing server consolidation, storage consolidation, and energy management. They now have 25 percent of the footprint they used to have from a server-storage physical standpoint, but they are also only using about 33 percent of the energy they used to use within their environment.

So, they're getting a huge floor saving capacity back, but are also getting a power saving of 66 percent, versus where they were two years ago. By doing this converged infrastructure, by doing consolidation, and then managing and capping the IT systems, they’ve got a much more predictable budget to run their IT infrastructure.

Gardner: I suppose getting started is a tough question, because you could get started so many different ways and there is such wide variability in how data centers are constructed and how old they are and what the characteristics are. I almost know the answer to this question so many different ways -- but how do you get started, depending on what your situation is at this particular time?

Efficiency analysis

Bennett: For many customers, if they're struggling to understand where energy is being consumed and how it's being used, we will probably recommend starting with an energy efficiency analysis. That will not only do a thorough evaluation of both the facility and the infrastructure, but provide insight into the kind of savings you can expect from the different types of investment opportunities to reduce energy costs. That’s the general starting point, if you are trying to understand just what’s going on with the energy.

Once you understand what you are doing with energy, then you can dive into looking at a Smart Grid for Data Center solution itself as something to take you even further. Doug, how do you get started with that?

Oathout: Another way to get started, John, is by deploying new IT infrastructure. Our ProLiant servers, our Integrity servers, and our storage products have instrumentation and monitoring built in. Deploying those new server or storage environments allows you to get a picture of how much energy they're using, so you can have more predictable power usage going forward.

Customers are using virtualization. They're trying to get utilization of the server and storage environment up to a very efficient level. Having power management and energy monitoring built into those systems lets them start laying out how much infrastructure their data center can support.

One of the keys for us is deploying the new pieces of HP IT equipment, which are fully instrumented and energy efficient. You'll have a snapshot of actual power consumption, and, as you upgrade your IT equipment over time, you can build a full snapshot of your infrastructure. You can actually increase the capacity of the data center just by deploying new products that are much more efficient than the ones from three or four years ago.

Bennett: That's a good example of the integrated roadmap idea behind DCT. I characterize it as modernization, consolidation, and virtualization. Really, it's stepping up the capabilities of the infrastructure to reduce costs, improve efficiency, improve quality of service, and cut energy costs.

As Doug highlighted, after that phase of work is done, you've laid the groundwork to take advantage of it from an instrumentation and management point of view. You can augment that with further instrumentation of the racks and the data center resources in order to implement a complete Smart Grid for Data Center solution. It's a stepping stone. It leverages work done for other purposes to take you further into a good, efficient operation.

Gardner: Based on some of the capacity improvements and savings, it certainly sounds like a no-brainer, but I have to imagine, John, that in the future it's going to become less of an option and more of a mandate.

An 11 percent annual growth in energy cost is not a sustainable trajectory. We have to expect that energy costs will be volatile but, over time, more expensive, whether in real terms or when you factor in the added costs of taxation, carbon taxes and caps, and what have you. So, this is really something that has to be done. You might as well start sooner rather than later.
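To put that 11 percent figure in perspective, compounding it for even a few years transforms the bill; the starting cost below is purely illustrative.

    # Compounding an 11% annual rise in energy cost (starting figure invented).
    cost = 1_000_000.0                      # hypothetical energy bill today
    for year in range(1, 6):
        cost *= 1.11
        print(f"year {year}: ${cost:,.0f}")
    # After five years the bill is roughly 1.7x the original -- before any
    # carbon taxes or caps are layered on top.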

Bennett: Yes. And regulations and governance from outside agencies are already an issue. There are places in the world, such as the UK and California, where the power you have coming into your facility is all the power you are ever going to get. So, you really have to manage inside that type of regulatory constraint.

We have voluntary programs. Perhaps the most visible one is the European Data Center Code of Conduct, and clearly we expect to see more regulation of IT and facilities in general moving forward. Carbon reduction mandates are going to be external drivers behind doing this. Of course, if you get ahead of the game and do this for business purposes, you'll be well placed to manage that regulation when it comes.

Gardner: We've been talking about how to gain control over energy use, and perhaps misuse, in enterprise data centers. We've discussed how a comprehensive Smart Grid for Data Center approach, using the available data to build capacity management capabilities, makes a tremendous amount of sense.

I want to thank our guests for this discussion. We've been joined by Doug Oathout, Vice President of Green IT, Enterprise Servers and Storage, at HP. Thank you, Doug.

Oathout: Thank you, Dana.

Gardner: We've also been joined by John Bennett, Worldwide Director of Data Center Transformation Solutions at HP. Thanks again, John.

Bennett: My pleasure, Dana. Thank you.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect Podcast. Thanks very much for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on implementing energy efficiency using smart grids in enterprise data centers to slash costs and gain added capacity. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


Monday, February 08, 2010

Converged Infrastructure Approach Paves Way for Improved Data Center Productivity

Transcript of a sponsored BriefingsDirect podcast on achieving cost control and increased utilization through coordinated design, open standards and application-specific infrastructure.

For more information on virtualization and how it provides a foundation for Private Cloud, plan to attend the HP Cloud Virtual Conference taking place in March. To register for this event, go to:
Asia, Pacific, Japan - March 2
Europe Middle East and Africa - March 3
Americas - March 4

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on improving data center productivity through a natural progression toward converged infrastructure. Many enterprise data centers have embraced a shared-service management model to some degree, but converged infrastructure applies the shared-service model more broadly to leverage modular system design and open standards, as well as to advance proven architectural frameworks.

The result is a realignment of traditional technology silos into adaptive pools that can be shared by any application, as well as optimized and managed as ongoing services. Under this model, resources are provisioned dynamically, efficiently, and automatically, which yields more productive business results. It also helps rebalance IT spending away from operations and toward investment, innovation, and business improvement.

We're here to explore the benefits of a converged infrastructure approach and to better understand the challenges of attaining a transformed data center environment. We'll see how converged infrastructure provides a stepping stone to private cloud initiatives. But, as with any convergence, there are a lot of moving parts, including people, skills, processes, services, outsourcing options, and partner ecosystems.

We're here with two executives from Hewlett-Packard (HP) to delve deeply into converged infrastructure and to learn more about how to get started and deal with some of the complexity, as well as to know what to expect as payoff. Please join me in welcoming our guests today. We're here with Doug Oathout, Vice President, Converged Infrastructure at HP Storage, Servers, and Networking. Welcome to the show, Doug.

Doug Oathout: Thank you, Dana. Happy to be here.

Gardner: We're also here with John Bennett, Worldwide Director, Data Center Transformation Solutions at HP. Welcome back to BriefingsDirect, John.

John Bennett: Thank you very much, Dana. Glad to be with you again.

Gardner: Let me start with you, John. We're talking about some pretty big subjects, and there's a lot to chew on here. Data-center transformation (DCT), I suppose, is the most general topic, so let's approach it first and then delve more deeply. What do we mean nowadays by DCT? How does HP define it, and how does it relate to the business issues that IT folks are grappling with?

Not one-size-fits-all

Bennett: DCT helps customers implement a data center and infrastructure strategy that's aligned to their goals and objectives. The key here is that it's customer-driven, and it has to be built around the plans and directions of the targeted organization. This is clearly not a one-size-fits-all type of environment.

For many organizations, those strategies for infrastructure can include traditional shared infrastructure solutions or servers using virtualization and automation with shared storage environments. Increasingly, we've seen a natural evolution into a tighter integration of the capabilities and assets of the data center in the fabric infrastructure.

HP's Converged Infrastructure represents a pretty significant step forward in terms of benefits and capabilities for customers looking at having infrastructure strategy aligned to their future needs. The neat thing is that converged infrastructure can be the foundation for private cloud architectures.

Whichever combination of these fits a particular customer, there is the practical challenge of getting from where you are today to having it implemented. That's what DCT is really about. It helps implement these strategies through an integrated roadmap of data center projects: consolidation, energy efficiency initiatives, and technology initiatives like virtualization and automation.

Each of these has its own short-term benefits and returns, but collectively the results compound over time, delivering the kinds of benefits we traditionally talk about with DCT. This is all in response to what's going on in customer environments.

I often think of many CIOs as being at the heart of a vise. On one side, they have the business pressures: they need to support growth, do a faster job of integrating acquisitions, spend more on business projects and innovation, exploit technology for business advantage, and reduce costs.

On the other side of the vise are the constraints in the environment that get in the way of successfully addressing those business needs: legacy infrastructure and applications, antiquated methods of managing the infrastructure that make it difficult to respond to change, or staff whose skills don't match modern technologies and environments.

Facilities and data centers that were designed and built even five years ago might not have the power and capacity to support current infrastructure environments. Then, of course, there's energy cost.

So, the CIO is at the heart of this vise. I like to think of DCT and converged infrastructure as kind of the yellow brick road and the Emerald City, where converged infrastructure is Emerald City. It's where you want to get to. DCT is the yellow brick road. It's how you get there, and they complement each other quite nicely.

Gardner: Doug, help me understand why this is important now. The way John is describing it, it seems that the same old approach just won’t hold up, that the trajectory of data centers is unsustainable, whether it's through cost, energy, or capacity issues.

It's not clear to me yet why this converged infrastructure is the right thing to do in totality. Are we talking about a rip-and-replace or are we talking about a gradual direction? Help me understand why, if you are going to move in that direction, you should start now.

Cutting innovation

Oathout: There is a major economic situation going on right now, Dana. As you said earlier, about two-thirds, if not 70 percent, of the operations budget is spent on maintaining the IT and the IT workload within the data center.

When you have a recession, like the one we just experienced, the 30 percent spent on innovation or new workload placement gets cut immediately to help manage the budget. As a result, in the last 18 months, very little innovation and few new projects were taken on by IT to support new business growth.

Converged infrastructure is important now because we have customers who are starting to spend again and who are starting to see the light at the end of the tunnel. They want their IT environment to be more flexible in the future. So, they're looking at their server and storage upgrades, and how they can implement converged infrastructure, so that the new infrastructure is more flexible and can adapt more to the requirements of the business.

Let me give you an example. A server consolidation using virtualization and new server equipment will generally double or triple the capacity within your data center for the same footprint, just by getting server utilization up and taking advantage of better performance within the servers and better capabilities within virtual environments.
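The arithmetic behind that doubling or tripling is easy to reproduce. In the sketch below, every input -- utilization, speedup, target ceiling -- is an assumption chosen for illustration, not a measured HP result.

    # Illustrative consolidation estimate; all inputs are assumptions.
    old_servers = 100
    old_utilization = 0.15        # assumed pre-virtualization CPU utilization
    speedup = 2.0                 # assume each new server is ~2x faster
    target_utilization = 0.60     # assumed safe ceiling for virtualized hosts

    useful_work = old_servers * old_utilization          # in old-server units
    new_hosts = useful_work / (speedup * target_utilization)
    print(f"{old_servers} old servers -> ~{new_hosts:.0f} virtualized hosts")
    # 100 * 0.15 / (2.0 * 0.60) = 12.5, roughly an 8:1 reduction in boxes --
    # which is how the same footprint yields a multiple of the capacity.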

The same thing holds true for storage. Disk drives become twice as dense over a two- to three-year period, and drive performance gets better. So, in the same data center footprint, you can fit twice as much storage.

As you go through your technology refresh now, coming out of the recession, you can start implementing better and faster IT equipment. You can also use better and more efficient processes: virtualization, automation, and management. When you put those pools of resources in place, you put them in a virtual environment, so they can be shared among applications or transferred between applications when needed.

You're now creating pools of resources, versus the dedicated, siloed resources you had prior to the recession, which couldn't be reused across applications and therefore couldn't support business growth.

The opportunity now is to break down those silos, give our customers the ability to share resources within the footprint they have today, and actually become more efficient, so that when business needs change, they can adapt to them.

Gardner: So, clearly it's about efficiency, a better balance between supply and demand of resources, and then applying those resources dynamically through a shared-service model. That all sounds well and good. What are the hurdles? What's preventing people from getting to this vision?

Resilient and optimized


Oathout: The big hurdle to get over is the application managers themselves. A line of business comes to the applications team and says it needs an SAP deployment or an Oracle deployment, and tells them what hardware to put it on. In a converged infrastructure environment, you really shouldn't care about the infrastructure you're putting it on. What you should care about is that it's resilient, optimized, and modular, so it can grow and shrink with the application's demand.

What's really required is a process change among the IT application managers, the test and development people, and the team that actually runs the infrastructure. They need to talk more about standardization and about how their IT comes together.

That's where the Data Center Transformation Workshop that John's team runs helps. It gives you an architecture for future deployments, so that you have a converged infrastructure: pools of resources onto which you can place new applications or move revamped older ones, so everything becomes more flexible.

You have to break down that silo, that fence, between what the lines of business tell the application deployers and the people who run the infrastructure. Customers do see that as a deployment barrier, but they're working through it, because there are significant benefits on the other side: you increase agility, lower cost, and free up more money and people to do the innovation that supports future business workloads.

Gardner: John, it sounds as if we're asking people, in a sense, to rethink things a little. Typically, as Doug pointed out, you start with the application set, deploy it, and then figure out the best way to operate it over time. We're trying to flip that on its head: thinking first about what the operational outcome should be, and then deploying applications in the right fashion. Is that fair?

Bennett: Well, I would suggest that good organizations are always rethinking IT. What are the organization's strategy, goals, and objectives? What is it going to take to realize those objectives? What capabilities do we need from IT in order to make those real? And then, how do we make them happen?

So whether it's 2010, 2003, 1992, or minicomputers back in 1975, rethinking IT is a very healthy practice, but it always has to be aligned to the question of what the organization and business need.

We also have the question of how it can be exploited for benefit. This is where the partnership between the technology team and the business team comes into play. The technology team will have more insights into how it can be exploited, and the key thing for the business is to make sure they specify their needs and not specify the answer.

Doug characterized it very well when he said the SAP team wants a new deployment and tells you what to put it on. The moment you do that, you lose the advantages of a converged infrastructure.

Gardner: As you point out, rethinking IT has been happening for quite some time. We really don't have the luxury of standing still for very long in this industry. On the other side of the equation, you need to have a business or financial rationale to create that change in addition to having the vision of where you would like to go.

So, is there a business case, a rationale, an economic formula of some sort that HP offers for going to the people who control the purse strings to move in this direction of a converged infrastructure?

Business plan in play

Bennett: There clearly is a business plan in play here. A lot of the benefits are cost savings: the consolidation, modernization, and virtualization that Doug spoke to, and the savings from energy-related projects and investments such as Data Center Smart Grid. All are easily quantifiable.

Other benefits are financial, too. There's an economic return to the organization from being able to roll out a new business service more quickly. There's an economic return from being able to provision more resources when demand requires them, so that demand doesn't disappear. And there's a competitive business benefit, which is financial in nature, in being able to respond to competitive threats more quickly.

The business case for transformation and the business case for a converged infrastructure should both be constructed; that's the best way to get buy-in from senior executives.

Technologists playing with toys is not a compelling argument for investment by a business. Technologists making significant investments to make sure that IT is aligned to the needs of the business and having the business case for it is a great way to get approval to go ahead.

Gardner: Doug, when we think about a shared-services model and a natural progression toward a converged infrastructure that borrows from that shared-services mentality, how do we move into this as a manageable progression? How do we avoid a rip and replace, a massive disruption, or the throwing of a switch? How is this applied as a managed process, a progression, an evolution?

Oathout: The way you go back and look at your infrastructure is very important, Dana. Let's take the storage environment for a moment. You have a number of storage environments. They could be direct-attached storage (DAS), network-attached storage (NAS), or storage area networks (SANs).

All of these environments have application performance service-level agreements (SLAs) associated with them. And for each type of storage environment, there are technologies that virtualize it today. These allow you to take large blocks of storage and put them behind a virtual SAN, or behind a virtualized environment, which lets you share those resources among multiple server environments.

For example, HP has a SAN Virtualization Services Platform, with which you can take heterogeneous storage, put it behind virtual SAN technology, and actually get 30 percent of the capacity back, because all the over-provisioned disk now becomes available to the applications sitting on the other side of the virtual SAN. We have a very similar technology from our LeftHand team, the P4000, which does the same thing for direct-attached storage.

Using technologies like those to reclaim the excess capacity you have today through storage virtualization is very easy to do. It also has a very quick payback, because you're getting back the roughly 30 percent of your disk that was over-provisioned and can now be used. Many of our customers don't have to buy disk for two to three months, and when they do buy disk, they can put it behind the SAN environment, where multiple applications can use and share it.
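The payback arithmetic is simple. Here is a sketch with invented volumes; the 30 percent figure echoes the discussion above.

    # Invented volumes: allocated vs. actually written capacity, in TB.
    volumes = {
        "erp-data":   (20, 14),
        "mail-store": (10,  7),
        "file-share": (30, 21),
    }

    allocated = sum(a for a, _ in volumes.values())      # 60 TB
    used = sum(u for _, u in volumes.values())           # 42 TB
    reclaimable = allocated - used                       # 18 TB back in the pool

    print(f"reclaimable: {reclaimable} TB "
          f"({reclaimable / allocated:.0%} of allocated)")   # 30%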


For more information on HP's Virtual Services, please go to: www.hp.com/go/virtualization and www.hp.com/go/services


Server consolidation


On the server side, server consolidation is very prevalent today, because the new servers are faster than the old ones. They have more memory capacity and virtual I/O built in. So, it's very simple to consolidate servers, and when you consolidate onto a virtual I/O or FlexFabric environment, you have the capability to dial the I/O capacity of the server up and down to meet the demand of the virtual machines running on it.
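As a toy model of dialing I/O up and down, consider carving a server's 10 Gb link into per-VM shares in priority order. The VM names, demands, and function are invented for illustration; this is not the FlexFabric interface.

    # Toy model: grant bandwidth on a 10 Gb/s link in priority order.
    LINK_GBPS = 10.0
    demands = {"db-vm": 5.0, "web-vm": 2.0, "batch-vm": 6.0}   # requested Gb/s

    def allocate(demands, capacity, priority):
        """Serve high-priority VMs first; the last in line gets what's left."""
        grants, remaining = {}, capacity
        for vm in priority:
            grants[vm] = min(demands[vm], remaining)
            remaining -= grants[vm]
        return grants

    print(allocate(demands, LINK_GBPS, ["db-vm", "web-vm", "batch-vm"]))
    # db-vm gets 5.0, web-vm 2.0, and batch-vm the remaining 3.0 Gb/s;
    # when demand changes, the same call re-dials the shares.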

There is the server consolidation with virtualization that everybody knows, but then there is also the big benefit of storage virtualization and the fabric virtualization that can go on. Those are the three pieces. Once you get them in place, you can then start doing the automation, management, and the provisioning of workloads that John talked about much faster.

It's basically virtualizing that whole environment with resiliency and everything built into our ProLiant boxes and high availability business critical system boxes. You get all the capabilities and all the resiliency you need in them, and then you put virtualization on top of the storage networking and servers, and you really get the pool of resources that you can dynamically allocate.

Those three projects are the ones that give you that base from which you can then springboard your projects or your new applications.

Gardner: We've heard so much in the last year or two about cloud computing and private clouds, and I think there's some confusion about private cloud. What we're talking about here is converged infrastructure: the virtualization of the major aspects of your infrastructure, and then getting them to work in harmony as a fabric. Are we talking about the same thing? Are cloud computing and converged infrastructure essentially the same? What is the relationship?

Oathout: A cloud-computing environment is really an application-rich environment that allows you to bring more users on quickly and to expand or shrink your capabilities as you need to.

Converged infrastructure can be for a public cloud, a private cloud, a web workload, a high-performance computing (HPC) workload, or an SAP workload. It doesn't really matter. A converged infrastructure is the optimal deployment of IT to support any kind of application, because it's modular in nature.

It has the flexibility to have more or less storage, more or less memory, and more or fewer CPUs, but it's all modular, so you can put the pieces together as you need them. It's a base that supports either a cloud environment or a traditional IT environment. It really doesn't matter; it's designed to support both.

A private cloud is the IT department saying, "I'm now going to create a service catalog for my lines of business to draw from." You're getting software as a service (SaaS) sitting on top of either a converged infrastructure or a legacy infrastructure; a converged infrastructure is a lot easier to put SaaS on. You make that service catalog available to the lines of business, so they can turn on applications as they need them, very quickly.
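A service catalog can be pictured as nothing more than named service sizes that translate into resource reservations. The entries and function below are hypothetical, a sketch of the idea rather than any HP product's catalog format.

    # Hypothetical catalog: lines of business pick a service, not hardware.
    CATALOG = {
        "small-web": {"vcpus": 2,  "ram_gb": 4,  "disk_gb": 50},
        "erp-node":  {"vcpus": 8,  "ram_gb": 32, "disk_gb": 500},
    }

    def request_service(name, count):
        """Turn a catalog request into an aggregate resource reservation."""
        spec = CATALOG[name]
        return {resource: amount * count for resource, amount in spec.items()}

    # The SAP team asks for capacity, not for specific boxes:
    print(request_service("erp-node", count=3))
    # -> {'vcpus': 24, 'ram_gb': 96, 'disk_gb': 1500}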

Optimizing over time

Then, you can put more users on an enterprise resource planning (ERP) application, an online application, or a Web 2.0 application. IT is there as a support service now, setting that up, taking it down, and optimizing it over time, depending on the business needs.

So, private cloud is that SaaS layer sitting on a converged infrastructure, a legacy infrastructure, or the uniquely designed infrastructures you get from some public cloud providers. Converged infrastructure is the optimal way to develop and deploy it in a standard data-center environment, in support of a private cloud.

Gardner: John Bennett, when we think about that earlier imperative of flipping the balance of spending from operations to innovation, and about moving toward a private cloud where we charge people based on their usage, how do those factor together? I'm trying to understand how we can reconcile moving toward cloud, fabric, and "blank as a service" while, at the same time, reducing costs, so that we get that business benefit and the innovation engine roaring.

Bennett: That's a very interesting question. For an organization to make good business decisions, they need to have a very good understanding, not only of the benefits, which I talked about earlier, but of the costs. In this environment you get line of sight into the cost infrastructure, so you know what it costs you to provide services.

The businesses, in turn, know what it costs to take an offering to market, a cost based on reality and not on spread-out-mayonnaise models of financing. It lets them really understand the business and whether or not it's an investment they should make. There are clearly benefits on that side, if you can go that far. The benefit of moving to that services orientation is that it gives you clear insight into the cost structures.
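The contrast between usage-based costing and the "mayonnaise" model is easy to see in miniature; every number below is invented.

    # Showback: usage-based cost vs. spreading the bill evenly (invented data).
    monthly_cost = 90_000.0
    usage_hours = {"retail": 600, "finance": 250, "marketing": 50}

    even_split = monthly_cost / len(usage_hours)          # $30,000 apiece
    total = sum(usage_hours.values())
    by_usage = {lob: monthly_cost * h / total for lob, h in usage_hours.items()}

    print(f"even split: ${even_split:,.0f} per line of business")
    print({lob: f"${c:,.0f}" for lob, c in by_usage.items()})
    # retail $60,000, finance $25,000, marketing $5,000 -- costs now track
    # consumption, so each business can judge its own investment decisions.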

Gardner: I'm always an advocate of showing rather than telling. I hope we have some examples to illustrate how some of your clients have undertaken a converged infrastructure initiative, and what some of the outcomes were. Does either of you have any examples today?

Oathout: There's a retailer we worked with called Stein Mart. They had an inflexible infrastructure running nearly 300 stores in the Americas, and they were struggling to bring new applications online quickly enough to support the demands of the store environment.

They bought into the converged infrastructure story. They bought into our BladeSystem Matrix product, which is the combination of storage, server, flexible network, software, and services.

We enabled them to run this BladeSystem Matrix environment. It allowed them to spin up applications in hours instead of days, and it allowed a smaller number of people to manage the environment, so that the rest of their IT team could work on improving service levels for the stores and getting new applications into the new environment.

Increased productivity

Stein Mart saw a significant cost reduction because of the floor space they freed up in their data center. They saw a significant increase in staff productivity. They saw a 2x improvement in response time for calls from the stores, and a significant improvement in time to market for new applications: instead of days, they were taking hours to set them up.

The second customer is the Dallas Cowboys. They built a new football stadium in the Dallas area, a $1.4 billion investment, and at the bottom of the stadium is their data center. They run 30 different businesses out of the data center in the Dallas Cowboys stadium.

They built it on a virtual environment. They have BladeSystems, with FlexFabric built into the environment. They went from over 500 servers down to 16 blades running virtual machines for the point-of-sale environment within the stadium. That drove a smaller footprint, but also more dynamic server and storage environments, so they can bring on new applications for the 30 businesses very quickly.

They changed their infrastructure to support their environment. That's an evolution, versus Stein Mart, which did a rip and replace to get better productivity to support its business.

Gardner: Any other examples and perhaps ways to demonstrate what HP can bring to this very complex equation?

Oathout: One other example is the airport in Dubai, one of the fastest growing airports in the world. They wanted to set up a shared-service environment for the retailers and other businesses around the airport. So, they set up a BladeSystem Matrix environment to run their video surveillance, their infrastructure, their baggage handling, and so on.

They set up another environment that allowed their retailers, passport personnel, and other businesses on site to use the shared-service environment to deliver full service to their client base inside the airport.

So, when a new business or a new government agency had to come into the airport, it didn't have to worry about bringing infrastructure along. The infrastructure was there to set up its operating environment on, so it could be running its business relatively quickly.

Very productive

All three examples -- Stein Mart, the Cowboys, and the Dubai airport -- are very productive in how they bring applications online and very responsive to the lines of business they support. That's what a converged infrastructure really delivers, besides the lower economic cost that John and I have talked about: the efficiency to bring new opportunities to the lines of business, accelerate business growth, and increase customer satisfaction.

Gardner: I recall that HP announced converged infrastructure in November 2009, and it pulls together many aspects of what HP had been doing for some time. It's a complex undertaking involving people, skills, different product sets, professional services, capabilities, and so forth. What makes HP different in how it accomplishes this notion of DCT, John?

Bennett: What makes us different is that, first of all, we don’t believe one size fits all. We believe that we need to do a good job working with our customers in understanding their strategies and goals and developing an infrastructure strategy that is aligned to that.

We also don't believe that these infrastructure strategies for the future should have at their core monolithic computing solutions from the past. And we take a very flexible approach in our projects, in that we try to wrap the services we have available around the capabilities of the customer, rather than making them pay to have HP do everything.

Customers who have a great deal of staff, skills, and capabilities with tools -- like the Converged Infrastructure Maturity Model and the assessment that goes with that -- will be quite capable of undertaking these efforts on their own.

We try to offer a great deal of flexibility in how we work with customers, and also in how these projects are implemented. Customers can run them in a traditional customer-owned data center, in an HP-hosted environment, or even outsource them to HP. So, there's incredible flexibility based around the customers' needs and interests.

Gardner: You mentioned the maturity model. Is that a potential stepping stone for getting started on some of these initiatives? Where should folks who are contemplating their next architectural moves, in terms of transforming their data centers, go to start? How do they get more information?

Oathout: There are two ways to get started. You can contact one of HP's business partners, who are enabled to deliver our Converged Infrastructure Maturity Model. Or you can go to HP.com/go/ci, the landing page for converged infrastructure. There, a customer can click on the Maturity Model, find out what it can do for them, and then find an HP practitioner who can walk them through it and show them the roadmap -- the yellow brick road John talked about -- to converged infrastructure.

Bennett: If the customer is interested in understanding everything HP might be able to do with them, they can engage with HP in a Data Center Transformation Experience Workshop. Doug mentioned the CI landing page; they can also go to www.hp.com/go/dct to find out more. That will help them take a broader look at the IT infrastructure, facilities, and environment from a transformational perspective.

Gardner: Focusing on the future as we look to close, this strikes me as not just a flash in the pan or a one- or two-year trend, but a long-term trajectory. This seems to be the inevitable way data centers will develop: converged, fabric-based, service-oriented, with the efficiency of dynamic provisioning. Any thoughts about where this direction is going to take us? Do you agree that it's essentially inevitable?

Economies of scale

Oathout: It is inevitable, just because of the economies of scale, Dana. Truly, when you start bringing storage, server, and networking platforms together through a flexible fabric, the economies of scale of shared resources and open systems are going to drive down the cost of acquiring IT. Then, with the software and services capabilities that companies bring to market, they're going to bring the efficiencies along with them.

So, it is inevitable, starting with the simplest of workloads and moving to the hardest, that you're going to have a converged infrastructure. You're going to have applications as a service, whether internal or from an external cloud provider, because the economies of scale are there, and because deployment is so simple once it's set up that the operating efficiencies are there in addition to the purchase economies.

Gardner: Any last thoughts, John, about the future direction and how long a trend we're talking about here?

Bennett: How long a term is always difficult to say. One of the exciting things about IT in general is that we see this wonderful yin and yang, this give and take between technology advancements and customer expectations and uses. Customers challenge us to step forward to meet tomorrow's problems. Technology evolves, and we challenge customers to take advantage of it for a business benefit.

That's going to continue. As Doug highlighted, the economic value that comes from this convergence of infrastructure, and the economies of scale, are very compelling, but I'm not going to predict how long it's going to last.

Gardner: Well, we'll certainly find out, won't we? It's been very good speaking with you. We've been talking about improved data center productivity through a progression to converged infrastructure.

We've been joined by two executives from HP. Doug Oathout is the Vice President of Converged Infrastructure at HP Storage, Servers, and Networking. Thanks so much, Doug.

Oathout: Thank you, Dana.

Gardner: Also John Bennett, Worldwide Director, Data Center Transformation Solutions at HP. Thanks again, John.

Bennett: And thank you, Dana, and thank you, Doug.

Gardner: And, thank you all for listening. This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

For more information on virtualization and how it provides a foundation for Private Cloud, plan to attend the HP Cloud Virtual Conference taking place in March. To register for this event, go to:
Asia, Pacific, Japan - March 2
Europe Middle East and Africa - March 3
Americas - March 4

Transcript of a sponsored BriefingsDirect podcast on achieving cost control and increased utilization through coordinated design, open standards and application-specific infrastructure. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.
