
Thursday, June 10, 2010

Shoemaker on How HP CSA Aids Total Visibility into Services Management Lifecycle for Cloud Computing

Transcript of a BriefingsDirect podcast on overcoming higher levels of complexity in cloud computing through improved management and automation.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, we present a sponsored podcast discussion on gaining total visibility into the IT services management lifecycle.

As cloud computing in its many forms gains traction, higher levels of management complexity are inevitable for large enterprises, managed service providers (MSPs), and small-to-medium sized businesses (SMBs). Gaining and keeping control becomes even more critical for all these organizations, as applications are virtualized and as services and data sourcing options proliferate, both inside and outside of enterprise boundaries.

More than just retaining visibility, however, IT departments and business leaders need the means to fine-tune and govern services use, business processes, and the participants accessing them across the entire services lifecycle. The problem is how to move beyond traditional manual management methods, while being inclusive of legacy systems to automate, standardize, and control the way services are used.

We're here with an executive from HP to examine an expanding set of Cloud Service Automation (CSA) products, services, and methods to help enterprises exploit cloud and services values, while reducing risks and working toward total management of all systems and services.

Please join me now in welcoming Mark Shoemaker, Executive Program Manager, BTO Software for Cloud at HP. Welcome to BriefingsDirect, Mark.

Mark Shoemaker: Hi, Dana. How are you today? I'm really excited about being able to join you.

Gardner: Mark, tell me how we got here. How did complexity become something now spanning servers, virtualization, cloud, and sourcing options? It seems like we’ve been on a long journey and we haven’t necessarily kept up.

Shoemaker: It’s simple. Up until a few years ago, everything in the data center and infrastructure had a physical home, for the most part. Then, virtualization came along. While we still have all the physical elements, we now have virtual and cloud strata that require the same level of diligence in management and monitoring, but they move around.

Where we're used to having things connected to physical switches, servers, and storage, those things are actually virtualized and moved into the cloud or virtualization layer, which makes the services more critical to manage and monitor.

Gardner: How are clouds different? Do you need to manage them in an entirely different way, or is there a way to do both -- manage both the cloud and your legacy systems?

All the physical things

Shoemaker: Enterprises have to do both. Cloud doesn’t get rid of all the physical things that still sit in data centers and are plugged in and run. It actually runs on top of that. It actually adds a layer, and companies want to be able to manage the public and private side of that, as well as the physical and virtual. It just improves productivity and gets better utilization out of the whole infrastructure footprint.

Gardner: And what is it about moving toward automation, perhaps using standards increasingly, that becomes more critical than ever?

Shoemaker: Well, it’s funny. A lot of IT people will tell you we’ve always been talking about standards. It’s always been about standards, but they've not always had the choice.

A lot of times, the business's definition of what it took to be successful, and the business applications needed to run that, dictated a lot of the infrastructure that sits in our data centers today. With cloud computing -- and the automation and virtualization that go along with it -- standardization is key.

You can’t automate a repetitive task if it’s changing all the time. The good thing about cloud and virtualization is that they're absolutely driving standards, and IT is going to benefit from that. The challenge is that everything is now more fluid, and we’ve got to do a better job than ever of managing, monitoring, and keeping up.

Gardner: What is it about the human management, the sort of manual approach, that doesn’t scale in this regard?

Shoemaker: IT has been under the gun for a few years now. I don’t know many IT shops that have added people and resources to keep up with the amount of technology they have deployed over the last few years. Now, we're making that more complex.

They aren't going to get more heads. There has to be a system to manage it. Plus, even the best people make mistakes. In the middle of the night, when you're tired and have been up a long time trying to get something done, you're always at risk of slipping on the keyboard, downloading the wrong file, or missing a message you need to see.

Any time we can take the mundane and the routine off people's plates and let our high-value people focus on the business-critical functions, that’s going to be a good thing. The businesses are going to be more productive, the people are going to be happier, and the services are going to run better.

Gardner: I suppose, too, that organizations have had the opportunity in the past to control what goes on inside their own walls, but as you start acquiring services, you don’t really have control over what’s going on behind those services. So, we need management that rises to a higher level of abstraction.

Shoemaker: That’s a great point and that’s one of the things we’ve looked at as well. Certainly, there is no silver bullet for either one of these areas. We're looking at a more holistic and integrated approach in the way we manage. A lot of the things we're bringing to bear -- CSA, for example -- are built on years of expertise around managing infrastructures, because it’s the same task and functions.

Ensuring the service level

Then, we’ve expanded those to take into account the public-cloud need of being a consumer of the service while still being concerned with service levels, and we've been able to point those same tools back into a public cloud to see what’s going on and make sure you're getting what you're paying for and what the business expects.

Gardner: You have a pretty good understanding of the problem set. What about the solution, from a high level? How do you start managing to gain full visibility, and also gain the control to turn those dials and govern throughout this ecosystem?

Shoemaker: You’ve hit on my two favorite words. When we talk about management, it starts with visibility and control. You have to be able to see everything. Whether it’s physical or virtual or in a cloud, you have to be able to see it and, at some point, you have to be able to control its behavior to really benefit.

Once you marry that with standards and automation, you start reaping the benefits of what cloud and virtualization promise us. To get to the new levels of management, we’ve got to do a better job.

Gardner: We’ve looked at the scale of the problem. Let's look at the scale of the solution. This isn’t something that you can buy out of a box. Tell me what HP brings in terms of breadth and scope that maps directly to the scope and breadth of the solution itself.

Shoemaker: Again, there is no silver bullet here. There is no one application. It’s going to take you all the way from the planning phase, to development, to testing and load testing, to infrastructure as a service (IaaS). You start at the hardware and build up the management pieces and the platform that provide the underlying application that you develop on, and then you run and assure that service for whoever your consumer is.

Nobody does that. There’s not one product and there’s not going to be one product for any period of time. We'd love to get there and certainly we're going to do everything we can to make it easier.

The great thing about what HP brings to the table is that in every one of those areas I mentioned, there is an industry-leading solution that we're integrating to give you control across the entire breadth of management you need to be successful in today’s new infrastructure, which is cloud and virtualization on top of physical.

Gardner: Back on May 11, HP had a fairly large set of news releases -- the delivery of some new products, as well as some vision -- including the CSA products and services. Perhaps you could give us a little bit of an idea of the philosophy behind CSA and how it fits into that larger set of announcements.

Listened to customers

Shoemaker: CSA is the product of several years of actually delivering cloud. Some of the largest cloud installations out there run on HP software right now. We listened to what our customers told us, took a hard look at the reference architecture we created over those years -- encompassing all the different elements you could bring to bear in a cloud -- and started looking at how to bring that to market and to a point where the customer can gain benefit from it more quickly.

We want to be able to come in, understand the need, plug in the solution, and get the customer up and running and managing the cloud or virtualization inside that cloud as quickly as possible, so they can focus on the business value of the application.

The great thing is that we’ve got the experience. We’ve got the expertise. We’ve got the portfolio. And, we’ve got the ability to manage all kinds of clouds, whether, as I said, it’s IaaS or platform as a service (PaaS) that your software is developed on, or even a hybrid solution, where you're using a private cloud along with a public cloud that bursts up, if you don’t want to outlay capital to buy new hardware.

We have the ability, at this point, to tap into Amazon’s cloud and actually let you extend your data center to provide additional capacity and then pull it back in on a per-use basis, connected with the rest of your infrastructure that we manage today.
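For illustration, here is a minimal sketch of what that kind of per-use bursting into Amazon's cloud can look like in practice, using the AWS SDK for Python (boto3). The thresholds, AMI ID, and the get_local_utilization() helper are assumptions made up for this example; this is not HP CSA code.

import boto3

BURST_THRESHOLD = 0.85      # assumed policy: burst out above 85% local utilization
RELEASE_THRESHOLD = 0.60    # assumed policy: release cloud capacity below 60%
AMI_ID = "ami-0123456789abcdef0"   # placeholder image prepared for the workload

ec2 = boto3.client("ec2", region_name="us-east-1")

def get_local_utilization() -> float:
    """Placeholder: report current utilization of the private environment (0.0-1.0)."""
    return 0.90  # stubbed value for the example

def burst_out(count: int = 2) -> list[str]:
    """Launch extra worker instances in the public cloud."""
    resp = ec2.run_instances(ImageId=AMI_ID, InstanceType="m5.large",
                             MinCount=count, MaxCount=count)
    return [i["InstanceId"] for i in resp["Instances"]]

def pull_back(instance_ids: list[str]) -> None:
    """Terminate burst instances once the peak has passed."""
    ec2.terminate_instances(InstanceIds=instance_ids)

def reconcile(burst_ids: list[str]) -> list[str]:
    """Run periodically: burst out or pull back based on local utilization."""
    utilization = get_local_utilization()
    if utilization > BURST_THRESHOLD and not burst_ids:
        return burst_out()
    if utilization < RELEASE_THRESHOLD and burst_ids:
        pull_back(burst_ids)
        return []
    return burst_ids

Run on a schedule, a reconcile loop like this covers both the "extend your data center" and the "pull it back in" steps described here, so the extra capacity is paid for only while the peak lasts.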

The other cloud that we are talking about is a combination of physical and virtual. Think about a solution that maybe didn’t fit well in a virtual or a cloud environment -- databases, for example, high IO databases. We would be able to bridge the physical and the virtual, because we manage, maintain, and build with the same tool sets on the physical and virtual side.

Gardner: I mentioned earlier that these are the same problems that large enterprises, managed service providers, and even SMBs looking toward outsourcing services are all facing. Is there low-hanging fruit here, a place to start across these different types of organizations, or maybe specific to them? Where do you start applying management in this total sense?

Shoemaker: Again, it goes back to visibility and control. A lot of customers that we talk to today are already engaged in a virtualization play and in bringing virtualization into their data centers and putting on top of the physical. They have a very large physical presence as well. Most of them are using a disparate set of tools to try to manage all those different silos of data.

The first thing is to gain that visibility and control by bringing in one solution that can help you manage all of your servers, network, and storage as one unit, whether physical or virtual. Then, move all of your day-to-day tasks via automation into that system to take the burden off of your IT ops teams.

Gardner: If both the service provider and the enterprise take this approach -- through standards, standard methodologies, implementations, or reference architectures -- does that give us some sort of whole greater than the sum of the parts when it comes to management?

Shoemaker: Yeah, I think so. Certainly, from a scale and utilization perspective, we definitely have more synergies if we're acting as one. So there's the ability to move things around, the ability to make sure all of the standards are being upheld and that things are being built to those standards, and the assurance of being able to see compliance issues before they become problems.

Gardner: Okay, so should enterprises be asking their managed service providers (MSPs) about the management they are using?

Shoemaker: Absolutely. If you are looking at an MSP, that MSP should be able to give you the same visibility and control that you have internally.

Gardner: From the May 11 news, give us a little recap of what you came to the market with in CSA. Is this products and services, or just products? How does the mix fit?

Best in class

Shoemaker: We announced CSA on May 11, and we're really excited about what it brings to our customers. What we are able to do is bring our best-in-class, industry-leading products together and build a solution that allows you to control, build, and manage a cloud.

We’ve taken the core elements. If you think about a cloud and all the different pieces, there is that engine in the middle, resource management, system management, and provisioning. All those things that make up the central pieces are what we're starting with in CSA.

Then, depending on what the customer needs, we bolt on everything around that. We can even use the customers’ investments in their own third-party applications, if necessary and if desired.

Gardner: Let’s look at some examples. I'm interested in understanding this concept of total management, the visibility to control across physical, virtual, and various cloud permutations. Give me an idea of how this physical to virtual scenario works and how different types of applications, maybe transactional and web services based ones, can benefit.

Shoemaker: As I mentioned before, one of the examples we use is a database -- a high-I/O database with a lot of reads and writes. That may not be best suited for a cloud or virtual environment, whereas the web-service front end and the middle layer may be fine.

Because we use the same management suite to manage the physical and the virtual, we're able to mesh those two systems to create a singular system that's managed as, and looks like, one system, but actually sits partly in the physical and partly in the virtual realm. The customer doesn’t have to bring all of the applications back onto physical gear -- and give up the efficiencies that cloud offers for the pieces that don't need it -- just to satisfy the database need.

Gardner: Is there a second use case or environment in which this total management benefit also fits in?

Shoemaker: Let’s say it's a customer -- an MSP customer in this case, or a customer that’s turning up new physical cloud elements. The VMware ESX server still has to be built on a physical server. With our solution, we are able to actually build that ESX server based on a pre-defined set of criteria, image that server onto the physical hardware, and bring it into the environment with the same suite of tools. So, it goes back to singular visibility and that singular control point to manage your cloud and your physical.
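To make that concrete, the sketch below shows one way an automated bare-metal ESX build from pre-defined criteria might be expressed: a standardized host profile is rendered into an ESX scripted-install (kickstart) file and handed to an imaging service. The profile fields and the deploy_via_pxe() hook are hypothetical placeholders; a real provisioning engine (HP Server Automation or similar) would supply the actual deployment mechanism.

from dataclasses import dataclass

@dataclass
class EsxHostProfile:
    hostname: str
    mgmt_ip: str
    netmask: str
    gateway: str
    vlan_id: int
    datastore_name: str

KICKSTART_TEMPLATE = """\
vmaccepteula
rootpw --iscrypted {root_hash}
install --firstdisk --overwritevmfs
network --bootproto=static --ip={mgmt_ip} --netmask={netmask} \
--gateway={gateway} --vlanid={vlan_id} --hostname={hostname}
reboot
"""

def render_kickstart(profile: EsxHostProfile, root_hash: str) -> str:
    """Turn the standardized profile into an ESX scripted-install file."""
    return KICKSTART_TEMPLATE.format(root_hash=root_hash, **vars(profile))

def deploy_via_pxe(mac_address: str, kickstart: str) -> None:
    """Placeholder: hand the rendered kickstart to the PXE/imaging service."""
    print(f"Would stage kickstart for {mac_address}:\n{kickstart}")

profile = EsxHostProfile(hostname="esx-prod-07", mgmt_ip="10.10.20.57",
                         netmask="255.255.255.0", gateway="10.10.20.1",
                         vlan_id=120, datastore_name="local-ssd")
deploy_via_pxe("aa:bb:cc:dd:ee:ff", render_kickstart(profile, root_hash="$6$example"))

Because the host is described once as a profile, the same definition can be reused every time a new physical cloud element is turned up, which is what keeps the build repeatable enough to automate.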

Gardner: And is that important perhaps for regulatory or compliance issues?

Shoemaker: Absolutely. Virtual and physical are subject to the same regulatory compliance requirements. Virtual and cloud probably have a little bit more difficult time, just based on the shared environment that naturally occurs. A lot of emphasis is being put on the security elements in cloud today. So, the compliance piece of what we offer actually reduces that risk for our customers.

Gardner: How about the deployment choices movement? As organizations experiment with cloud, perhaps they start moving development, and ultimately workloads, out to a third-party cloud. How do you manage that transition? I guess this is the hybrid cloud management problem.

As cloud takes off

Shoemaker: We talked a little bit earlier about some of the work we’ve done around some of the Cloud Assure products, where we can help expand cloud infrastructure into a public environment. We see that becoming more prevalent as cloud takes off.

Right now, a lot of people experiment with development and test, much like they did in the initial start-up period of virtualization. We see that relationship becoming more of a broker relationship, where you may pick where you put your application to run in that public cloud -- build it in-house in the private cloud and move it out into the public realm.

Think about this: A lot of countries have different regulatory controls, laws, and regulations around where data can be stored. If you're doing business in some European countries, they want you to have the actual service running inside the country, so the data stays in there.

In the past, they'd find an MSP in that country, build all the infrastructure, and manage everything that goes along with that, so that, for the country of record, the data stayed there. Now, we have the ability to actually create that image in the cloud, push that image to a cloud provider in that country, and have that application run wholly on premises inside the borders of the country, but still report back to the larger piece. This gets us around a hurdle that’s been a challenge with physical infrastructure.
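A minimal sketch of how such a data-residency rule might be enforced programmatically is shown below. The country-to-region table, region names, and push_image() hook are assumptions for the example, not any specific provider's API.

RESIDENCY_REGIONS = {           # country of record -> approved in-country region
    "DE": "eu-central-1",
    "FR": "eu-west-3",
    "IT": "eu-south-1",
}

def select_region(country_of_record: str) -> str:
    """Return an in-country region, or fail loudly rather than let data leave the country."""
    try:
        return RESIDENCY_REGIONS[country_of_record]
    except KeyError:
        raise ValueError(f"No approved in-country region for {country_of_record}") from None

def push_image(image_path: str, region: str) -> None:
    """Placeholder for the actual image transfer and registration with the provider."""
    print(f"Would upload {image_path} to region {region} and register it for launch")

region = select_region("IT")
push_image("builds/claims-app-2010-06.img", region)

The point of the lookup-or-fail design is that a missing entry blocks the deployment instead of silently defaulting to a region outside the country of record.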

Gardner: Let’s take a look to the future. As companies will be approaching cloud from a variety of perspectives, there are different vertical industries involved, and different geographies. It's kind of a mess, a stew of different approaches. What do you think is going to happen in the future? I think cost and competitive issues are going to drive companies to try to do this. They're going to hit this speed bump about management. Where do you see HP’s offerings going in order to help them address that?

Shoemaker: In a lot of cases, HP’s offerings are already there, and many aspects of the functionality exist today. Certainly, we're working hard to make sure we integrate the solutions, so they act together more cohesively and provide more value to our customers from day one.

As the landscape changes, we're looking at how to change our applications as well. We’ve got a very large footprint in the software-as-a-service (SaaS) arena right now where we actually provide a lot of our applications for management, monitoring, development, and test as SaaS. So, this becomes more prevalent as public cloud takes off.

Also, we're looking at what’s going to be important next. What are going to be the technologies and the services that our customers are going to need to be successful in this new paradigm?

Gardner: Are there ways of getting started? Are there resources, places online that folks can go to for gearing up for that future?

Shoemaker: There's a robust cloud community out there today, but HP also has a robust practice around helping our customers plan for those exact things. Our Services group provides workshops, learning engagements, and even planning and execution help for a lot of our largest customers today that are planning and positioning for tomorrow. So, we have that expertise and we're actually actively supporting our customers today.

Gardner: We’ve been talking about gaining total visibility into the services management lifecycle, looking at it through the movement from virtualization to services and sourcing options. We’ve been talking with an HP executive about Cloud Service Automation products and services and how, in the future, total governance is going to become more the norm and more a necessity, as organizations try to avail themselves of more cloud and IT shared-services opportunities.

I want to thank Mark Shoemaker, Executive Program Manager, BTO Software for Cloud at HP. Thanks for joining, Mark.

Shoemaker: Thanks so much, Dana. I appreciate you having us on.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: HP.

Transcript of a BriefingsDirect podcast on overcoming higher levels of complexity in cloud computing through improved management and automation. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Tuesday, May 18, 2010

IT's New Recipe for Success: Modernize Applications and Infrastructure While Taking Advantage of Alternative Sourcing

Transcript of a sponsored BriefingsDirect podcast on making the journey to improved data-center operations via modernization and creative sourcing in tandem.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on improving overall data-center productivity by leveraging all available sourcing options and moving to modernized applications and infrastructure.

IT leaders now face a set of complex choices, as they look for ways to manage their operational budget, knowing that discretionary and capital spending remain tight, even as demand on their systems increases.

One choice that may be the least attractive is to stand still as the recovery gets under way and demands on energy and application support outstrip labor, systems supply, and available electricity.

Economists are now seeing the recession giving way to growth, at least in several important sectors and regions. Chances are that demands on IT systems to meet growing economic activity will occur before IT budgets appreciably open up.

So what to do? Our panel of experts today examines how to gain new capacity from existing data centers through both modernization and savvy exploitation of all sourcing options. By outsourcing smartly, migrating applications strategically, and modernizing effectively, IT leaders can improve productivity, while operating under tightly managed costs.

We'll also look at some data-center transformation examples with some executives from HP to learn how effective applications and infrastructure modernization improves enterprise IT capacity outcomes. And, we'll examine modernization in the context of outsourcing and hybrid sourcing, so that the capacity goals facing IT leaders can be more easily and affordably met, even in the midst of a fast-changing economy.

As we delve into applications and infrastructure modernization best practices, please join me in welcoming our panel: Shawna Rudd, Product Marketing Manager for Data Center Services at HP. Welcome, Shawna.

Shawna Rudd: Thank you.

Gardner: We're also here with Larry Acklin, Product Marketing Manager for Applications Modernization Services at HP. Welcome, Larry.

Larry Acklin: Hello.

Gardner: And, Doug Oathout, Vice President for Converged Infrastructure in HP’s Enterprise Services. Welcome, Doug.

Doug Oathout: Thank you, Dana. I'm glad to be here.

Gardner: Let me start with you, Doug. We're seeing some additional green shoots now across the economy, and IT services are also being taxed by an ongoing data explosion, the proliferation of mobile devices, use of social media, and new interfaces. So, what happens when the supply of budget -- that is to say, the available funding for innovation in new applications -- is lacking, even as the demand starts to pick up? What are some of the options that IT leaders have?

Tackling the budget

Oathout: Dana, with budgets still tight in this economy, but business starting to grow again, IT leaders really need to look strategically at how they're going to tackle their budget problem.

There are multiple sourcing options, and there are multiple modernization tasks, as well as application culling, that they could do to improve their cost structure. What they need to do is start thinking about which major projects they want to take on, so that they can improve their cash flow in the short term while improving their business outcomes in the long term.

At HP, we look at how to source products in ways that are more beneficial -- outsourcing, cloud, and such -- to give a better economic picture, and also at using modernization techniques for applications and infrastructure to improve the long-term cost structures.

At HP we also look at modernization of the software, and we look at outsourcing options and cloud options as ways to improve the financial situation for IT managers.

Gardner: Looking at this historically, have the decisions around outsourcing been made separately from decisions around modernization and infrastructure? Is it now time to bring two disparate decision processes together?

Oathout: Yes. In the past, companies have looked at outsourcing as a final step in IT, versus an alternate step in IT. We're seeing more clients, especially in the tight economy we've gone through, looking at a hybrid model.

How do I smartly source the things that are non-mission-critical or non-business-critical to the outside world, and then keep the stuff that is critical to my business within the four walls of the data center? A hybrid model is evolving between outsourcing and in-sourcing of different types of applications on different types of infrastructure.

Gardner: Let's go to you, Shawna. When we think about the decisions around sourcing, as Doug just pointed out, there seems to be a different set of criteria being brought to that. How do you view the decision-making around sourcing options as being different now than two, three or five years ago?

Rudd: Clients or companies have a wider variety of outsourcing mechanisms to choose from. They can choose to fully outsource or selectively out-task specific functions, which, in most cases, should provide them with substantial savings in operating expenses. Alternatively, as Doug just pointed out, we can provide many transformational and modernization types of projects that don’t require any outsourcing at all. Clients just have a wider variety of options to choose from.

Gardner: To you, Larry. As folks look at their current infrastructure and try to forecast new demands on applications and what new applications are going to be coming into play, are they faced with an either/or? Is this about rip and replace? How does modernization fit differently into this new set of decisions?

Acklin: It's definitely becoming a major challenge. The problem is that if you look purely at outsourcing in order to free up additional investment for innovation, it will only take you so far. It will take you to a point.

There needs to be a radical change in most businesses, because they have such a build-up of legacy technology, applications and so forth. There needs to be a radical change in how they move forward so they can free up additional investment dollars to be put back into the business.

Realigning the business and IT

More importantly, it's necessary to realign the business and the application portfolio, so that they're working together to address the new challenges that everyone is facing. These are challenges around growth: How do you grow so that, when you come out of a tough economic situation, the business is ready to go?

Investors are expecting that your company is going to accelerate into the future, providing better services to your market. How can you do that when your hands are completely tied, based on your current budget?

You know your IT budgets aren't going to increase rapidly, that there may be a delay before that can happen. So how do you manage that in the interim? That’s really where the combination of modernization and using various sourcing options is going to add additional benefit to be an enabler to get you to that agility that you want to get to.

Gardner: Larry, what would be some of the risks, if this change or shift in thinking and approach doesn't happen? What are some of the risks of doing nothing?

Acklin: We call that "the cost of doing nothing." That's the real challenge. If you look at your current spend and how you're spending your IT budgets today, most see a steady increase in expenses year over year, but aren't seeing the increases in IT budgets. By doing nothing, that problem is just going to get worse and worse, until you're at a point where you're just running to keep the lights on. Or, you may not even be able to keep up.

The number of changes that have been requested by the business continues to grow. You're putting bandages on your applications and infrastructure to keep them alive. Pretty soon, you're going to get to a point, where you just can't stay ahead of that anymore. This is the cost of doing nothing.

If you don’t take action early enough, your business is going to have expectations of your IT and infrastructure that you can't meet. You're going to be directly impacting the ability of the company to grow. The longer you wait to get started on this journey of freeing up resources and enabling the integration between your portfolio and your business, the more difficult and challenging it's going to be for your business.

Gardner: Doug and Shawna, it sounds as if combining the decisions around modernizing your infrastructure and applications with your sourcing options is, in a sense, an insurance policy against the unknown. Is that overstating the opportunity here, Shawna?

Rudd: I don’t think so. Obviously, to Larry’s point, it's not going to get any cheaper to continue to do nothing. Supporting legacy infrastructure and applications is going to require more expensive resources. It's going to require more effort to maintain.

The same applies for any non-virtualized or unconsolidated environment. It costs more to manage more boxes, more software, more network connections, more floor space, and also for more people to manage all of that.

Greater risk

The risk of managing these more heterogeneous, more complex environments is going to be greater -- a greater risk of outages -- and the expense to integrate everything and try to automate everything is going to be greater.

Working with a service provider can help provide a lot of that insurance associated with the management of these environments and help you mitigate a lot of that risk, as well as reduce your cost.

Gardner: Doug, we can pretty safely say that the managed service providers out there haven’t been sitting around the past two or three years, when the economy was down. Many of them have been building out additional services, offering additional data and application support services. So, IT departments are now not only competing against themselves and their budgets, they are competing against managed service providers. How does that change somebody’s decision processes?

Oathout: It actually gives IT managers more of a choice. If you look at what's critical to your business, what's informational to your business, and what the workflows are that go on in your business, IT managers have many more choices about where they want to source those applications or those job functions.

As you look at service providers or outsourcers, there is a better menu of options out there for customers to choose from. That better menu allows you to compare and contrast yourself from a cost, service availability, and delivery standpoint, versus the providers in the marketplace.

We see a lot of customers really looking at: how do I balance my needs with my cost and how do I balance what I can fit inside my four walls, and then use outsourcing or service providers to handle my peak workloads, some of my non-critical workloads, or even handle my disaster recovery for me?

So IT managers have choices on where to source, but they also have choices on how to handle the capacity that fits within their four walls of the data center.

Gardner: Let’s look at how you get started. What are some of the typical ways that organizations explore sourcing options and modernization opportunities? As I understand it, you have a methodology, a basic three-step approach: outsource, migrate, and modernize.

Let’s take each one of these and start with outsourcing smartly. Shawna, what does that mean, when we talk about these three steps in getting to the destination?

Rudd: From an outsourcing standpoint, it’s simply one mechanism that clients can leverage to help facilitate this transformation journey. It helps generate savings that can fund other, perhaps more significant, modernization or transformational efforts.

We help clients maintain their legacy environments and increase asset utilization, while undertaking those modernization and transformation efforts. From an outsourcing standpoint, the types of things that a client can outsource could vary, and the scope of that outsourcing agreement could vary -- the delivery mechanism or model or whether we manage the environment at a client’s facility or within a leveraged facility.

Bringing value

All those variables can bring value to a client, based upon their specific business requirements. But then, as the guys will talk about in a second, the migration and the modernization yield additional savings for those clients’ businesses.

So, from an outsourcing standpoint, it’s that first thing that will help generate savings for a client and can help fund some of the efforts that will generate incremental savings down the road.

Gardner: The second step involves migration. Who wants to handle that, and what does that really mean?

Oathout: Let me start and then I'll hand it over to Larry. When we talk about migration, we can look at different types of applications that migrate simply to modern infrastructure. Those applications can be consolidated onto fewer platforms into a more workflow-driven automated process.

We can get a 10:1 consolidation ratio on servers. We can get a 5-6:1 consolidation ratio on storage platforms. Then, with virtual connectivity or virtual I/O, we can actually have a lot less networking gear associated with running those applications on the servers and the storage platform.

So, if we look at just standard applications, we have a way to migrate them very simply over to modern infrastructure, which then gives you a lower cost point to run those applications.
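As a rough illustration of what those consolidation ratios imply for an infrastructure footprint, here is a quick back-of-the-envelope calculation; the starting counts are invented for the example.

# Illustrative only: assumed starting counts, using the 10:1 server and ~5:1
# storage consolidation ratios quoted above (low end for storage).
servers_before, storage_before = 500, 60
server_ratio, storage_ratio = 10, 5

servers_after = -(-servers_before // server_ratio)    # ceiling division
storage_after = -(-storage_before // storage_ratio)

print(f"Servers: {servers_before} -> {servers_after}")           # Servers: 500 -> 50
print(f"Storage platforms: {storage_before} -> {storage_after}") # Storage platforms: 60 -> 12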

Gardner: Now, not all applications are created or used equally. Is there a difference between what we might refer as core or context applications, and does that come into play when we think about this migration?

Oathout: Oh, it definitely does. There are some core applications that are associated with certain platforms that we can consolidate on the bigger boxes, and you get more users that way. Then, there are context applications, which are more information-driven, and which can easily continue to grow. That's one of the application areas that continues to grow, and you can't see how fast it's going to grow, but you can scale that out onto modern platforms.

As you have more work, you have more information, and you can grow those systems over time. You don't have to build the humongous systems to support the application, when it’s just starting out. You can build it over time.

There's a lot we can do with the different types of applications. When you look at modernizing your applications and look at modernizing infrastructure, they have to match. If you have a plan, you don't have to buy extra capacity when you start. You can buy the right capacity then grow it, as you need it.

Specific path

Acklin: Let me add a little bit to that. When we look at these three phases together, we ordered them this way for a specific path to minimize the risk as part of it. Outsourcing can drive some initial savings, maybe up to 40 percent, depending on the scope of what you're looking at for a client. That's a significant improvement on its own.

Not every client sees that high of a saving, but many do. The next step, that migration step that we’ve talked about, where we’re also migrating over to a consolidated infrastructure, allows you to take immediate actions on some of your applications as well.

In that application space, you can move an application that may be costing you significant dollars on a legacy platform, whether in license fees or due to a lack of skilled resources and so forth. Migrating it while keeping the application intact, running on that new infrastructure, can save you significant dollars, in addition to the initial work you did as part of the outsourcing.

The nice thing, as you do these things in parallel, is that it's a phased journey you're going through, where they all integrate. But, you don't have to do it that way. You can separate them. You can do one without the other, but you can work on this whole holistic journey throughout.

The migration of those applications basically leaves the applications intact, but allows them to have a longer lifespan than they typically would. A great example is an application that you eventually want to replace with an ERP system of some sort, or whose business process is going to change in some way in the future, but where you still need to do something about the cost problem today.

It's a great middle step. We can still drive significant 40-50 percent savings just through this migration phase of moving that application onto the new infrastructure environment and changing the way the cost structures around software and so forth are allocated to it. It frees up short-term gains that can be reinvested in the entire modernization journey that we're talking about.

Gardner: So, if I understand that correctly, when we get to the modernization phase, we've been able to develop the capacity and develop a transformation of the budget from operations into something that can be devoted to additional new innovation capacity.

Acklin: Right. Then as you continue that journey, you're starting to get your cost structures aligned and you're starting to get to a place where your infrastructure is now flexible and agile. You’ve got the capacity to expand. When you move into that modernized phase, you're really trying to change the structure of those applications, so that you can take advantage of the latest technology to run cloud computing and everything operating as a service.

Future technologies allow us to enable the business for growth in the marketplace. Right now, many of our applications handcuff the business. It takes months to get a new product or service out to the market. By changing over to a service-oriented model, you're saving a lot of cost component here, but you're adding that agility layer to your applications and allowing your business to expand and grow.

Gardner: Before we go to some examples, I'm curious about what happens. What benefits can occur when you play these three aspects of this journey together?

There is sort of a dance, if you will, of three partners. When you apply them to the specific needs, requirements, and growth patterns within specific companies, what types of benefits do we get? Is this about switching to a more pay-as-you-go basis? Is this about reduced labor or improved automation?

Let's start with you, Shawna. What are some of the paybacks that companies typically get when they do this correctly?

Some 30 percent savings

Rudd: They can achieve about 30 percent savings, obviously depending on what they outsource and how much they outsource. Those savings will be achieved through the use of best-shore resources, the right-sizing of their hardware and software environments, consolidation, virtualization, automation, and standardized processes and technologies.

And then, they'll achieve incremental cost savings. As Larry said, it can be upward of 40-60 percent from migrating some of that low-hanging fruit -- those applications that are easily lifted and shifted to lower-cost platforms. So, they'll reduce the associated IT and application expenses, as well as the ongoing management expense. Then, as they continue to modernize those environments, they'll achieve additional efficiencies and potentially some additional savings.
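Purely as an arithmetic illustration of how those stages can stack, here is a quick calculation against an assumed $10 million annual run cost; the baseline figure and the share of spend that is easily migrated are invented for the example, while the percentages are the ranges quoted in the discussion.

# Illustrative arithmetic only; all absolute figures are assumptions.
baseline = 10_000_000                                 # assumed annual operating cost
after_outsourcing = baseline * (1 - 0.30)             # ~30% from outsourcing
migratable_share = 0.40                               # assumed share of remaining spend that is easily migrated
migration_savings = after_outsourcing * migratable_share * 0.50   # 40-60% on that slice; use 50%
after_migration = after_outsourcing - migration_savings

print(f"After outsourcing:  ${after_outsourcing:,.0f}")   # $7,000,000
print(f"After migration:    ${after_migration:,.0f}")     # $5,600,000
print(f"Total reduction:    {1 - after_migration / baseline:.0%}")  # 44%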

In that scenario, in which they've combined everything and work with a single-source provider to help them go through that journey, the transitions, the hand-offs, and all of that should go much more smoothly.

The risk to the client, to the client's business, should be better mitigated, because they're not having to coordinate with four or five different vendors, internal organizations, etc. They have one partner who can help them and can handle everything.

Gardner: Doug, to you. When this is done properly, what are some of the high-level payoffs? What changes in terms of productivity at the most general level?

Oathout: The big thing that changes, Dana, is that when you go through this journey, at the end IT is aligned to the business. So, when a business wants to bring on a new application or a new product line, IT can then respond and stand up a new application in hours instead of months.

They can flex the environment to meet a marketing campaign, so you have the ability to do the transactions when a major TV advertisement goes on or when something happens in the industry. You get the flexibility and you get the efficiencies, but what you really get is IT is acting as a service provider to the line of business, and IT is now a partner with the business versus being a cost center to that business.

That's the big transformation that happens through this three-step process. IT is now seen as adding value to the business versus just being the cost center, and the paybacks are unbelievable.

You move from deploying an application in months to two hours. The productivity of your IT department gets two or three times better. You can now plan to run your data centers or your IT at normal workloads. Then, when peaks come in, you can outsource some of the work to service providers or to your outsource partner.

Your actual IT is running at average load, and you don't have to put all the extra equipment in there for the peak. You actually outsource it, when that peak comes. So, at the end of this journey, there is a whole different business model that is much more efficient, much more elastic, and much more cost-effective to run the business of the future.

Gardner: Larry, to you. What are your more salient takeaways in terms of benefits from doing this all correctly?

Don't have to wait

Acklin: I’ll just add to what Shawna and Doug have said already. One of the bigger benefits you achieve is that the business doesn't have to wait. Many times, if you're a CIO, you have to tell your business owners, "You've got to wait. I need to work through this. I'm in the midst of this outsourcing operation. I'm trying to change the way we're providing service to the business." That can take time.

The idea of putting the outsource, migrate, and modernize phases together is that they're not sequential. You don't have to do one, then the other, and then the other. You can actually start these activities in parallel. So, you can start giving benefits back to the business immediately.

For example, while you're doing the outsourcing activities and getting that transition set up, you're starting to put together what your architecture is going to look like in the future state. You have to plan how the business processes should be implemented within the applications and assess the strategic value of each application you currently have in your portfolio.

You're starting to build that road map of how you're going to get to the end state. And then, even as you continue through that cycle, you're constantly providing benefits back to both the business and IT at the same time.

You really build that partnership between the two. So, when you reach the end, you have a completely well-oiled machine -- both the business and IT -- working together to reach their objectives.

Gardner: Let’s look at some examples that we mentioned earlier. This can vary dramatically from organization to organization, and coming at this from different angles means that they might prioritize it in different ways. Perhaps we can look at a couple of examples to illustrate how this can happen and what some of the payoffs are. Who wants to step up first for an example on doing these three steps?

Oathout: I'll go first. One example that we worked on very closely in services was with our customer France Telecom. France Telecom transitioned 17 data centers to two green data centers. Their total cost of ownership (TCO) calculation said that they were going to save €22 million (US $29.6 million) over a three-year period.

They embarked on this journey by looking at how they were going to modernize their infrastructure and how they were going to set up their new architecture so that it was more flexible to support new mobile phone devices and customers as they came online. They looked at how to modernize their applications so they could take advantage of the new converged infrastructure, the new architectures, that are available to give them a better cost point, a better operational expense point.

France Telecom is a typical example of consolidating 17 data centers to two, and it's not unusual, when a company goes through this three-step process, for it to make a significant change to its IT footprint and to how it does business, so it can support lines of business that need new applications and new users brought online relatively quickly.

Gardner: Doug, how would you characterize the France Telecom approach? Which of the three did they emphasize?

Emphasis on migration

Oathout: They really emphasized the migration as the biggest one. They migrated a number of applications to newer architectures and they also modernized their application base. So, they focused on the last two, the modernization and the migration, as the key components for them in getting their cost reductions.

Gardner: Okay, any other examples?

Acklin: I'll talk about another one. The Ministry of Education in Italy (MIUR) is another good example, where a client has gone on this whole journey. In that situation, they had outsourced some of their capabilities to us -- some of their IT management. But, they were challenged with some difficult times. The economy hit them hard, and being a government agency, they were under a lot of pressure to consolidate IT departments globally.

It’s a very, very large organization built up over the years. Most of the applications were built back in the early 1980s or earlier than that. They were mainframe-based, COBOL, CICS, DB2 type applications, and they really weren’t servicing the business very well. They were really making it a challenge.

In addition to all of the legacy technologies, the CIO also had the challenge of consolidating IT departments. They had distributed IT departments. So, they had to consolidate their IT departments as part of this activity.

On top of all that, they were given the challenge to reduce their headcount significantly due to the economic crisis. So, it became a very urgent journey for this client to go on, and they began going through that. Their goal was, as I said, reducing IT, improving agility, being able to respond to change, and doing a lot more with a lot less people in a consolidated manner.

As they went through their transformation, they went through the whole thing. They assessed what they had. They put their strategy together and where they wanted to go. They figured out what applications they needed and how they were going to operate.

They optimized the road map for them to reach their future state, established a governance program to keep everything in alignment while they went on this journey, and then they executed this journey.

They used a variety of methods for modernizing their applications and migrating over to the lower cost platforms. Some of them they re-architected into new service-based models to provide services to their students and teachers through the web.

At the end they ended up seeing a 2X productivity improvement and return on investment (ROI) in less than 18 months. They reduced their app support by over 30 percent and they reduced their new development cost by close to 40 percent.

Those are significant challenges that the CIO took on, and the combination of improving their applications and infrastructure through outsourcing and modernization model helped them achieve their goal. The CIO will tell you that they could never have survived all the pressure they were under without going on a journey like this.

Gardner: Shawna, do we have a third example?

No particular order

Rudd: This is an example, not naming a specific client, but also making another point, that the things we're talking about don't have to occur in this particular order -- this one, two, three step order.

I know of other clients for whom we've saved around 20 percent by outsourcing their mainframe environments. Then, after successfully completing the transition of those management responsibilities, we've been able to reduce their cost by another 20 percent simply by identifying opportunities for code optimization. This was duplicate code that could be eliminated, dead code, or runtime inefficiencies, which enabled us to reduce the number of apps they required to manage their business. They reduced the associated software cost, support cost, etc.

Then there were other clients for whom it made more sense for us to consider outsourcing after the completion of their modernization or migration activities. Maybe they already had modernization and migration efforts underway or they had some on the road map that were going to be completed fairly quickly. It made more sense to outsource as a final step of cost reduction, as opposed to an upfront step that would help generate some funding for those modernization efforts.

Gardner: For those folks who see the need in their organization and understand the rationale behind these various steps, where do they get started, and how can they find more information? Let me start with you, Doug. Are the information resources easily available?

Oathout: Well, Dana, there are a ton of different places to start. There's your HP reseller, the HP website, and HP Services. If a customer is thinking about embarking on this journey, I'd contact HP Services and have them come out and do a consulting engagement or an assessment to lay out the steps required.

If you're embarking on the modernization journey, contact your HP reseller or HP salesperson and have them show you how to do consolidation and virtualization to really modernize your infrastructure. If the conversation is about applications, contact HP Services. They can look at your application portfolio and show you the experience they have in modernizing those applications or migrating them to modern equipment.

Gardner: Any additional paths to how to start from your perspective, Larry?

Acklin: Let me add to that. If you're in a situation where you're thinking about modernization but you're not positive -- you're still trying to get a good understanding of what's involved -- come to one of these trainings. We offer something called the Modernization Transformation Experience Workshop. It's basically a one-day activity workshop, a slide-free environment, where we take you through the whole journey you'll go on.

We'll cover everything from how to figure out what you have and what you're planning, to how to build the road map for getting to the future state, as well as all the different ways it will impact your business and enterprise along the way, whether you're talking technology infrastructure, architecture, applications, business processes, or even the change management of how it impacts your people.

We go through that entire journey in this workshop, so you come out understanding what you're getting yourself into and how it can really affect you as you go forward. But that's not the only starting point. You can also jump into this modernization journey at any point.

Maybe, for example, you've already figured out that you needed to do this, maybe you've tried some things on your own in the past, but really need to get external help. We have assessment activities that allow us to jump in at any point along this journey.

Whether it's to help you see where there are code vulnerabilities within your existing applications -- visually showing you what those things look like and where the opportunities for modernization are -- or to do a full assessment of your environment and figure out how your apps and your infrastructure are working for your business or, in most cases, not working for your business, it allows you to jump in at any stage throughout that whole journey.

As Doug mentioned, HP can help you figure out the right place for beginning that journey. We have hundreds of modernization experts globally who can help you figure out where to start.

Gardner: Do we have any other closing thoughts on the process of getting started?

Acklin: Let me just mention one other item. We talked about this cost of doing nothing. Don't let any fears or doubts about this journey stop you from beginning the journey. There are many things that can get you in trouble with that cost of doing nothing. That time is coming for you, when you're not going to be able to make those changes. So, don't let those fears stop you from going on that journey.

An example of this is financial. Many of the clients we talk to don't know how they would pay for a journey like this. Actually, you have a lot of options right in front of you that you can take advantage of. Our modernization consultants can give you some good methods for covering this, for putting together things like these three-phase activities, or for going on these journeys in ways that can still work for you, even in tough financial times.

Gardner: Great. We've been talking about improving overall data-center productivity by leveraging available sourcing options, as well as moving to modernized applications and infrastructure. I want to thank our guests on today's panel. We've been here with Shawna Rudd, Product Marketing Manager for Data Center Services at HP. Thank you, Shawna.

Rudd: Thank you.

Gardner: And Larry Acklin, Product Marketing Manager for Application Modernization Services at HP. Thank you, Larry.

Acklin: Thank you.

Gardner: And Doug Oathout, Vice President of Converged Infrastructure at HP Enterprise Services. Thanks, Doug.

Oathout: Thank you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a sponsored BriefingsDirect podcast on making the journey to improved data-center operations via modernization and creative sourcing in tandem. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

You may also be interested in:

Thursday, February 11, 2010

Smart Grid for Data Centers Better Manages Electricity to Slash IT Energy Spending, Frees-Up Wasted Capacity

Transcript of a BriefingsDirect podcast on implementing energy efficiency using smart grids in enterprise data centers to slash costs and gain added capacity.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on gaining control over energy use and misuse in enterprise data centers. More often than not, very little energy capacity analysis and planning is being done on data centers that are five years old or older. Even newer data centers don’t always gather and analyze the available energy data being created amid all of the components.

Nowadays, smarter, more comprehensive energy planning tools and processes are being directed at this problem. It's a lifecycle approach, taking data centers from data gathering through to full automation benefits. Automation software for capacity planning and monitoring has been designed and improved to best match long-term energy needs and resources in ways that cut total cost, while gaining the truly available capacity from old and new data centers.

These so-called smart grid solutions jointly cut data center energy costs, reduce carbon emissions, and can dramatically free up capacity from overburdened or inefficient infrastructure.

Such data gathering, analysis, and planning can break the inefficiency cycle that plagues many data centers, where hotspots and cooling are mismatched, and underused or unneeded servers burn energy needlessly. Done well, such solutions as Hewlett-Packard's (HP) Smart Grid for Data Center can increase capacity by 30-50 percent just by gaining control over energy use and misuse.

We're here today with two executives from HP to delve more deeply into the notion of Smart Grid for Data Center. Please join me in welcoming Doug Oathout, Vice President of Green IT Energy Servers and Storage at HP. Welcome, Doug.

. . . The drivers behind data center transformation are customers who are trying to reduce their overall IT spending . . .



Doug Oathout: Thank you, Dana.

Gardner: We're also here with John Bennett, Worldwide Director of Data Center Transformation Solutions at HP. Welcome back to the show, John.

John Bennett: Thank you very much, Dana. Glad to be here.

Gardner: John, let me start with you, if you don’t mind. Let’s set up a little bit of the context for this whole energy lifecycle approach. It’s not isolated. It’s part of a larger set of trends that we loosely call data center transformation (DCT). What’s going on with DCT, and how important is the role these energy conservation approaches play?

Bennett: DCT, as we’ve discussed before, Dana, is focused on three core concepts, and behind them, energy is another key focus for that work. But the drivers behind data center transformation are customers who are trying to reduce their overall IT spending, either flowing it to the bottom line or, in most cases, trying to shift that spending away from management and maintenance and onto business projects, business priorities, and innovation in support of the business and business growth.

We also see increasing mandates to improve sustainability. It might be expressed as energy efficiency, handling energy costs more effectively, or addressing green IT. The issue that customers have in executing on this, of course, is that their facilities, their people, their infrastructure and applications -- everything they are spending and doing today, if they don’t change it -- can get in the way of realizing these objectives.

Data center strategy

So, DCT is really about helping customers build out a data center strategy and an infrastructure strategy that is aligned to their business plans, goals, and objectives. That infrastructure might be a traditional shared infrastructure model. It might be a fabric infrastructure model, of which HP’s converged infrastructure is probably the best and most complete example in the marketplace today. And it may indeed be moving to private cloud or, as I believe, some combination of the above for a lot of customers.

The secret is doing so through an integrated roadmap of data-center projects, like consolidation, business continuity, energy, and such technology initiatives as virtualization and automation.

Energy has definitely been a major issue for data-center customers over the past several years. Increased computing capability and demand have increased the power needed in the data center. Many data centers today weren’t designed for modern energy consumption requirements. Even data centers designed five years ago are running out of power as they move to these dense infrastructures. Of course, older facilities are even further challenged. So, customers can address energy by looking at their facilities.

More recently, we in the industry have been focused on the infrastructure and layout of the data center. Increasingly, we're finding that we need to look at management -- managing the infrastructure and managing the facilities in order to address the energy cost issues and the increasing role of regulation and to manage energy related risk in the data center.

That brings us not only to energy as a key initiative in DCT, but on Smart Grid for Data Center as a key way of managing it effectively and dynamically.

I think the best control of energy is probably better described as built-in and not layered on.



Gardner: You know, John, it’s interesting. When I hear you describe this, it often sounds as if you're describing security. I know that sounds odd, but security has some of the same characteristics. You can’t look at it in isolation. It needs to be taken in as a comprehensive view, with risks attached, and it becomes management-intensive. Maybe we can learn from the way people approach security. Perhaps they should also be thinking along similar lines when they approach energy as a problem?

Bennett: That’s an interesting analogy, and the point I would add to that, Dana, is that the best security is built-in, not layered on. I think the best control of energy is probably better described as built-in and not layered on.

Gardner: Let’s go to Doug. Doug. Tell me what the problem is out there. What are folks facing and how inefficient are their data centers really? What kind of inefficiency is common now?

Oathout: Dana, what we're really talking about is a problem around energy capacity in a data center. Most IT professionals or IT managers never see an energy bill from the utility. It's usually handled by the facilities side. So they never really concentrate on solving the energy consumption problem.

Problem area

Where problems have arisen in the past is when a facility person says that they can’t deploy the next server or storage unit, because they're out of capacity to build that new infrastructure to support a line of business. They have to build a new data center. What we're seeing now is customers starting to peel the onion back a little bit, trying to find out where the energy is going, so they can increase the life of their data center.

To date, very few clients have deployed comprehensive software strategies or facility strategies to corral this energy consumption problem. Customers are turning their focus to how much energy is being absorbed by what, and then how they get the capacity of the data center increased so they can support the new workloads.

The way to do that is to get the envelope cleared up so we know how much is left. What we're seeing today is that software, hardware, and people need to come together in a process that John described in DCT, an energy audit, or energy management.

All those things need to come together, so that customers can start taking apart their data center, from an analysis perspective, to find out where they are either over-provisioned or under-provisioned from a capacity standpoint, so they know where all the energy is going. Then, they can take steps to get more capability out of their current solution or out of their installed equipment by measuring and monitoring the whole environment.

Gardner: John, we’ve already done a podcast on converged infrastructure, and I don’t want to belabor that point too much, but it strikes me that going about this data center energy exercise in alignment with a converged-infrastructure approach would make a lot of sense. We're starting to see commonality in ways we hadn’t seen before.

Bennett: There’s very strong commonality there, and I’ll ask Doug to address that in a minute. When I described the best energy solution as being built-in, that really captured the essence of what we're doing with converged infrastructure. It’s not only integrating the elements of the data center, but better instrumenting them from a management and automation perspective. One of the key drivers for making management and automation decisions about provisioning and workload locations will be energy cost and consumption. Doug?

Oathout: Converged infrastructure is really about deploying IT in the optimal way to support a workload. We talk about energy and energy management. You're talking about doing the same thing. You want to deploy that workload to a server and storage networking environment that will do the amount of work you need with the least amount of energy.

The concept of converged infrastructure applies to data center energy management. You can deploy a particular workload onto an IT infrastructure that is optimally designed to run efficiently, and to keep running efficiently, so that you know you're getting the most productive work from the least energy, with the most energy-efficient equipment and infrastructure sitting underneath it.

An example of this is identifying what type of application you want to run on your infrastructure and then deploying the right amount of resources to run that application. You're not deploying more and not deploying less, but deploying the optimal amount of resources, so that you know you're getting the best productivity for the energy budget you have.

Adding resources

As that workload grows over time, you have the capability built into the software and into the monitoring, so that you can add more resources to that pool to run that application. You're not over-provisioning from the start and you're not under-provisioning, but you're getting the optimal settings over time. That's what's really important for energy, as well as efficiency, as well as operating within a data center environment.

You want to keep it optimal over time. You don’t want to set up silos to start. You don’t want to over-provision to start. You want to be able to run your infrastructure optimally long-term. Therefore, you must have tools, software, and hardware that are not only efficient, but can be optimized and run in an optimized way over a long period of time.
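
To make that idea of growing and shrinking a pool concrete, here is a minimal sketch in Python -- purely illustrative, not an HP product API -- where the function name, thresholds, and demand figures are all assumptions invented for the example. It resizes a shared pool so utilization stays in a band, rather than over-provisioning at the start.

```python
# Minimal sketch, not an HP product API: resize a shared resource pool so
# utilization stays in a target band instead of over-provisioning up front.
# The function name, thresholds, and demand figures are all assumptions.

def rebalance_pool(active_servers, demand_units, units_per_server,
                   low=0.45, high=0.75):
    """Return how many servers the pool should run for this level of demand."""
    if active_servers == 0:
        return max(1, -(-demand_units // units_per_server))  # ceiling division

    utilization = demand_units / (active_servers * units_per_server)
    if utilization > high:                          # workload grew: add capacity
        return active_servers + 1
    if utilization < low and active_servers > 1:    # shrink and free the energy
        return active_servers - 1
    return active_servers                           # already in the optimal band

# Example: demand (in arbitrary work units) ramps up and back down over a day.
servers = 2
for demand in [120, 180, 260, 320, 300, 200, 110]:
    servers = rebalance_pool(servers, demand, units_per_server=100)
    print(f"demand={demand:4d} -> pool size {servers}")
```

The point of the sketch is simply that the sizing decision is revisited continuously, which is what lets energy track the workload instead of the initial provisioning guess.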

Gardner: Another trend in the data center nowadays is moving toward shared-services approaches, viewing yourself as a service provider, and billing based on these workloads and on the actual demand. It seems to me that energy needs to fit into that as well. Perhaps, as we think about private cloud, where we’ve got elasticity of resources, energy needs to be elastic, along with the workload allocation. So, quickly, John, what about the notion of shared services and how energy plays into that as well as this private cloud business?

Bennett: It definitely plays, as both you and Doug have highlighted. As one moves into a private-cloud model, it accentuates the need for a better real-time perspective on energy consumption -- what devices consume and what they're capable of -- in order to manage the assets of the private cloud efficiently and effectively. When you have a private cloud providing a broader set of services, you clearly want to minimize your own cost structures. That calls for good energy management as well as other things. Doug?

Oathout: Yeah. With a private-cloud implementation, and how a converged infrastructure would support it, you want to bring the amount of resources you need for an application online, but you also want to have resources available to run a separate set of applications and bring those online as well.

The living and breathing of a data center is really what we're talking about with a private-cloud infrastructure on a converged infrastructure.



You're managing a group of resources as a pool, so that over time you can bring resources up to run a particular application and then bring them down and put the resources back into the pool, so they can be deployed for another application.

The living and breathing of a data center is really what we're talking about with a private-cloud infrastructure on a converged infrastructure. That living and breathing capability is built within the processes and within the infrastructure, so that you can run applications in an optimal way.

Gardner: It's my understanding that some of the public-cloud providers nowadays have built their infrastructure with conservation in mind, because every penny counts when you're in a lower-margin, shared-services business. They can track every watt. They know where it's all going. They’ve built for that.

Now, what about some of these older organizations, with data centers five years old or more? What can be done to retrofit what's out there to be more energy efficient? How does this work for the older facilities?

Oathout: The key to that, Dana, is to understand where the power is going. One of the first things we recommend to a client is to look at how much power is being brought into a data center and then where it is going. You can easily do that through a facility survey or a facility workshop, but the other thing you want to look at is your IT. As you’re upgrading your IT, all the new IT equipment -- whether it be servers, storage, or networking -- has power management and reporting built into it.

Collect information

What you want to do is start collecting that information through software to find out how much power is being absorbed by the different pieces of IT equipment and associate that with the workloads that are running on them. Then, you have a better view of what you're doing and how much energy you're using.

Then, you can do some analysis and use applications like HP SiteScope to do performance analysis, to ask, "Could I match that workload to some other platform in the infrastructure, or am I running it in an optimal way?"

Over time, you can migrate some of your older legacy workloads to more efficient, newer IT equipment, and thereby build up a buffer in your data center, so that you can then deploy new workloads in that same data center.

It's really using a process or an assessment to figure out how much energy you're using and where it's going and then deploying to this newer equipment with all the instrumentation built in, along with software to understand where your energy is going.

It's the way to get started, but it's also the way to keep optimizing, in an automated way, over time. You use that software to your benefit, so that you're freeing up capacity to support the new workloads that the businesses need.
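
As a rough illustration of that collect-and-analyze loop, the sketch below uses plain Python with invented readings -- it is not HP SiteScope or any HP interface -- to total reported power by workload and rank hosts by watts per unit of work, the kind of view that surfaces legacy systems worth migrating to newer equipment.

```python
# Illustrative only: aggregate reported power draw per host, associate it with
# the workload running there, and rank hosts by energy cost per unit of work.
# Hosts, workloads, and readings are invented sample data, not real telemetry.

from collections import defaultdict

# (host, workload, average_watts, work_units_per_hour)
readings = [
    ("legacy-01", "billing",   410, 120),
    ("legacy-02", "reporting", 395,  90),
    ("new-01",    "billing",   210, 150),
    ("new-02",    "web",       180, 140),
]

watts_per_unit = {}
power_by_workload = defaultdict(float)

for host, workload, watts, units in readings:
    watts_per_unit[host] = watts / units      # energy cost of useful work
    power_by_workload[workload] += watts      # where the power is going

print("Power by workload:", dict(power_by_workload))

# Hosts doing the least work per watt are the first migration candidates.
for host, wpu in sorted(watts_per_unit.items(), key=lambda kv: -kv[1]):
    print(f"{host}: {wpu:.2f} W per work unit")
```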

The energy curve today is growing at about 11 percent annually, and that's the amount IT is spending on energy in a data center.



Bennett: That's really key, Doug, as a concept, because the more you do at this infrastructure level, the less you need to change the facilities themselves. Of course, the issue with facilities-related work is that it can affect both quality of service and outages and may end up costing you a pretty penny, if you have to retrofit or design new data centers.

Gardner: As I understand it now, we're talking about an initial payback, which would be identifying waste, hotspots, and right cooling approaches, getting some added capacity as a result, while perhaps also cutting cost. But, over time, there's a separate distinct payback, which is that you can control your operational costs and keep them at a lower percentage of your total cost of IT spend. Does that sound about right?

Oathout: That is right, Dana. You can actually decrease the slope of the energy curve. The energy curve today is growing at about 11 percent annually, and that's the amount IT is spending on energy in a data center.

Over time, if you implement more efficient IT, you can decrease that slope to something much less than 11 percent growth. Also, as you increase the capacity in your data center within the same power envelope, you're getting a much more efficient infrastructure running in that envelope -- you're effectively running that IT equipment on free energy, because you’ve freed up that energy from something else.

The idea of decreasing the slope, or decreasing your budget, is the start, but long term you're going to get more workload for the same budget. You can say the same thing for the IT management budget as well. What you're trying to do is get more efficiency out of your IT and out of your energy budget to support future workloads.
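
To show what flattening that slope is worth, here is a back-of-the-envelope calculation. The $1 million baseline, the 4 percent comparison rate, and the five-year horizon are assumptions for illustration; only the 11 percent growth figure comes from the discussion.

```python
# Back-of-the-envelope arithmetic with an assumed baseline energy bill.
baseline_spend = 1_000_000   # assumed annual data-center energy spend, dollars
years = 5

for growth in (0.11, 0.04):  # cited 11% growth vs. an assumed flattened 4% slope
    future = baseline_spend * (1 + growth) ** years
    print(f"{growth:.0%} annual growth -> about ${future:,.0f} in year {years}")

# Roughly $1.69M vs. $1.22M after five years. At 11 percent the bill doubles
# in about 6.6 years (ln 2 / ln 1.11), which is why bending the curve matters.
```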

Gardner: And, the insight that you gain from implementing these sensors and tracking and automation, the ability to employ capacity-planning software, can bring out some hard numbers that allow you to be more predictable in understanding what your energy requirements will be, regardless of whether you are growing, staying the same, or even if you need to downsize your company.

Those numbers, that visibility, can be applied to other asset allocations and important decisions in the enterprise, around such things as carbon taxes and caps, as well as facilities, and even thinking about alternative energy sources.

Different approaches

Oathout: There are a lot of different ways to use green IT. We’ve seen customers consolidate their infrastructure. They took a number of servers, and the facilities associated with that server and storage environment, and minimized it down to a level that was very usable.

It gave the same service-level agreement (SLA) to their lines of business, and they received energy credits from governments. They could then use those energy credits for monetary reasons or for conservation reasons. We also see customers, as they make these environmental changes or policies, look for ways to better demonstrate to their clients that they are energy aware or energy efficient.

A lot of our clients use consolidation studies or energy efficiency studies as ways to show their clients that they are doing a very good job in their infrastructure and supporting them with the least possible environmental impact.

We see customers getting certificates, but also using energy consumption reductions as a way to show their clients that they're being green or environmentally friendly, just as you'd see a customer looking at a transportation company and how energy efficient it is in transporting goods. We see a lot of clients using energy efficiency in multiple ways.

Gardner: We've talked about Smart Grid for Data Centers several times. Now, let's drill down and describe exactly what it is. What are we talking about? What is HP offering in this category?

It's really about visualizing that data, so you can take action on it. Then, it's about setting up policies and automating those procedures to reduce the energy consumption or to manage energy consumption that you have in the data center.



Oathout: Smart Grid for Data Centers gives a CIO or a data-center manager a blueprint to manage the energy being consumed within their infrastructure. The first thing that we do with a Data Center Smart Grid is map out what is hooked up to electricity in the data center, everything from PDUs, UPSs, and air handlers to the IT equipment -- servers, networking, and storage. It's really understanding how that all works together and how the whole topology comes together.

The second thing we do is visualize all the data. It's very hard to say that this server, that server, or that piece of facilities equipment uses this much power and has this kind of capacity. You really need to see the holistic picture, so you know where the energy is being used and understand where the issues are within a data center.

It's really about visualizing that data, so you can take action on it. Then, it's about setting up policies and automating those procedures to reduce the energy consumption or to manage energy consumption that you have in the data center.

Today, our servers and our storage are much more efficient than the ones we had three or four years ago, but we've also added the capability to power cap a lot of the IT equipment. Not only can you get an analysis that says, "Here is how much energy is being consumed," you can actually set caps on the IT equipment that say you can’t use more than this. Not only can you monitor and manage your power envelope, you can get a very predictable one by capping everything in your data center.

You know exactly how much the max power is going to be for all that equipment. Therefore, you can do much better planning. You get more efficiency out of your data center, and you get more predictable results, which is one of the things that IT really strives for -- from meeting an SLA to getting those predictable results, day in and day out.
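
The planning math behind that predictability is simple enough to sketch. The wattages below are invented and the snippet is not tied to any HP management interface, but it shows why a set of enforced caps gives a tighter, more predictable envelope than summed nameplate ratings.

```python
# Illustrative sketch with made-up figures: once every device carries an
# enforced power cap, the worst case for the row is just the sum of the caps,
# which is far tighter than summing nameplate ratings.

devices = [
    # (name, nameplate_watts, enforced_cap_watts)
    ("blade-enclosure-1", 6000, 4200),
    ("blade-enclosure-2", 6000, 4200),
    ("storage-array",     3500, 2800),
    ("core-network",      1200, 1000),
]

nameplate_total = sum(nameplate for _, nameplate, _ in devices)
capped_total    = sum(cap for _, _, cap in devices)
row_budget      = 14_000   # assumed power available to this row, watts

print(f"Nameplate worst case: {nameplate_total} W")
print(f"Capped worst case:    {capped_total} W")
print(f"Headroom under caps:  {row_budget - capped_total} W for new equipment")
```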

Mapping infrastructure

So, really, Data Center Smart Grid for the infrastructure is about mapping the infrastructure. It's about visualizing it to make decisions. Then, it's about automating and capping what you’ve got, so you have more predictable results and you're managing it, so that you're not having outages, you're not having problems in your data centers, and you're meeting your SLAs.

Gardner: John, I'm going to grasp for another analogy here. It sounds like, once again, we're up against governance. It's an important concept when it comes to how to properly do IT, but now we're applying it to energy.

Bennett: That's just a reflection of the fact that any organization looking to get the most value out of its IT organization, its infrastructure, and its operations needs to address governance, as much as it needs to address the business services it's providing, the infrastructure it delivers them with, and how it manages things like energy and security in that environment. It's all connected.

Gardner: I wonder if we have any examples of how this has worked in practice. Within HP, itself, I assume that you want to cut your energy bills as much as anyone else does, particularly in a down economy or when a growth pattern hasn’t quite kicked in fully. Are there any examples within HP or some customers or clients that you have worked with?

Oathout: In the HP example, our IT organization has gone from 85 data centers down to six. They've reduced the amount of budget we spend on IT from about 4 percent of our overall P&L down to about 2 percent. So, they've done a very good job of consolidating and migrating the workload to a smaller set of facilities and a smaller set of infrastructure.

They're getting a huge floor saving capacity back, but are also getting a power saving of 66 percent, versus where they were two years ago.



They're now in the process of automating all that, so long term we will have a much more predictable IT workload from an energy perspective. They're implementing the software to control the energy. They're implementing power capping. They're implementing a converged infrastructure, so they have the ability to share resources among applications. HP IT has really driven its cost down through this.

We have another example with the Sisters of Mercy Health System, which did a very similar convergence of infrastructure on a smaller scale. In their data center, they freed up 75 percent of their floor space by doing server consolidation, storage consolidation, and energy management. They now have 25 percent of the footprint they used to have from a server-storage physical standpoint, but they are also only using about 33 percent of the energy they used to use within their environment.

So, they're getting a huge floor saving capacity back, but are also getting a power saving of 66 percent, versus where they were two years ago. By doing this converged infrastructure, by doing consolidation, and then managing and capping the IT systems, they’ve got a much more predictable budget to run their IT infrastructure.

Gardner: I suppose getting started is a tough question, because you could get started in so many different ways, and there is such wide variability in how data centers are constructed, how old they are, and what their characteristics are. The answer probably varies just as widely -- but how do you get started, depending on what your situation is at this particular time?

Efficiency analysis

Bennett: For many customers, if they're struggling to understand where energy is being consumed and how it's being used, we will probably recommend starting with an energy efficiency analysis. That will not only do a thorough evaluation of both the facility and the infrastructure, but provide insight into the kind of savings you can expect from the different types of investment opportunities to reduce energy costs. That’s the general starting point, if you are trying to understand just what’s going on with the energy.

Once you understand what you are doing with energy, then you can dive into looking at a Smart Grid for Data Center solution itself as something to take you even further. Doug, how do you get started with that?

Oathout: Another way to get started, John, is by deploying new IT infrastructure. Our ProLiant servers, our Integrity servers, and our storage products have the instrumentation and the monitoring built in. Deploying those new server or storage environments allows you to get a picture of how much energy they're using, so you can have more predictable power usage going forward.

Customers are using virtualization. Customers are trying to get utilization of the server and storage environment up to a very efficient level. Having the power management and the energy monitoring built into those systems allows them to start laying out how much infrastructure they can support in their data center.

One of the keys for us is to start deploying the new pieces of HP IT equipment, which are fully instrumented and energy efficient. You'll have a snapshot of actual power consumption, and, as you upgrade your IT facilities over a longer period of time, you can get a full snapshot of your infrastructure. You can actually increase the capacity of the data center just by deploying the new products, which are much more efficient than the ones from three or four years ago.
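
As a rough worked example of that point -- every figure here is assumed rather than taken from HP data -- refreshing a fleet of older servers with more efficient, instrumented ones frees a large share of a fixed power envelope for new workloads.

```python
# Assumed figures throughout: show how a server refresh frees capacity inside a
# fixed power envelope, before counting any consolidation from virtualization.

envelope_watts   = 100_000   # usable IT power in the room (assumed)
old_server_watts = 450       # average draw of a legacy server (assumed)
new_server_watts = 280       # average draw of a replacement server (assumed)
server_count     = 200       # 200 x 450 W = 90,000 W, close to the envelope

draw_after_refresh = server_count * new_server_watts    # 56,000 W
headroom = envelope_watts - draw_after_refresh           # 44,000 W freed
extra_servers = headroom // new_server_watts

print(f"Draw after the refresh:   {draw_after_refresh} W")
print(f"Headroom in the envelope: {headroom} W")
print(f"Room for roughly {extra_servers} more new servers in the same room")
```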

There are places in the world, such as the UK or California, where the power you have coming into your facilities is all the power you are ever going to have. So, you really have to manage inside of that type of regulatory constraint.



Bennett: That’s a good example of this integrated roadmap idea behind DCT. I characterize it as modernization, consolidation, and virtualization. Really, it's stepping up the capabilities of their infrastructure to reduce cost, improve efficiencies, improve quality of service, and reduce energy costs.

As Doug highlighted, after that phase of work is done, you've laid the groundwork to take advantage of it from an instrumentation and management point of view. You can augment that with further instrumentation of the racks and the data center resources in order to implement a complete Smart Grid for Data Center solution. It's a stepping stone. It leverages accomplishments done for other purposes to take you further into a good, efficient operation.

Gardner: Based on some of the capacity improvements and savings, it certainly sounds like a no-brainer, but I have to imagine, John, that in the future, it's going to become less of an option and something that’s essentially mandatory.

An 11 percent annual growth in energy cost is not a sustainable trajectory. We have to expect that energy costs will be volatile, but, perhaps, over time more expensive, whether in real terms or when you factor in the added cost of taxation, carbon taxes and caps, and what have you. So, this is really something that has to be done. You might as well start sooner than later.

Bennett: Yes. And regulations and governance from outside agencies are already an issue. There are places in the world, such as the UK or California, where the power you have coming into your facilities is all the power you're ever going to have. So, you really have to manage inside that type of regulatory constraint.

We have voluntary programs -- perhaps the most visible one is the European Data Center Code of Conduct -- and clearly we expect to see more regulation of IT and facilities in general moving forward. Carbon-reduction mandates impacting organizations are going to be external drivers behind doing this. Of course, if you get ahead of the game and do this for business purposes, you'll be well set to manage it when it comes.

Gardner: We've been talking about how to gain control over energy use and perhaps misuse in enterprise data centers. We were talking about how a Smart Grid approach, a comprehensive approach, using the available data to start creating capacity management capabilities, makes a tremendous amount of sense.

I want to thank our guests on this discussion. We've been joined by Doug Oathout, Vice President of Green IT Enterprise Servers and Storage at HP. Thank you, Doug.

Oathout: Thank you, Dana.

Gardner: We've also been joined by John Bennett, Worldwide Director of Data Center Transformation Solutions at HP. Thanks again, John.

Bennett: My pleasure, Dana. Thank you.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect Podcast. Thanks very much for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on implementing energy efficiency using smart grids in enterprise data centers to slash costs and gain added capacity. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

You may also be interested in: