Monday, December 21, 2009

HP's Cloud Assure for Cost Control Takes Elastic Capacity Planning to Next Level

Transcript of a BriefingsDirect podcast on the need to right-size and fine-tune applications for maximum benefits of cloud computing.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Download the transcript. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the economic benefits of cloud computing -- of how to use cloud-computing models and methods to control IT cost by better supporting application workloads.

Traditional capacity planning is not enough in cloud-computing environments. Elasticity planning is what's needed. It's a natural evolution of capacity planning, adapted to the cloud.

We'll look at how to best right-size applications, while matching service-delivery resources and demands intelligently, repeatedly, and dynamically. The movement to a pay-per-use model also goes a long way toward matching resources with demand and reducing wasteful application practices.

We'll also examine how quality control for these applications in development reduces the total cost of supporting applications, while allowing for tuning and appropriate management of applications in operational cloud scenarios.

To unpack how Cloud Assure services can take the mystique out of cloud computing economics and to lay the foundation for cost control through proper cloud methods, we're joined by Neil Ashizawa, manager of HP's Software-as-a-Service (SaaS) Products and Cloud Solutions. Welcome to BriefingsDirect, Neil.

Neil Ashizawa: Thanks very much, Dana.

Gardner: As we've been looking at cloud computing over the past several years, there is a long transition taking place from traditional IT and architectural methods to this notion of cloud -- be it a private cloud, a third-party location, or some combination of the two.

Traditional capacity planning therefore needs to be refactored and reexamined. Tell me, if you could, Neil, why capacity planning, as people currently understand it, isn’t going to work in a cloud environment?

Ashizawa: Old-fashioned capacity planning would focus on the peak usage of the application, and it had to, because when you were deploying applications in house, you had to take into consideration that peak usage case. At the end of the day, you had to be provisioned correctly with respect to compute power. Oftentimes, with long procurement cycles, you'd have to plan for that.

In the cloud, because you have this idea of elasticity, where you can scale up your compute resources when you need them, and scale them back down, obviously that adds another dimension to old-school capacity planning.

Elasticity planning

The new way to look at it within the cloud is elasticity planning. You have to factor in not only your peak usage case, but your moderate and low usage cases as well. At the end of the day, if you are going to get the biggest benefit of the cloud, you need to understand how you're going to be provisioned during the various demands of your application.

Gardner: So, this isn’t just a matter of spinning up an application and making sure that it could reach a peak load of some sort. We have a new kind of a problem, which is how to be efficient across any number of different load requirements?

Ashizawa: That’s exactly right. If you were to take, for instance, the old-school capacity-planning ideology to the cloud, what you would do is provision for your peak use case. You would scale up your elasticity in the cloud and just keep it there. If you do it that way, then you're negating one of the big benefits of the cloud. That's this idea of elasticity and paying for only what you need at that moment.

If I'm at a slow period of my application's usage, then I don't want to be provisioned at my peak-usage level. One of the main reasons people consider sourcing to the cloud is this elastic capability to spin up compute resources when usage is high and scale them back down when usage is low. You don't want to negate that benefit of the cloud by keeping your resource footprint at its highest level.
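
To make that arithmetic concrete, here is a minimal sketch, in Python, comparing old-school static peak provisioning with elastic provisioning over a single day. The load levels, instance counts, and hourly rate are all hypothetical figures chosen for illustration, not numbers from HP or any cloud provider.

```python
# Hypothetical one-day demand profile: hours spent at each load level
# and the instances needed to serve that level (illustrative values).
HOURLY_RATE = 0.50  # assumed cost per instance-hour

demand_profile = [
    ("low", 10, 2),        # 10 hours needing 2 instances
    ("moderate", 12, 6),   # 12 hours needing 6 instances
    ("peak", 2, 20),       # 2 hours needing 20 instances
]

peak_instances = max(n for _, _, n in demand_profile)
total_hours = sum(h for _, h, _ in demand_profile)

# Old-school capacity planning: provision for peak and leave it there.
static_cost = peak_instances * total_hours * HOURLY_RATE

# Elasticity planning: pay only for what each load level needs.
elastic_cost = sum(h * n for _, h, n in demand_profile) * HOURLY_RATE

print(f"Static peak provisioning: ${static_cost:.2f}/day")   # $240.00
print(f"Elastic provisioning:     ${elastic_cost:.2f}/day")  # $66.00
```

Under these assumed numbers, parking the footprint at peak costs nearly four times as much as scaling with demand, which is exactly the benefit that static provisioning negates.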

Gardner: I suppose also the holy grail of this cloud-computing vision that we've all been working on lately is the idea of being able to spin up those required instances of an application, not necessarily in your private cloud, but in any number of third-party clouds, when the requirements dictate that.

Ashizawa: That’s correct.

Gardner: Now, we call that hybrid computing. Is what you are working on now something that’s ready for hybrid or are you mostly focused on private-cloud implementation at this point?

Ashizawa: What we're bringing to market works in all three cases. Whether you're running a private internal cloud, doing a hybrid model between private and public, or sourcing completely to a public cloud, it will work in all three situations.

Gardner: HP announced, back in the spring of 2009, a Cloud Assure package that focused on things like security, availability, and performance. I suppose that now, because of the economy and the need for people to reduce cost, look at the big picture of their architectures, workloads, and resources, and think about energy and carbon footprints, we've taken this a step further.

Perhaps you could explain the December 2009 announcement that HP has for the next generation or next movement in this Cloud Assure solution set.

Making the road smoother

Ashizawa: The idea behind Cloud Assure, in general, is that we want to assist enterprises in their migration to the cloud and we want to make the road smoother for them.

Just as you said, when we first launched Cloud Assure earlier this year, we focused on the top three inhibitors, which were security of applications in the cloud, performance of applications in the cloud, and availability of applications in the cloud. We wanted to provide assurance to enterprises that their applications will be secure, they will perform, and they will be available when they are running in the cloud.

The new enhancement that we're announcing now is assurance for cost control in the cloud. Oftentimes enterprises do make that step to the cloud, and a big reason is that they want to reap the benefits of the cost promise of the cloud, which is to lower cost. The thing here, though, is that you might fall into a situation where you negate that benefit.

If you deploy an application in the cloud and you find that it’s underperforming, the natural reaction is to spin up more compute resources. It’s a very good reaction, because one of the benefits of the cloud is this ability to spin up or spin down resources very fast. So no more procurement cycles, just do it and in minutes you have more compute resources.

The situation you may find yourself in, though, is that you've spun up more resources to try to improve performance, but performance doesn't improve. I'll give you a couple of examples.

If your application is experiencing performance problems because of inefficient Java methods, for example, or slow SQL statements, then more compute resources aren't going to make your application run faster. But, because the cloud allows you to do so very easily, your natural instinct may be to spin up more compute resources to make your application run faster.

When you do that, you find yourself in a situation where your application is no longer right-sized in the cloud, because you have over-provisioned your compute resources. You're paying for more compute resources, and you're not getting any return on your investment. When you start paying for more resources without a return on your investment, you start to disrupt the whole cost benefit of the cloud.
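
As a hypothetical illustration of why scaling out cannot fix a code-level bottleneck, consider a page whose latency is dominated by an N+1 query pattern: one query for a list, then one more per row. Adding application instances spreads traffic across more machines, but each individual request still waits on every database round trip. The latency figures below are invented.

```python
# Hypothetical N+1 query pattern: per-request latency is unchanged
# no matter how many application instances share the traffic.
DB_ROUND_TRIP_MS = 5   # assumed latency of one SQL round trip
ORDERS_PER_PAGE = 200  # assumed rows fetched one query at a time

def page_latency_ms(instances: int) -> float:
    # One query for the order list, plus one per order. The instance
    # count is deliberately unused: more instances raise total
    # throughput, but this per-request cost never shrinks.
    return (1 + ORDERS_PER_PAGE) * DB_ROUND_TRIP_MS

for n in (1, 4, 16):
    print(f"{n:>2} instances -> {page_latency_ms(n):.0f} ms per page view")
# Every line prints 1005 ms: the fix is tuning the query, not paying
# for more compute.
```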

Gardner: I think we need to have more insight into the nature of the application, rather than simply throwing additional instances of the application at the problem. Is that it, at a very simple level?

Ashizawa: That’s it at a very simple level. Just to make it even simpler, applications need to be tuned so that they are right-sized. Once they are tuned and right-sized, then, when you spin up resources, you know you're getting return on your investment, and it’s the right thing to do.

Gardner: Can we do this tuning with existing applications -- you mentioned Java apps, for example -- or is this something for greenfield applications that we are creating newly for these cloud scenarios?

Java and .NET

Ashizawa: Our enhancement to Cloud Assure, which is Cloud Assure for cost control, focuses more on Java and .NET applications.

Gardner: And those would be existing applications or newer ones?

Ashizawa: Either. Whether you have existing applications that you are migrating to the cloud, or new applications that you are deploying in the cloud, Cloud Assure for cost control will work in both instances.

Gardner: Is this new set of offerings software, services, or both? Maybe you could describe exactly what it is that you are coming to market with.

Ashizawa: The Cloud Assure for cost control solution comprises both HP software and services provided by HP SaaS. Three products make up the software side of the overall solution.

The first one is our industry-leading Performance Center software, which allows you to drive load in an elastic manner. You can scale the load up to very high demand and back down to very low demand, and this is where you get your elasticity-planning framework.

The second software component is HP SiteScope, which allows you to monitor the resource consumption of your application in the cloud. That way, you understand when compute resources are spiking or when you have capacity to drive even more load.

The third software portion is HP Diagnostics, which allows you to measure the performance of your code. You can measure how your methods are performing, how your SQL statements are performing, and if you have memory leakage.

When you have this visibility -- end-user measurements at various load levels with Performance Center, resource consumption with SiteScope, and code-level performance with HP Diagnostics -- and you integrate it all into one console, you allow yourself to do true elasticity planning. You can tune your application and right-size it. Once you've right-sized it, you know that when you scale up your resources you're getting a return on your investment.

All of this is backed by services that HP SaaS provides. We can perform load testing. We can set up the monitoring. We can do the code level performance diagnostics, integrate that all into one console, and help customers right-size the applications in the cloud.
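
As a rough sketch of the triage that such an integrated view enables -- load levels as Performance Center would drive them, resource consumption as SiteScope would report it, and code hotspots as Diagnostics would surface them -- consider the following logic. All numbers and thresholds here are invented for illustration; this is not HP's implementation.

```python
# Each sample pairs a load level with the kind of data the three
# tools provide: response time, CPU use, and the slowest code hotspot.
samples = [
    # (virtual_users, response_ms, cpu_pct, hotspot_ms)
    (500,  300, 35, 40),
    (2000, 900, 45, 620),  # slow despite idle CPU: a code problem
    (5000, 950, 92, 60),   # slow with saturated CPU: add capacity
]

for users, response_ms, cpu_pct, hotspot_ms in samples:
    if response_ms <= 800:
        advice = "right-sized at this load level"
    elif cpu_pct > 85:
        advice = "scale up: resources are saturated"
    elif hotspot_ms > response_ms / 2:
        advice = "tune the code: scaling out buys nothing"
    else:
        advice = "investigate further"
    print(f"{users:>5} users: {advice}")
```

The value of combining the three data sources shows in the middle row: without the code-level view, an idle CPU and a slow response would look like a capacity problem and invite exactly the wasteful spin-up described above.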

Gardner: That sounds interesting, and, of course, harkens back to the days of distributed computing. We're just adding another level of complexity, that is to say, a sourcing continuum of some sort that needs to be managed as well. It seems to me that you need to start thinking about managing that complexity fairly early in this movement to cloud.

Ashizawa: Definitely. If you're thinking about sourcing to the cloud and adopting it, from a strategic standpoint it would do you good to do your elasticity planning before you go into production and go live.

Tuning the application

The nice thing about Cloud Assure for cost control is that, if you run into performance issues after you've gone live, you can still use the service. You can come in, and we can help you right-size your application and tune it. Then, you can start getting the global scale you want at the right cost.

Gardner: One of the other interesting aspects of cloud is that it affects both design time and runtime. Where does something like the Cloud Assure for cost control kick in? Is it something that developers should be doing? Is it something you would do before you go into production, or if you are moving from traditional production into cloud production, or maybe all the above?

Ashizawa: All of the above. HP definitely recommends our best practice, which is to do all your elasticity planning before you go into production, whether it’s a net new application that you are rolling out in the cloud or a legacy application that you are transferring to the cloud.

Given the elastic nature of the cloud, we recommend that you get out ahead of it, do your proper elasticity planning, tune your system, and right-size it. Then, you'll get the most optimized cost and predictable cost, so that you can budget for it.

Gardner: It also strikes me, Neil, that we're looking at producing a very interesting and efficient feedback loop here. When we go into cloud instances, where we're firing up dynamic instances to support application workloads, we can use something like Cloud Assure to identify any shortcomings in the application.

We can take that back and use it as we refresh that application, as we do more code work, or even as we go to a new version of some sort. Are we creating a virtual feedback loop by going into something like Cloud Assure?

Ashizawa: I can definitely see that being the case. I'm sure there are many situations where we might find something inefficient at the code level or in the database SQL-statement layer. When you go to the cloud, do your elasticity planning, and right-size, we can point out problems that never surfaced in an on-premise deployment and were never addressed earlier, and then you can create this feedback loop.

One of the side benefits to right-sizing applications and controlling cost, obviously, is mitigating risk. Once you have planned elasticity correctly and right-sized correctly, you can deploy with a lot more confidence that your application will scale to handle global-class demand and support your business.

Gardner: Very interesting. Because this is focused on economics and cost control, do we have any examples of where this has been put into practice, where we can examine the types of returns? If you do this properly, if you have elasticity controls, if you are doing planning, and you get across this life cycle, and perhaps even some feedback loops, what sort of efficiencies are we talking about? What sort of cost reductions are possible?

Ashizawa: We've been working with one of our SaaS customers, who is doing more of a private-cloud type implementation. What makes this what I consider a private cloud is that they are testing various resource footprints, depending on the load level.

They're benchmarking their application at various resource footprints. For moderate levels, they have a certain footprint in mind, and for their peak usage, during the holiday season, they have an expanded footprint in mind. The idea is that they want to make sure they're provisioned correctly, so that they're optimizing their costs, even in their private cloud.

Moderate and peak usage

We have used our elastic testing framework, driven by Performance Center, to test both moderate levels and peak usage. When I say peak usage, I mean thousands and thousands of virtual users. What we allow them to do is true elasticity planning.

They've been able to accomplish a couple of things. One, they understand which benchmarks and resource footprints they should be using in their private cloud. They know that they're provisioned properly at various load levels and that, because of that, they're getting all of the cost benefits of their private cloud. At the end of the day, they're mitigating their business risk by ensuring that their application is going to scale globally to support their holiday season.

Gardner: And, they're going to be able to scale, if they use cloud computing, without necessarily having to roll out more servers with a forklift. They can find the fabric either internally or with partners, which, of course, holds a great deal of interest on the bean-counter side of things.

Ashizawa: Exactly. Now, we're starting to relay this message and target customers that have deployed applications in the public cloud, because we feel that the public cloud is where you may fall into that trap of spinning up more resources when performance problems occur, where you might not get the return on your investment.

So as more enterprises migrate to the cloud and start sourcing there, we feel that this elasticity planning with Cloud Assure for cost control is the right way to go.

Gardner: Also, if we're billing people either internally or through these third parties on a per-use basis, we probably want to encourage them to have a robust application, because spinning up more instances of that application is going to cost us directly. So, there is also a built-in incentive in the pay-per-use model toward these more tuned, optimized, and planned-for cloud applications.

Ashizawa: You said it better than I could have ever said it. You used the term pay-per-use, and it’s all about the utility-based pricing that the cloud offers. That’s exactly why this is so important, because whenever it’s utility based or pay-per-use, then that introduces this whole notion of variable cost. It’s obviously going to be variable, because what you are using is going to differ between different workloads.

So, you want to get a grasp of the variable-cost nature of the cloud, and you want to make this variable cost very predictable. Once it's predictable, there will be no surprises. You can budget for it, and you can also ensure that you're getting the right performance at the right price.

Gardner: Neil, is this something that’s going to be generally available in some future time, or is this available right now at the end of 2009?

Ashizawa: It is available right now.

Gardner: If people were interested in pursuing this concept of elasticity planning, of pursuing Cloud Assure for cost benefits, is this something that you can steer them to, even if they are not quite ready to jump into the cloud?

Ashizawa: Yes. If you would like more information on Cloud Assure for cost control, there is a URL that you can go to. Not only can you get more information on the overall solution, but you can also speak to someone who can help answer any questions you may have.

Gardner: Let's look to the future a bit before we close up. We've looked at cloud-assurance issues around security, performance, and availability. Now, we're looking at cost control and elasticity planning -- getting the best bang for the buck not just by converting an old app, sort of repaving an old cow path, if you will, but by thinking about this differently and architecturally, in the cloud context.

What comes next? Is there another shoe to fall in terms of how people can expect to have HP guide them into this cloud vision?

Ashizawa: It’s a great question. Our whole idea here at HP and HP Software-as-a-Service is that we're trying to pave the way to the cloud and make it a smoother ride for enterprises that are trying to go to the cloud.

So, we're always tackling the main inhibitors and the main obstacles that make it more difficult to adopt the cloud. And, yes, where once we were tackling security, performance, and availability, we definitely saw that this idea for cost control was needed. We'll continue to go out there and do research, speak to customers, understand what their other challenges are, and build solutions to address all of those obstacles and challenges.

Gardner: Great. We've been talking about moving from traditional capacity planning towards elasticity planning, and a series of announcements from HP around quality and cost controls for cloud assurance and moving to cloud models.

To better understand these benefits, we've been talking with Neil Ashizawa, manager of HP's SaaS Products and Cloud Solutions. Thanks so much, Neil.

Ashizawa: Thank you very much.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Download the transcript. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on the need to right-size and fine-tune applications for maximum benefits of cloud computing. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Friday, December 18, 2009

Careful Advance Planning Averts Costly Snafus in Data Center Migration Projects

Transcript of a sponsored BriefingsDirect podcast on proper planning for data-center transformation and migration.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the crucial migration phase when moving or modernizing data centers. So much planning and expensive effort goes into building new data centers or into conducting major improvements to existing ones, but too often short shrift is given to the actual "throwing of the switch" -- the moving and migrating of existing applications and data.

But, as new data center transformations pick up -- due to the financial pressures to boost overall IT efficiency -- so too should the early-and-often planning and thoughtful execution of the migration itself get proper attention. Therefore, our podcast at hand examines the best practices, risk mitigation tools, and requirements for conducting data center migrations properly, in ways that ensure successful overall data center improvement.

To help pave the way to making data center migrations come off nearly without a hitch, we're joined by three thought leaders from Hewlett-Packard (HP). Please join me in welcoming Peter Gilis, data center transformation architect for HP Technology Services. Welcome to the show, Peter.

Peter Gilis: Thank you. Hello, everyone.

Gardner: We're also joined by John Bennett, worldwide director, Data Center Transformation Solutions at HP. Welcome back, John.

John Bennett: Thank you very much, Dana. It's a delight to be here.

Gardner: Arnie McKinnis, worldwide product marketing manager for Data Center Modernization at HP Enterprise Services. Thanks for joining us, Arnie.

Arnie McKinnis: Thank you for including me, Dana. I appreciate it.

Gardner: John, tell me why migration, the process around the actual throwing of the switch -- and the planning that leads up to that -- are so essential nowadays?

New data centers

Bennett: Let's start by taking a look at why this has arisen as an issue; the reasons are almost self-evident. We see a great deal of activity in the marketplace right now of people designing and building new data centers. Of course, everyone who has successfully built a new data center has a wonderful new showcase site -- and has to move into it.

The reasons for this growth, the reasons for moving to other data centers, are fueled by a lot of different activities. Oftentimes, multiple factors come into play at the same organization.

In many cases it's related to growth. The organization and the business have been growing. The current facilities were inadequate for purpose, because of space or energy capacity reasons or because they were built 30 years ago, and so the organization decides that it has to either build a new data center or perhaps make use of a hosted data center. As a result, they are going to have to move into it.

It might be that they're engaged in a data-center strategy project as part of a data-center transformation, where they might have had too many data centers -- that was the case at Hewlett-Packard -- and consciously decided that they wanted to have fewer data centers built for the purposes of the organization. Once that strategy is put into place and executed, then, of course, they have to move into it.

We see in many cases that customers are looking at new data centers -- either ones they've built or are hosted and managed by others -- because of green strategy and green initiatives. They see that as a more cost-effective way for them to meet their green initiatives than to build their own data centers.

There are, of course, cost reductions. In many cases, people are investing in these types of activities on the premise that they will save substantial CAPEX and OPEX cost over time by having invested in new data centers or in data center moves.

Whether they're moving to a data center they own, moving to a data center owned and managed by someone else, or outsourcing their data center to a vendor like HP, in all cases you have to physically move the assets of the data center from one location to another.

The impact of doing that well is awfully high. If you don't do it well, you're going to impact the services provided by IT to the business. You're very likely, if you don't do it well, to impact your service level agreements (SLAs). And, should you have something really terrible happen, you may very well put your own job at risk.

So, the objective here is not only to take advantage of the new facilities or the new hosted site, but also to do so in a way that ensures the right continuity of business services. That ensures that service levels continue to be met, so that the business, the government, or the organization continues to operate without disruption, while this takes place. You might think of it, as our colleagues in Enterprise Services have put it, as changing the engine in the aircraft while it's flying.

Gardner: Peter, tell me, when is the right time to begin planning for this migration?

Migration is the last phase

Gilis: The planning starts when you do a data-center transformation, and migration is actually the last phase of that transformation. The first thing that you do is a discovery, making sure that you know all about the current environment -- not only the servers, the storage, and the network, but the applications and how they interact. Based on that, you decide how the new data center should look.

John, here is something where I do not completely agree with you. Most of the migrations today are not migrations of the servers, the assets, but actually migrations of the data. You start building a next-generation data center, most of the time with completely new assets that better fit what your company wants to achieve. Moving the existing assets is not always possible when your current environment is something like four or five years old, or sometimes even much older than that.

Gardner: Peter, how do you actually pull this off? How do you get that engine changed on the plane while keeping it flying? Obviously, most companies can't afford to go down for a week while this takes place.

Gilis: You should look at it in different ways. If you have a disaster strategy, then you have multiple days to recover. Actually, if you plan for disaster recovery properly, then it will be easy to migrate.

On the other side, if you build your new engine, your new data center, and you have all the new equipment inside, the only thing that you need to do is migrate the data. There are a lot of techniques to migrate data online, or at least synchronize current data in the current data centers with the new data center.

So, the moment you switch off the computer in the first data center, you can immediately switch it on in the new data center. It may not be changing the engines online, but at least near-online.
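
A minimal simulation of that "synchronize online, then cut over" pattern is sketched below. The replication step is simulated; in practice it would be storage replication, database log shipping, or a file-level tool. The delta sizes and thresholds are invented.

```python
# Sketch: replicate while the application stays online, so that only
# a tiny final delta needs a downtime window. All figures illustrative.
import random

remaining_delta_gb = 500.0  # data not yet copied to the new site

def sync_pass() -> float:
    """Simulate one replication pass; return the new remaining delta."""
    global remaining_delta_gb
    copied = remaining_delta_gb * 0.9      # most of the delta moves
    new_writes = random.uniform(0.5, 2.0)  # the app keeps writing
    remaining_delta_gb = remaining_delta_gb - copied + new_writes
    return remaining_delta_gb

# Keep replicating while the application stays up; each pass shrinks
# the gap between the two sites.
while sync_pass() > 5.0:
    pass

# Only this last sliver needs downtime: freeze writes, copy the final
# changes, switch off the old data center, switch on the new one.
print(f"Cutover window covers only {remaining_delta_gb:.1f} GB of changes")
```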

Gardner: Arnie, tell me about some past disasters that have given us insight into how this should go. Are there any stories that come to mind about how not to do this?

McKinnis: There are all sorts of stories around not doing it properly. In most cases, when you start decomposing what went wrong during a project, what you usually find is that you did not do a good enough job of assessing the current situation, whether that was the assessment of a hardware platform, a server platform, or a facility.

It may even be as simple as looking at a changeover process that is currently in place and seeing how it affects what is going to be the new changeover process. Potentially, there is some confusion. But it usually all goes back to not doing a proper assessment of the current mode of operations, or of that operating platform as it exists today.

Gardner: Now, Arnie, this must provide you a unique opportunity -- as organizations move from one data center to another -- to take a hard look at what they have. I'm going to assume that not everything is going to go to the new data center.

Perhaps you're going to take an opportunity to sunset some apps, replace some with commodity services, or outsource others. So, this isn't just a one-directional migration. We're probably talking about a multi-headed dragon going in multiple directions. Is that the case?

Thinking it through

McKinnis: It's always the case. That's why, from Enterprise Services' standpoint, we look at it from the standpoint of who is going to manage it, if the client hasn't completely thought that out. In other words, they potentially haven't thought out the full future mode of what they want their operating environment to look like.

We're not necessarily talking about starting from a complete greenfield, but people have come to us in the past and said, "We want to outsource our data centers." Our next logical question is, "What do you mean by that?"

So, you start the dialog that goes down that path. And, on that path you may find out that what they really want to do is outsource to you, maybe not only their mission-critical applications, but also the backup and the disaster recovery of those applications.

When they first thought about it, maybe they didn't think through all of that. From an outsourcing perspective, companies don't always do 100 percent outsourcing of that data-center environment or that shared computing environment. It may be part of it. Part of it they keep in-house. Part of it they host with another service provider.

What becomes important is how to manage all the multiple moving parts and the multiple service providers that are going to be involved in that future mode of operation. It's accessing what we currently have, but it's also designing what that future mode needs to look like.

Gardner: Back to you, Peter. You mentioned the importance of data, and I imagine that when we go from traditional storage to new modes of storage -- storage area networks (SANs), for example -- we've got a lot of configuration and connection issues with how storage and data are used in conjunction with applications and processes. How do you manage those sorts of connection and configuration issues?

Gilis: Well, there's not that much difference between local storage, SAN storage, and network-attached storage (NAS) in what you design. What you design or architect today is that basically every server, every single machine, virtual or physical, gets connected to shared storage, and that shared storage should be replicated to a disaster-recovery site.

That's basically the way you transfer the data from the current data centers to the new data centers, where you make sure that you build in disaster recovery capabilities from the moment you do the architecture of the new data center.

Gardner: Again, this must come back to a function of proper planning to do that well?

Know where you're going

Gilis: That's correct. If you don't do the planning, if you don't know where you're starting from and where you're going to, then it's like being on the ocean. Going in any direction will lead you anywhere, but it's probably not giving you the path to where you want to go. If you don't know where to go to, then don't start the journey.

Gardner: John Bennett, another tricky issue here is that when you transition from one organizational facility to another, or one sourcing set to another larger set, we're also dealing here with ownership trust. I guess that boils down to politics -- who controls what. We're not just managing technology, but we're managing people. How do we get a handle on that to make that move smoothly?

Bennett: Politics, in this case, is just the interaction and the interrelationship between the organizations and the enterprise. They're a fact of life. Of course, they would have already come into play, because getting approval to execute a project of this nature would almost of necessity involve senior executive reviews, if not board of director approval, especially if you're building your own data center.

But, the elements of trust come in, whether you're building a new data center or outsourcing, because people want to know that, after the event takes place, things will be better. "Better" can be defined as: a lot cheaper, better quality of service, and better meeting the needs of the organization.

This has to be addressed in the same way any other substantial effort is addressed -- in the personal relationships of the CIO and his or her senior staff with the other executives in the organization, and with a business case. You need measurement before and afterward in order to demonstrate success. Of course, good, if not flawless, execution of the data center strategy and transformation are in play here.

The ownership issue may be affected in other ways. In many organizations it's not unusual for individual business units to have ownership of individual assets in the data center. If modernization is at play in the data center strategy, there may be some hand-holding necessary to work with the business units in making that happen. This happens whether you are doing modernization and virtualization in the context of existing data centers or in a migration. By the way, it's not different.

Be aware of where people view their ownership rights and make sure you are working hand-in-hand with them instead of stepping over them. It's not rocket science, but it can be very painful sometimes.

Gardner: Again, it makes sense to be doing that early rather than later in the process.

Bennett: Oh, you have to do a lot of this before you even get approval to execute the project. By the time you get to the migration, if you don't have that in hand, people have to pray for it to go flawlessly.

Gardner: People don't like these sorts of surprises when it comes to their near and dear responsibilities?

Bennett: We can ask both Peter and Arnie to talk to this. Organizational engagement is very much a key part of our planning process in these activities.

Gardner: Arnie, tell us a little bit more about that process. The planning has to be inclusive, as we have discussed. We're talking about physical assets. We're talking about data, applications, organizational issues, people, and process. We haven’t talked about virtualization, but moving from physical to virtualized instances is also there. Give us a bit of a rundown of what HP brings to the table in trying to manage such a complex process.

It's an element of time

McKinnis: First of all, we have to realize that one of the key factors in this whole process is time. A client, at least when they start working with us from an outsourcing perspective, has come to the conclusion that a service provider can probably do it more efficiently and effectively, and at a better price point, than they can internally.

There are all sorts of decisions that go around that from a client perspective to get to that decision. In many cases, if you look at it from a technology standpoint, the point of decision is something around getting to an end of life on a platform or an application. Or, there is a new licensing cycle, either from a support standpoint or an operating system standpoint.

There is usually something that happens from a technology standpoint that says, "Hey look, we've got to make a big decision anyway. Do we want to invest going this way, that we have gone previously, or do we want to try a new direction?"

Once they make that decision, we look at outside providers. It can take anywhere from 12 to 18 months to go through the full cycle of working through all the proposals and all the due diligence to build that trust between the service provider and the client. Then, you get to the point, where you can actually make the decision of, "Yes, this is what we are going to do. This is the contract we are going to put in place." At that point, we start all the plans to get it done.

As you can see, it's not a trivial deal. We've seen some of these deals get half way through the process, and then the client decides, perhaps through personnel changes on the client side, or the service providers may decide that this isn't going quite the way that they feel it can be most successful. So, there are times when deals just fall apart, sometimes in the middle, and they never even get to the contracting phase.

There are lots of moving parts, and these things are usually very large. That's why, even though outsourcing contracts have changed, they are still large, still multi-year, and still have lots of moving parts.

When we look at the data center world, it's just one of those things where all of us take steps to make sure that we're looking not only at the best case but at the real case. We're always building toward what can happen and trying not to get too far ahead of ourselves.

This is a little bit different from when you're just doing consulting and pure transformation and building toward that future environment, where you can be a little more greenfield in the way you do things.

Gardner: I suppose the tendency is to get caught up in planning all about where you're ending up, your destination, and not focusing as much as you should on that all-important interim journey of getting there?

Keeping it together

McKinnis: From an outsourcing perspective, our organization takes it mostly from that state, probably more so than you could do in that future mode. For us, it's all about making sure that things do not fall apart while we are moving you forward. There are a lot of dual systems that get put in place. There are a lot of things that have to be kept running, while we are actually building that next environment.

Gilis: But, Arnie, that's exactly the same case when you don't do outsourcing. When you work with your client, and that's what it all comes down to, it should be a real partnership. If you don't work together, you will never do a good migration, whether it's outsourcing or non-outsourcing. At the end, the new data center must receive all of the assets or all of the data -- and it must work.

Most of the time, the people that know best how it used to work are the customers. If you don't work with and don't partner directly with the customer, then migration will be very, very difficult. Then, you'll hit the difficult parts that people know will fail, and if they don't inform you, you will have to solve the problem.

Gardner: Peter, as an architect, you must see that these customers you're dealing with are not all equal. There are going to be some in a position to do this better than others. I wonder whether there's something that they've done or put in place. Is it governance, change management, portfolio management, or configuration databases with a common repository of record? Are there certain things that help this naturally?

Gilis: As you said, there are different customers. You have small migrations and huge migrations. The best thing is to cut things into small projects that you can handle easily. As we say, "Cut the elephant into pieces, because otherwise you can't swallow it."

Gardner: But, even the elephant itself might differ. How about you, John Bennett? Do you see some issues where there is some tendency toward some customers to have adopted certain practices, maybe ITIL, maybe service-oriented architecture (SOA), that make migration a bit smoother?

Bennett: There are many ways to approach this. Cutting up the elephant so you can eat it is an apt way of advising customers to build out their own roadmap of projects and activities and, in the end, implement their own transformation.

In an ideal data center project, because it's such a significant effort, it's always very useful to take into consideration other modernization and technology initiatives, before and during, in order to make the migration effective.

For example, if you're going to do modernization of the infrastructure, have the new infrastructure housed in the new data center, and now you are just migrating data and applications instead of physical devices, then you have much better odds of it happening successfully.

Cleaning up internally

If you can do work with your applications or your business processes before you initiate the move, what you are doing is cleaning up the operations internally. Along the way, it's a discovery process, which Peter articulated as the very first step in the migration project. But, you're making the discovery process easier, because there are other activities you have to do.

Gardner: A lot of attention is being given to cloud computing at almost abstract level, but not too far-fetched. Taking advantage of cloud computing means being able to migrate a data center; large chunks of that elephant moving around. Is this something people are going to be doing more often?

Bennett: It's certainly a possibility. Adopting a cloud strategy for specific business services would let you take advantage of that, but in many of these environments today cloud isn't a practical solution yet for the broad diversity of business services they're providing.

We see that for many customers it's the move from dedicated islands of infrastructure, to a shared infrastructure model, a converged infrastructure, or an adaptive infrastructure. Those are significant steps forward with a great deal of value for them, even without getting all the way to cloud, but cloud is definitely on the horizon.

Gardner: Can we safely say, though, that we're seeing more frequent migrations and perhaps larger migrations?

McKinnis: In general, what we've seen is the hockey stick that's getting ready to happen with shared compute -- I'll just throw that out there as shorthand for what this stuff in the data centers is, a shared-compute environment. What we're moving toward, if done properly, is a breaking off, especially in the enterprise, of the security and compliance issues around data.

There is this breaking off of what can be done, what should be done at the desktop or user level, what should be kept locally, and then what should be kept at a shared compute or a shared-services level.

Gardner: Perhaps we're moving toward an inflection point, where we're going to see a dramatic uptake in the need for doing migration activities?

McKinnis: I think we will. Cloud has put things back in people's heads around what can be put out there in that shared environment. I don't know that we've quite gotten through the process of whether it should be at a service provider location, my location, or within a very secure location at an outsourced environment.

Where to hold data

I don't think they've gotten to that at the enterprise level. But, they're not quite so convinced about giving users the ability to retain data, do that processing, and have that application held right there within the confines of the laptop, or whatever it happens to be that they're interacting with. They're starting to see that it should potentially be held someplace else, so that the risk of that data isn't carried at the local level. Do you understand where I am going with that?

Gardner: I do. I think we are seeing greater responsibility now being driven toward the data center, which is going to then force the re-architecting and the capacity issues, which will ultimately then require choices about sourcing, which will then of course require a variety of different migration activities.

McKinnis: Right. It's not just about a new server or a new application. Sometimes it's as much about, "How do I stay within compliance? Am I a public company, or am I a large government entity? How do I stay within my compliance and my regulations? How do I hold data? How do I have to process it?"

Even in the world of global service delivery, there are a lot of rules and regulations around where data can be stored. In the leveraged environment that a service provider provides, storage is potentially somewhere in Eastern Europe, India, or South America. There are plenty of compliance issues around where data can actually be held under certain governmental regulations, depending on where you are -- in country or out of country.

Gardner: Let's move to Peter. Tell me a bit about some examples. Moving back to the migration itself, can you give us a sense of how this is done well, and if there are some metrics of success, when it is done well?

Gilis: As we already said in the beginning, it all depends on planning. Planning is key -- not only planning the migration itself, but also having a "plan B" for what happens if it doesn't work, because then you have to go back to the old environment as soon as possible and within the time frame given.

First, you need to ask, "Is my application suitable for migration?" Sometimes, if you migrate your data centers from place A to place B -- as we've done in EMEA, from the Czech Republic to Austria -- the distance of 350 kilometers adds extra latency. If your programs already have performance problems, as we have found in testing for customers, that little extra latency can just kill your application when you migrate.

One of the things we have done in that case is test it using a network simulator on a real-life machine. We found that the application, and the server, were not suited for migration. If you know this beforehand, you remove a risk, rather than just migrating the application as it is.
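
A back-of-the-envelope check shows why those 350 kilometers matter for a chatty application. Light in fiber covers roughly 200 kilometers per millisecond, so the extra distance adds about 3.5 ms per round trip; the per-transaction round-trip counts below are assumptions for illustration.

```python
# Rough latency arithmetic for the 350 km migration example.
FIBER_KM_PER_MS = 200.0  # approximate one-way speed of light in fiber

distance_km = 350
rtt_ms = 2 * distance_km / FIBER_KM_PER_MS  # ~3.5 ms per round trip

for round_trips in (10, 200, 1000):  # how "chatty" the application is
    print(f"{round_trips:>4} round trips/transaction: +{round_trips * rtt_ms:,.0f} ms")
# A transaction making 1,000 round trips gains roughly 3.5 seconds,
# which is why latency-sensitive applications must be tested first.
```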

At another customer, I saw that people had divided the whole migration process into multiple streams, but there was a lack of coordination between the streams. If you had a shared application related to more than one stream, the planning of one stream was totally in conflict with the planning of another. The application and the data moved without the other streams being informed, causing huge delays in real life, because the other applications were no longer synchronized in the way they used to be -- assuming they were synchronized before.

So, if you don't plan and work together, you will definitely have failures.

Gardner: You mentioned something interesting about trying to do this on a test basis. I suppose that, for the application-development process, you'd want to have test and dev environments and use some sort of testbed -- something that's up before you go into full production. Perhaps we also want to put some of these servers, data sets, and applications through a test to see if they're migration-ready. Is that an important and essential part of this overall process?

Directly to the site

Gilis: If you can do it, it's excellent, but sometimes we still see in real life that not all customers have a complete test and dev environment, or not even an acceptance environment. Then, the only way to do it is to move the real-life machine directly to the new site.

I've actually seen it. It wasn't really a migration, but an upgrade of an SAP machine. Because of performance problems, the customer needed to migrate to a new, larger server. And, because of the pressure of the business, they didn't have time to move from test and dev, to acceptance, and to production. They started immediately with production.

At two o'clock in the morning, we found that there was a bug in the new version, and we had to roll back the whole migration and the whole upgrade. The middle of the weekend is not the best time for that.

Gardner: John Bennett, we've heard again and again today about how important it is to do this planning, to get it done upfront, and to get that cooperation as early as possible. So the big question for me now is how do you get started?

Bennett: How you get started depends on what your own capabilities and expertise are. If these are projects that you've undertaken before, there's no reason not to implement them in a similar manner. If they are not, it starts with the identification of the business services and the sequencing of how you want them to be moved into the new data center and provisioned over there.

In order to plan at that level of detail, you need to have, as Peter highlighted earlier, a really good understanding of everything you have. You need to fully build out a model of the assets you have, what they are doing, and what they are connected to, in order to figure out the right way to move them. You can do this manually, or you can make use of software like HP's Discovery and Dependency Mapping software.
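
To illustrate why such a dependency model drives the move sequence, here is a minimal sketch using a topological sort over an invented inventory. It shows the idea behind dependency mapping, not the behavior of HP's Discovery and Dependency Mapping software.

```python
# Sketch: derive a safe move order from an asset-dependency model.
# The inventory below is hypothetical.
from graphlib import TopologicalSorter

# asset -> the assets it depends on; an application should not move
# before the things it calls are running in the new data center.
dependencies = {
    "web-frontend":  {"order-service", "auth-service"},
    "order-service": {"orders-db"},
    "auth-service":  {"directory"},
    "orders-db":     set(),
    "directory":     set(),
}

# static_order() surfaces leaf systems first, so each step of the
# migration depends only on assets already moved.
for step, asset in enumerate(TopologicalSorter(dependencies).static_order(), 1):
    print(f"move {step}: {asset}")
```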

If the size of this project is a little daunting to you, then of course the next step is to take advantage of someone like HP. We have Discovery Services, and, of course, we have a full suite of migration services available, with people trained and experienced in doing this to help customers move and migrate data centers, whether it's to their own or to an outsourced data center.

Peter talked about planning this with a disaster in mind to understand what downtime you can plan for. We have successfully undertaken customer data center migration projects, which had minimal or zero operational disruption, by making clever use of short-term leases to ensure that business services continue to run, while they are transitioned to a new data center. So, you can realize that too.

But, I'd also ask both Peter and Arnie here, who are much more experienced in this, to highlight the next level of detail. Just what goes into that effective planning, and how do you get started?

Gardner: I'd also like to hear that, Peter. In the future, I expect that, as always, new technologies will be developed to help on these complex issues. Looking forward, are there some hopeful signs that there is going to be a more automated way to undertake this?

Migration factory

Gilis: If you do a lot of migrations -- and that's actually what most of the service companies like HP are doing -- you know how to do migrations and how to treat the applications being migrated as part of a "migration factory."

We actually built something like a migration factory, where teams are doing the same over and over all the time. So, if we have to move Oracle, we know exactly how to do this. If we have to move SAP, we know exactly how to do this.

That's like building a car in a factory. It's the same thing day in and day out, every day. That's why customers come to service providers. Whether you choose outsourcing or not, you should use a service provider that builds new data centers, transforms data centers, and migrates data centers nearly every day.

Gardner: I'm afraid we're just about out of time and we're going to have to leave it there. I want to thank our guests for an insightful set of discussion points around data center migration.

As we said earlier, major setups and changes with data-center facilities often involve a lot of planning and expense, but sometimes not quite enough planning goes into the migration itself. Here to help us better understand and look towards better solutions around data center migration, we have been joined by Peter Gilis, data center transformation architect for HP Technology Services. Thanks so much, Peter.

Gilis: Thank you.

Gardner: Also John Bennett, worldwide director, Data Center Transformation Solutions at HP. Thanks, John.

Bennett: You're most welcome, Dana.

Gardner: And lastly, Arnie McKinnis, worldwide product marketing manager for Data Center Modernization in HP Enterprise Services. Thanks for your input, Arnie.

McKinnis: Thank you, Dana. I've enjoyed being included here.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Transcript of a sponsored BriefingsDirect podcast on proper planning for data-center transformation and migration. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Thursday, December 17, 2009

Executive Interview: HP's Robin Purohit on How CIOs Can Contain IT Costs While Spurring Innovation Payoffs

Transcript of a BriefingsDirect podcast with HP's Robin Purohit on the challenges that CIOs face in the current economic downturn and how to prepare their businesses for recovery.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast executive interview that focuses on implementing the best methods for higher cost optimization in IT spending. [See HP news from Software Universe on cloud enablement technologies.]

To better define the challenges facing CIOs today, and to delve into what can help them properly react, we are here with Robin Purohit, Vice President and General Manager for HP Software and Solutions. Welcome back to BriefingsDirect, Robin.

Robin Purohit: Wonderful to be here again with you, Dana.

Gardner: Clearly, the cost-containment conundrum of "do more for less" -- that is, while still supporting all of your business requirements -- is going to be with us for quite some time. I wonder, Robin, how are CIOs reacting to this crisis, now that we're more than a full year into it?

Purohit: Well, just about every CIO I've talked to right now is in the middle of planning their next year’s budget. Actually, it's probably better to say preparing for the negotiation for next year’s budget. There are a couple of things.

The good news is that this budget cycle doesn’t look like last year’s. Last year’s was very tough, because the financial collapse really was a surprise to many companies, and it required people to very quickly constrain their capital spend, their OPEX spend, and just turn the taps off pretty quickly.

We saw a lot of CIOs getting surprised toward the end of 2008 and the beginning of 2009, and just having to stop things, even things that they knew were critical to their organization's success and to their business success.

So, the good news is that we're not in that situation anymore, but it's still going to be tough. What we hear from CIO Magazine is that about two-thirds of companies plan to have either flat or reduced IT budgets for next year. A small number are actually trying to increase spend.

Every CIO needs to be extremely prepared to defend their spend on what they are doing and to make sure they have a great operational cost structure, one that compares favorably with the best in their industry.

They also need to be prepared to make a few big bets, because the reality is that the smartest companies out there are using this downturn as an opportunity to make some forward-looking strategic bets. If you don't do that now, the chances are that, two years from now, your company could be in a pretty bad position.

Gardner: Given that budgets are either flat or still declining, and that this might last right through 2010, it means we have to look at capital spending. I imagine a lot of costs are locked in or have already been dealt with. When it comes to capital spending, how are these budgets being managed?

Important things

Purohit: Well, with capital spend, there are a couple of pretty important things to get done. The first is to have an extremely good view of the capital you have and where it is in the capital cycle.

You need to know what can have its life extended, what can be reused, and what has to be refreshed. Then, when you do refresh, there are some great new ways of spending capital on servers, storage, and networking that carry a much lower cost structure, and are much easier to operate, than the systems of three or four years ago.

Quite frankly, we see a lot of organizations still struggling to know what they have, who is using it, what they are using it for, and where it is in the capital life cycle. Having all of that information timely, accurate, and at your fingertips as you enter the planning cycle is extraordinarily important and fundamental.

Gardner: It certainly seems that the capital spending you do decide on should be of a transformational nature. Is that fair?

Purohit: Yes, that's true. I should have said that. Capital, as we all know, is not only hardware, but also software. A lot of our customers are taking a hard look at the software licenses they have to make sure they are being used in the best possible way.

"Today's innovation is tomorrow’s operating cost."



Now, the capital budget that you can secure needs to be used in very strategic ways. We usually advise customers to think in two buckets.

One, when you deploy new capital, always make sure it can be maintained and sustained at the lowest possible cost. The way we phrase this is, "Today's innovation is tomorrow's operating cost."

In the past, we’ve seen mistakes made, where people deployed new capital without really thinking how they were going to drive the long-term cost structure down in operating that new capital. So that's the first thing.

The second is that the company wants to see the CIO use capital to support its most important business initiatives, and usually those are associated with revenue growth: expanding the sales force, launching new business units, funding some competitive program, or building a new e-commerce presence.

New business agenda

It's imperative that the CIO shows as much as possible that they're applying capital to things that clearly align with driving one of those new business agendas that's going to help the company over the next three years.

CIOs whose requests fall clearly into one of those buckets, either dramatically lowering the ongoing cost structure through new technologies or clearly aligning the capital spend with something a line-of-business executive is trying to do over the next two or three years, have the best chance of getting what they think is really necessary.

Gardner: It seems that in order to know whether your spend is transformational, you need to gain that financial transparency, have a better sense of the true cost and true inventory, and move toward the transformational benefits. But, then you also need to be able to measure them, and I think we are all very much return-on-investment (ROI) minded these days. How do we reach that ability to govern and measure once we put things into place?

Purohit: It's a great point. The reality is that the CIO has been a bit of a cobbler's child for some time. CIOs have done a great job putting in systems and applications that support the business, so that a sales executive or a business-unit executive has all of the business-process automation and all of the business information at their fingertips in real time to go out and be competitive and aggressive in the marketplace.

CIOs traditionally have not had that same kind of application. While they can go through a manual and pretty brutal process to collect all this information, they haven't had real-time financial information, not only on what they have or plan to do, but also to track, on an almost weekly basis, their spend versus plan.

I guess all the CFO cares about is whether you are on track against your financial variance and, if you aren't, what you are doing to optimize in real time against the changing realities of a budget that, for most CIOs these days, is adjusting monthly.

This is where we really see an opportunity. We help customers put in place IT financial management solutions, which are not just planning tools -- not just understanding what you have -- but essentially a real-time financial analytic application that is as timely and accurate as the enterprise resource planning (ERP) system or business intelligence (BI) system that supports the company's business processes.
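
A quick, concrete illustration may help here. The sketch below is a minimal, hypothetical Python example of the kind of weekly spend-versus-plan variance tracking Purohit describes; the budget categories, figures, and function names are all invented for illustration and are not any HP product's API.

```python
# Minimal sketch of weekly spend-vs-plan variance tracking.
# All names and figures are hypothetical; a real IT financial
# management system would pull these from one shared data model.
from dataclasses import dataclass

@dataclass
class BudgetLine:
    name: str       # e.g., "labor", "capital", "projects"
    planned: float  # planned spend for the week, in dollars
    actual: float   # actual spend recorded for the week

    @property
    def variance(self) -> float:
        # Positive variance means overspend against plan.
        return self.actual - self.planned

def weekly_report(lines: list[BudgetLine]) -> None:
    for line in lines:
        pct = 100.0 * line.variance / line.planned if line.planned else 0.0
        flag = "OVER" if line.variance > 0 else "ok"
        print(f"{line.name:10s} plan={line.planned:>12,.0f} "
              f"actual={line.actual:>12,.0f} variance={pct:+.1f}% {flag}")

# One hypothetical week of data, purely for illustration.
weekly_report([
    BudgetLine("labor",    planned=1_200_000, actual=1_260_000),
    BudgetLine("capital",  planned=  800_000, actual=  740_000),
    BudgetLine("projects", planned=  450_000, actual=  450_000),
])
```

The point is the discipline, not the code: variance is computed from one data model every week, rather than assembled by hand once a year at budget time.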

Gardner: If we have a ratio in many organizations of roughly 70 percent for maintenance and support to 30 percent for innovation, we're going to need to take from Peter to pay Paul here.

What is it that you can do about that previously "untouchable" portion of the budget? How can we free up capacity in the data center, rather than build a new data center, for example?

It's a cyclical thing

Purohit: The joke I like to tell about the 70:30 ratio is that, unfortunately, we've been talking about that same ratio for about 10 years. So, somebody is not doing something right. But, the reality is that it's a cyclical thing. Today's innovation is tomorrow's maintenance.

It's important to realize that there are cycles where you want to move the needle and there are cycles where you can't. Right now, we are in a cycle where every CIO needs to be moving that 70:30 to 30:70. That's because, first of all, they'll be under cost pressure. I really believe that the leaders of tomorrow in the business world are going to be created during the downturn. That's historically what we’ve seen. McKinsey has some good write-ups about that.

It means that you need to be driving as much innovation as possible and getting to that 70 percent. In terms of how you do that, the most important thing you can do is make sure that the capital you spend, and everything you have in the data center, supports a top business priority.

One thing that won't change is that demand from the business will always outstrip your supply of capital and labor. What you can do is make sure that every person you have, every piece of equipment you have, and every decision you make is in the context of something that supports an immediate business need or a key element of business operations.

When we work with customers, we help them do that assessment. I'll give you one example. A utility company I worked with was able to identify up to $37 million of operational and capital cost savings in the first couple of years just by cutting out work that wasn't critical to the business.

There are lots of opportunities to be disciplined in assessing your organization: how you spend capital, how you use it, and what your people are working on. I wouldn't call it waste, but I would call it better discipline about whether what you're doing is truly business critical or not.

Gardner: I suppose that to have the financial visibility and transparency that allows that triage to take place, and then to move toward this flip of the 70:30 ratio, we have to involve people and process, and not just technology, right?

Purohit: That's right. If you don't get the people and process right, then new technologies, like virtualization or blade systems, are just going to cause more headaches downstream, because those things are fantastic ways of saving capital today. Those are the latest and greatest technologies. Four or five years ago, it was Linux and Windows Server.

It also means there are more things, and more new things, to manage. If you don't have extremely disciplined processes that are automated, and if your whole team isn't working from one playbook on what those processes are, with a collaborative way to work on them that is as automated as possible, your operating costs are just going to increase as you embrace the new technologies that lower your capital costs. You've got to do both at the same time.

Gardner: Now, HP Software Solutions has been describing this as operational advantage, and that certainly sounds like you're taking into consideration all the people and process, as well as the technology. Tell me a little more about what you’ve been doing in the past several months and how this will impact the market in 2010.

Best in class

Purohit: When we talk about operational advantage, we talk first of all about getting close to a best-in-class benchmark of your IT costs as a percentage of your company's revenue.

I say close to best, because you never want to race to the bottom and be the lowest-cost provider if you want to be strategic. But, you'd better be close. Otherwise, your CFO is going to be breathing down your neck with lots of management consultants asking why you're not there.

The way you get there is through a couple of key steps that we have been recommending. First and foremost, you have to standardize and automate as much as you can.

The great news is that there is now really sophisticated technology that we, and many other companies, can apply to this problem. It lets you take a lot of the work that you know has to be done every day, work that involves a lot of people and a lot of manual steps that could be done incorrectly if you're not careful.

Standardize and automate that work to make sure it gets done in a very efficient way, in the cheapest possible way, and in the same way every time. We've seen customers take $10 million of operating cost out in six to nine months just by automating what they know they need to do, repeatably, every time.

The second thing that we really work on with people is getting that financial visibility: getting all of their financial information on labor, projects, capital, and plans in one place, with one data model, so that they have a coherent way to plan and optimize their spend.

Those two things are huge levers. The third thing that we've really started to work on with people is all of these innovation projects, which rely on brand-new techniques like Agile development.

How do you make that labor pool extremely effective using a new technique like Agile development? We've done a ton of work to roll out and automate those best practices, and to capture the advantages of faster innovation with Agile development without creating a bunch of risks as you move faster. Those are three really fundamental elements of what we're doing right now.

Gardner: I suppose that when you're taking on something as complex as this, you need some goal, vision, and direction about what realistic targets look like for these cost-optimization activities. You mentioned IT spend as a percentage of revenue as one gauge. What sort of results can people meaningfully and realistically expect on some of these larger metrics?

Important goal

Purohit: We've seen the best companies actually implement this swap from 70:30 to 30:70. Getting to 30 percent of your spend on operating costs in this cycle, where you need to be investing for the future, is absolutely an achievable and important goal. The second thing is to make sure that you're benchmarking yourself on this cost of IT versus revenue against the most important competitors in your industry.

The reason I phrase it that way is that it's not a general benchmark, and it's not just about the lowest-cost provider in your geography or industry. You want to know how your most important competitor is using technology as an advantage for both cost structure and innovation.

You want to understand that, spend probably something similar, and then hopefully be smarter than they are in how you implement that strategy. Those are two really important things.
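
To put rough numbers on those two goals, here is a small illustrative calculation; every figure below is made up purely to show the arithmetic.

```python
# Illustrative arithmetic only; all figures are hypothetical.
total_it_budget = 100_000_000  # a flat year-over-year budget, in dollars
operating_share_now = 0.70     # today's 70:30 maintenance-to-innovation split
operating_share_goal = 0.30    # the 30:70 target described above

# On a flat budget, flipping the ratio means removing or reallocating
# the difference between today's and tomorrow's operating spend.
must_free_up = total_it_budget * (operating_share_now - operating_share_goal)
print(f"Operating cost to remove or reallocate: ${must_free_up:,.0f}")
# -> $40,000,000 freed for innovation on a flat $100M budget

# Benchmark: IT cost as a percentage of company revenue, compared
# against a key competitor (again, hypothetical numbers).
company_revenue = 5_000_000_000
competitor_it_pct = 1.8  # assumed competitor benchmark
our_it_pct = 100.0 * total_it_budget / company_revenue
print(f"Our IT spend is {our_it_pct:.1f}% of revenue "
      f"vs. a {competitor_it_pct:.1f}% competitor benchmark")
```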

The third thing is that, depending where your IT organizational maturity is, there are opportunities to take out as much as 5 to 10 percent of your operating cost just by being more disciplined.

Say that you're a new CIO coming into an organization and you see a lack of standardization, a lack of centers of excellence, and a lot of growth through mergers and acquisitions. There is a ton of opportunity there to take out operating cost.

We've seen customers generally take out 5 to 10 percent when a new CIO comes on board, rationalizes everything that's being done, and introduces rigorous standardization. That's a quick win, but it's really there for companies that are a little earlier in the maturity cycle of how they run IT.

Gardner: Another way of reducing this percentage of total revenue, I have to imagine from all the interest in cloud computing these days, comes from examining and leveraging, when appropriate, a variety of different sourcing options, both new and old. How does that relate to this cost-optimization equation?

Purohit: That's a great point. The same thing is happening now that happened in 2001, when we had our last major downturn. In 2001, we saw a rise of outsourcing and offshoring, particularly to places like India.

That really helped companies lower the cost structure of their labor dramatically and really assess whether they needed to be doing some of these things in-house. So, that clearly remains an option. In fact, most companies have figured out how to do that already. Everybody has a global organization that moves the right work to the right cost structure.

What's new now with the outsourcing model and the cloud model -- whether you want to call it cloud or software as a service (SaaS) -- is that there's an incredibly rich marketplace of boutique service shops and boutique technology providers that can provide you either knowledge or technology services on demand for a particular part of your IT organization.

That could be a particular application or a business process. It could be a particular pool of knowledge in running your desktop environment. There's really an incredible range of options out there.

Questions for the CIO

What every CIO needs to be doing is standing back and asking, "What do we really need to be the best at, and where is the critical intellectual property that we have to own?" If you're not running a particular application or business process at the best possible cost structure, or you're not operating your infrastructure at the best possible cost structure, then why not give it to somebody else who can do a better job?

The cost structures associated with running infrastructure as a service (IaaS) are so dramatically lower, and so compelling, that if you can find a trusted provider, cloud computing lets you move at least your lower-risk workloads there and experiment with those kinds of new techniques.

The other nice thing we like about cloud computing is that there is at least a perception that it's going to be pretty nimble, which means you'll be able to move services in and out of your firewall, depending on where the need is or how much demand you have.

It will give you a little bit of agility to respond to the changing needs of the business without having to go through a long capital-procurement cycle. The only thing I would say about cloud is be cautious, because it's still early, and we're seeing a lot of experimentation.

The most important thing is to pick cloud providers that you can trust, and make sure that your line of business people and people in your organization, when they do experiment, are still putting in the right governance approach to make sure that what's going out there is something that doesn’t introduce extra risk to your business.

You have to trust your provider if you're putting data out there in the cloud. Do you trust how that data is being handled? If that cloud infrastructure is part of a business-critical service, how are you measuring it to make sure that it's actually supporting the performance, availability, and security needs of the business?

There’s a lot of diligence that needs to be put in place, so that cloud becomes less an experiment and more a critical element of how you can address this cost-structure issue.
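
As one small, concrete example of that diligence, a team could compare the availability it measures itself against the level the business requires. The sketch below is hypothetical; the SLA target and probe counts are invented for illustration.

```python
# Hypothetical check of a cloud service's measured availability
# against what the business requires. The probe counts would come
# from your own monitoring, not from the provider's claims.

def availability_pct(ok_checks: int, total_checks: int) -> float:
    return 100.0 * ok_checks / total_checks

sla_target_pct = 99.9   # assumed business requirement
checks_total = 43_200   # e.g., one probe per minute for 30 days
checks_ok = 43_150      # probes that met the response criteria

measured = availability_pct(checks_ok, checks_total)
print(f"Measured availability: {measured:.3f}% (target {sla_target_pct}%)")
if measured < sla_target_pct:
    print("Below target: raise it with the provider and review the SLA.")
```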

Gardner: Now, when we talk about cost structure, I would think that's even more critical for these cloud providers in order for them to pass along the savings. They themselves must put into place many of the things we have talked about today.

Purohit: That's right. Cloud providers have to push the needle right to the edge in order to compete. They're using the best possible new technology: blade computing, virtualization, automation of everything, and new service-oriented architecture (SOA) technologies, so that they can build small component applications and stitch them together super fast.

The right governance

That's the value that they're providing. Then, the challenge is that you've got to make sure that not only do they have the great innovation, and great cost structure, but you trust what they are doing and that they have the right governance around it. I think that's really going to be what separates the lowest-cost cloud providers from the ones that you want to bet your business on.

Gardner: Is there anything else you want to offer in terms of thinking about cost optimization and how to get through the next year or two, where we are flipping ratios but are also maintaining lower total cost?

Purohit: I want to go back to this innovation bucket because, as I said, you don't want to come out of this cycle as a CIO who was associated only with lowering cost and didn't fundamentally move the needle on making the business more competitive.

You have limited ability to make those bets. So, the best bets are the ones that are very prevalent and very top of mind for the business executives, the ones that really change the dynamic in terms of competitiveness, sales productivity, or the way they engage their customers.

The most consistent projects we see out there, the kind that are good bets for those innovation dollars, are around a theme we call application modernization.

What's happening right now in the industry is, we believe, the biggest revolution in enterprise application technology in probably 10 years. It's a composite of things. You build applications with these new Agile development methods.

All of these rich Internet technologies are revolutionizing the way you visualize and interact with applications, crossing over from the consumer world into the enterprise world. A whole new wave of application platform technology is being introduced by SAP, Oracle, and Microsoft. And SOA is becoming very real, so that you can actually integrate these applications very quickly.

Our view is that the companies that use this opportunity to modernize their applications, and gain this rich, interactive, visual experience where they can nimbly integrate various application components to innovate and to interact better with their customers and salespeople, are the ones that are going to emerge from this downturn as the most successful at leveraging technology to win in the marketplace.

We really encourage customers to take a very hard look at application modernization, and we're helping them get there with the scarce innovation dollars that they have.

Gardner: Very good. We've been discussing the need to implement best methods and achieve higher cost optimization by reversing the ratio of maintenance-and-support spending to innovation and transformation. Helping us along in our discussion, we've been joined by Robin Purohit, Vice President and General Manager for HP Software and Solutions. Thanks so much, Robin.

Purohit: Thanks, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect Podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast with HP's Robin Purohit on the challenges that CIOs face in the current economic downturn and how to prepare their businesses for recovery. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

You may also be interested in: