Thursday, December 17, 2009

Executive Interview: HP's Robin Purohit on How CIOs Can Contain IT Costs While Spurring Innovation Payoffs

Transcript of a BriefingsDirect podcast with HP's Robin Purohit on the challenges that CIOs face in the current economic downturn and how to prepare their businesses for recovery.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast executive interview that focuses on implementing the best methods for higher cost optimization in IT spending. [See HP news from Software Universe on cloud enablement technologies.]

To better define the challenges facing CIOs today, and to delve into what can help them properly react, we are here with Robin Purohit, Vice President and General Manager for HP Software and Solutions. Welcome back to BriefingsDirect, Robin.

Robin Purohit: Wonderful to be here again with you, Dana.

Gardner: Clearly, the cost-containment conundrum of "do more for less" -- that is, while still supporting all of your business requirements -- is going to be with us for quite some time. I wonder, Robin, how are CIOs reacting to this crisis, now that we're more than a full year into it?

Purohit: Well, just about every CIO I've talked to right now is in the middle of planning their next year’s budget. Actually, it's probably better to say preparing for the negotiation for next year’s budget. There are a couple of things.

The good news is that this budget cycle doesn’t look like last year’s. Last year’s was very tough, because the financial collapse really was a surprise to many companies, and it required people to very quickly constrain their capital spend, their OPEX spend, and just turn the taps off pretty quickly.

We saw a lot of CIOs getting surprised toward the end of 2008 and the beginning of 2009, and just having to stop things, even things that they knew were critical to their organization's success and to their business success.

So, the good news is that we're not in that situation anymore, but it's still going to be tough. What we hear from CIO Magazine is that about two-thirds of the companies out there plan to have either flat or lower IT budgets for next year. A small number are actually trying to increase spend.

Every CIO needs to be extremely prepared to defend their spend on what they are doing and to make sure they have a great operational cost structure that compares to the best in their industry.

They need to be able to prepare to make a few big bets, because the reality is that the smartest companies out there are using this downturn as an advantage to make some forward looking strategic bets. If you don't do that now, the chances are that, two years from now, your company could be in a pretty bad position.

Gardner: Given that budgets are either flat or still declining, and that this might last right through 2010, it means we have to look at capital spending. I think a lot of costs are probably locked in or have already been dealt with. When it comes to capital spending, how are these budgets being managed?

Important things

Purohit: Well, with capital spend, there are a couple of pretty important things to get done. The first is to have an extremely good view of the capital you have and where it is in the capital cycle.

You need to know what can be extended in terms of its life, what can be reused, and what has to be refreshed. Then, when you do refresh it, there are some great new ways of using capital on servers, storage, and networking that have a much lower cost structure, and are much easier to operate, than the systems we had three or four years ago.

Quite frankly, we see a lot of organizations still struggling to know what they have, who is using it, what they are using it for, and where it is in the capital life cycle. Having all of that information timely, accurate, and at your fingertips as you enter the planning cycle is extraordinarily important and fundamental.

Gardner: It certainly seems that the capital spending you do decide on should be of a transformational nature. Is that fair?

Purohit: Yes, it's true. I should have said that. Capital, as we all know, is not only hardware but also software. A lot of our customers are taking a hard look at the software licenses they have to make sure they are being used in the best possible way.

"Today's innovation is tomorrow’s operating cost."



Now, the capital budget that you can secure needs to be used in very strategic ways. We usually advise customers to look in two buckets.

One, when you are going to deploy new capital, always make sure that it's going to be able to be maintained and sustained in the lowest-cost way. The way we phrase this is, "Today's innovation is tomorrow’s operating cost."

In the past, we’ve seen mistakes made, where people deployed new capital without really thinking how they were going to drive the long-term cost structure down in operating that new capital. So that's the first thing.

The second is that the company wants to see the CIO use capital to support its most important business initiatives, which are usually associated with revenue growth: expanding the sales force, standing up new business units, funding a competitive program, or perhaps a new e-commerce presence.

New business agenda

It's imperative that the CIO shows as much as possible that they're applying capital to things that clearly align with driving one of those new business agendas that's going to help the company over the next three years.

The requests with the best chance of getting approved are the ones that clearly fall into one of those buckets: either dramatically lowering the ongoing cost structure through new technologies, or clearly tying the capital spend to something a line-of-business executive is trying to do over the next two or three years.

Gardner: It seems that in order to know whether your spend is transformational, you need to gain that financial transparency, have a better sense of the true cost and true inventory, and move toward the transformational benefits. But, then you also need to be able to measure them, and I think we are all very much return-on-investment (ROI) minded these days. How do we reach that ability to govern and measure once we put things into place?

Purohit: It's a great point. The reality is the CIO has been a bit of a cobbler’s child for some time. They've done a great job putting in systems and applications that support the business, so that a sales executive or a business unit executive has all of the business process automation and all of the business information at their fingertips in real-time to go, and to be competitive and be aggressive in the marketplace.

CIOs traditionally have not had that same kind of application. While they can go through a manual and pretty brutal process to collect all this information, they haven't had real-time financial information, not only on what they have or plan to do, but also to track, on an almost weekly basis, their spend versus plan.

All the CFO really cares about is whether you are on track on your financial variance and, if you aren't, what you are doing to optimize in real time to the changing realities of a budget that, for most CIOs these days, is being adjusted monthly.

This is where we really see an opportunity: helping customers put in place IT financial management solutions, which are not just planning tools for understanding what you have, but essentially a real-time financial analytic application that is as timely and accurate as an enterprise resource planning (ERP) system or a business intelligence (BI) system supporting the company's business processes.
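
To make that idea concrete, here is a minimal sketch in Python of the kind of spend-versus-plan variance check being described. The categories, figures, and tolerance are hypothetical illustrations, not anything from an HP product.

```python
# Minimal sketch of spend-versus-plan variance tracking (hypothetical data):
# flag budget lines whose actuals drift beyond a tolerance so they can be
# re-optimized mid-cycle rather than discovered at year end.

PLAN = {"labor": 4_200_000, "hardware": 1_500_000, "software_licenses": 900_000}
ACTUALS = {"labor": 4_550_000, "hardware": 1_320_000, "software_licenses": 990_000}
TOLERANCE = 0.05  # flag anything more than 5% off plan

def variance_report(plan, actuals, tolerance):
    report = []
    for category, planned in plan.items():
        actual = actuals.get(category, 0)
        variance = (actual - planned) / planned
        report.append({
            "category": category,
            "planned": planned,
            "actual": actual,
            "variance_pct": round(variance * 100, 1),
            "flag": abs(variance) > tolerance,
        })
    return report

if __name__ == "__main__":
    for line in variance_report(PLAN, ACTUALS, TOLERANCE):
        status = "REVIEW" if line["flag"] else "on track"
        print(f"{line['category']:<18} {line['variance_pct']:>6}%  {status}")
```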

Gardner: If we have a ratio in many organizations where we have 70 percent roughly for maintenance and support and 30 percent for innovation, we're going to need to take from Peter to pay Paul here.

What is it that you can do on that previously "untouchable" portion of the budget? How can we free up capacity in the data-center, rather than build a new data center, for example?

It's a cyclical thing

Purohit: The joke I like to tell about the 70:30 ratio is that, unfortunately, we've been talking about that same ratio for about 10 years. So, somebody is not doing something right. But the reality is that it's a cyclical thing. Today's innovation is tomorrow's maintenance.

It's important to realize that there are cycles where you want to move the needle and there are cycles where you can't. Right now, we are in a cycle where every CIO needs to be moving that 70:30 to 30:70. That's because, first of all, they'll be under cost pressure. I really believe that the leaders of tomorrow in the business world are going to be created during the downturn. That's historically what we’ve seen. McKinsey has some good write-ups about that.

It means that you need to be driving as much innovation as possible and getting to that 70 percent. Now, in terms of how you do that, it's making sure that the capital spend that you have, that everything in the data center you have, is supporting a top business priority. It's the most important thing you can do.

One thing that won't change is that demand from the business will always outstrip your supply of capital and labor. What you can do is make sure that every person you have, every piece of equipment you have, and every decision you are making is in the context of something that supports an immediate business need or a key element of business operations.

When we work with a lot of customers, we help them do that assessment, and I'll give you one example. A utility company I worked with was able to identify up to $37 million of operational and capital cost savings in the first couple of years just by limiting stuff that wasn't critical to the business.

There are lots of opportunities to be disciplined in assessing your organization: how you spend capital, how you use your capital, and what your people are working on. I wouldn't call it waste; I would call it better discipline about whether what you're doing is truly business critical or not.

Gardner: I suppose that to have the financial visibility and transparency that allows that triage to take place, and then to move toward this flip of the 70:30 ratio, we have to involve people and process and not just technology, right?

Purohit: That's right. If you don't get the people and process right, then new technologies, like virtualization or blade systems, are just going to cause more headaches downstream, because those things are fantastic ways of saving capital today. Those are the latest and greatest technologies. Four or five years ago, it was Linux and Windows Server.

It also means there are more things, and more new things, to manage. If you don't have extremely disciplined processes that are automated, if your whole team isn't working from one playbook on what those processes are, and if there isn't a collaborative, automated way for them to work on those processes, your operating costs are just going to increase as you embrace the new technologies that lower your capital. You've got to do both at the same time.

Gardner: Now, HP Software and Solutions has been describing this as operational advantage, and that certainly sounds like you're taking into consideration all the people and process, as well as the technology. Tell me a little more about what you've been doing in the past several months and how this will impact the market in 2010.

Best in class

Purohit: When we talk about operational advantage, we talk first of all about getting close to a best-in-class benchmark of your IT costs as a percentage of your company's revenue.

I say close to best, because you never want to race to the bottom and be the lowest-cost provider if you want to be strategic. But you'd better be close. Otherwise, your CFO is going to be breathing down your neck with lots of management consultants asking why you are not there.

The way you get there is through a couple of key steps that we have been recommending. First and foremost, you have to standardize and automate as much as you can.

The great news is that really sophisticated technology now exists to apply to this problem. You can take a lot of the work that you know how to do every day, work that involves a lot of people and a lot of manual steps that could be done incorrectly if you're not careful, and standardize and automate it.

Standardizing and automating those tasks makes sure they get done in a very efficient way, in the cheapest possible way, and in the same way every time. We've seen customers take $10 million of operating cost out in six to nine months just by automating what they know they need to do repeatably every time.

The second thing we really work on with people is getting that financial visibility: getting all of their financial information on labor, projects, capital, and plans in one place, with one data model, so that they have a coherent way to plan and optimize their spend.

Those two things are huge levers. The third thing we've really started to work with people on is all of these innovation projects that use brand-new techniques, like Agile development.

How do you make that labor pool extremely effective using a new technique like Agile development? We've done a ton of work to roll out and automate those best practices, and to show how to get the advantages of faster innovation using Agile development without creating a bunch of risks as you move faster. Those are the three really fundamental elements of what we're doing right now.

Gardner: I suppose that when you're taking on something as complex as this, you need some goal, vision, and direction about what the realistic targets are for these cost-optimization activities. You mentioned the percentage of revenue for IT spend as one gauge. What sort of results do you think people can meaningfully and realistically get in terms of some of these larger metrics?

Important goal

Purohit: We've seen the best companies actually implement this swap from 70:30 to 30:70. So, getting to 30 percent of your spend on operating costs in this cycle, where you need to be investing for the future, is absolutely an achievable and important goal. The second thing is to make sure that you're benchmarking your cost of IT versus revenue against the most important competitors in your industry.

The reason that I phrased it that way is that it's not a general benchmark and it's not just the lowest-cost provider in your geography or industry, but you want to know what your most important competitor is doing, using technology as an advantage for both cost structure and innovation.

You want to understand that, spend probably something similar, and then hopefully be smarter than they are in how you implement that strategy. Those are two really important things.
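
As a back-of-the-envelope illustration of those two targets, the arithmetic looks roughly like the sketch below; all of the figures are hypothetical and are only meant to show how the ratio and the benchmark relate.

```python
# Back-of-the-envelope illustration of the two targets discussed above
# (all figures hypothetical): the operations-versus-innovation split and
# IT cost as a percentage of company revenue.

company_revenue = 2_000_000_000   # $2B annual revenue
it_budget = 80_000_000            # $80M IT budget, i.e. 4% of revenue
operations_spend = 56_000_000     # today: 70% keeping the lights on
innovation_spend = it_budget - operations_spend

print(f"IT cost as % of revenue: {it_budget / company_revenue:.1%}")
print(f"Current split (ops:innovation): "
      f"{operations_spend / it_budget:.0%}:{innovation_spend / it_budget:.0%}")

# Target state: flip to 30% operations, 70% innovation on the same budget.
target_ops = 0.30 * it_budget
print(f"Operating cost to take out to hit 30%: "
      f"${operations_spend - target_ops:,.0f}")
```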

The third thing is that, depending on where your IT organization is in its maturity, there are opportunities to take out as much as 5 to 10 percent of your operating cost just by being more disciplined.

Say you're a new CIO coming into an organization and you see a lack of standardization, a lack of centers of excellence, and a lot of growth through mergers and acquisitions. There is a ton of opportunity there to take out operating cost.

We've seen customers generally take out 5 to 10 percent, when a new CIO comes on board, rationalizes everything that's being done, and introduces rigorous standardization. That's a quick win, but it's really there for companies that have been probably a little earlier in the maturity cycle of how they run IT.

Gardner: Another way of reducing this percentage of total revenue, I have to imagine from all the interest in cloud computing these days, comes from examining and leveraging, when appropriate, a variety of different sourcing options, both new and old. How does that relate to this cost-optimization equation?

Purohit: That's a great point. The same thing is happening now that happened in 2001, when we had our last major downturn. In 2001, we saw a rise of outsourcing and offshoring, particularly to places like India.

That really helped companies lower the cost structure of their labor dramatically and really assess whether they needed to be doing some of these things in-house. So, that clearly remains an option. In fact, most companies have figured out how to do that already. Everybody has a global organization that moves the right labor to the right cost structure.

What's new now with the outsourcing model and the cloud model -- whether you want to call it cloud or software as a service (SaaS) -- is that there's an incredibly rich marketplace of boutique service shops and boutique technology providers that can provide you either knowledge or technology services on demand for a particular part of your IT organization.

That could be a particular application or a business process. It could be a particular pool of knowledge in running your desktop environment. There's really an incredible range of options out there.

Questions for the CIO

What every CIO needs to be doing is standing back and asking, "What do we really need to be the best at, and where is the critical intellectual property that we have to own?" If you're not running a particular application or business process at the best possible cost structure, or you're not operating the infrastructure at the best possible cost structure, then why not give it to somebody else who can do a better job?

The cost structures associated with running infrastructure as a service (IaaS) are dramatically lower and very compelling, so if you can find a trusted provider, cloud computing allows you to experiment with those kinds of new techniques in at least the lower-risk areas.

The other nice thing we like about cloud computing is that there is at least a perception that it is going to be pretty nimble, which means that you'll be able to move services in and out of your firewall, depending on where the need is or how much demand you have.

It will give you a little bit of agility to respond to the changing needs of the business without having to go through a long capital-procurement cycle. The only thing I would say about cloud is be cautious, because it's still early, and we're seeing a lot of experimentation.

The most important thing is to pick cloud providers that you can trust, and make sure that your line of business people and people in your organization, when they do experiment, are still putting in the right governance approach to make sure that what's going out there is something that doesn’t introduce extra risk to your business.

Trust your provider. If you are putting data out there in the cloud, do you trust how that data is being handled? If that cloud infrastructure is part of a business-critical service, how are you measuring it to make sure it's actually supporting the performance, availability, and security needs of the business?

There’s a lot of diligence that needs to be put in place, so that cloud becomes less an experiment and more a critical element of how you can address this cost-structure issue.

Gardner: Now, when we talk about cost structure, I would think that's even more critical for these cloud providers in order for them to pass along the savings. They themselves must put into place many of the things we have talked about today.

Purohit: That's right. Cloud providers have to push the needle right to the edge in order to compete. They're using the best possible new technology: blade computing, virtualization, automation of everything, and new service-oriented architecture (SOA) technologies, so that they can build small component applications and stitch them together very fast.

The right governance

That's the value that they're providing. Then, the challenge is that you've got to make sure that not only do they have the great innovation, and great cost structure, but you trust what they are doing and that they have the right governance around it. I think that's really going to be what separates the lowest-cost cloud providers from the ones that you want to bet your business on.

Gardner: Is there anything else you want to offer in terms of thinking about cost optimization and how to get through the next year or two, where we are flipping ratios but are also maintaining lower total cost?

Purohit: I want to go back to this innovation bucket, because, as I said, you don't want to come out of this cycle as a CIO who was associated only with lowering cost and didn't fundamentally move the needle on making the business more competitive.

You have limited ability to make those bets. So, the best bets are ones that are very prevalent and very top of mind for the business executives, and that really change the dynamic in terms of competitiveness, sales productivity, or the way they engage their customers.

The most consistent projects we see out there that are good bets for those innovation dollars are around a theme we call application modernization.

What's happening right now in the industry is what we believe is the biggest revolution in enterprise application technology in probably 10 years. It's a composite of things. You build applications with these new Agile development methods.

All of these rich Internet protocols are revolutionizing the way you visualize and interact with applications, crossing over from the consumer world into the enterprise world. A whole, new wave of application platform technology is being introduced by SAP, Oracle, and Microsoft. And, SOA is becoming very real, so that you can actually integrate these applications very quickly.

Our view is that the companies who use this opportunity to modernize their applications and have this rich interactive visual experience, where they can nimbly integrate various application components to innovate and to interact with their customers or their sales people better, are the ones that are going to emerge from this downturn as the most successful leveraging technology to win in the marketplace.

We really encourage customers to take a very hard look at application modernization, and are helping them get there with those scarce innovation dollars that they have.

Gardner: Very good. We've been discussing the need for implementing best methods and achieving higher cost optimization by looking at reversing the ratio of maintenance and support to innovation and transformation. Helping us along our journey in this discussion, we've been joined by Robin Purohit, Vice President and General Manager for HP Software and Solutions. Thanks so much, Robin.

Purohit: Thanks, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect Podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast with HP's Robin Purohit on the challenges that CIOs face in the current economic downturn and how to prepare their businesses for recovery. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Friday, June 26, 2009

IT Financial Management Provides Required Visibility into Operations to Reduce Total IT Costs

Transcript of a BriefingsDirect podcast on how IT departments should look deeply in the mirror to determine and measure their costs and how they bring value to the enterprise.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on bringing improved financial management capabilities to enterprise IT departments. The global economic downturn has accelerated the need to reduce total IT cost through identification and elimination of wasteful operations and practices. At the same time, IT departments need to better define and implement streamlined processes for operations and also for proving how new projects begin and unfold.

Knowing the true costs and benefits of complex and often sprawling IT portfolios quickly helps improve the financial performance of IT operations. Gaining real-time visibility into dynamic IT cost structures provides a powerful tool for reducing cost, while also maintaining and improving overall performance. Holistic visibility across an entire IT portfolio also enables the visual analytics that help probe for cost improvements and uncover waste.

Here to help us understand the relationship between IT, financial management, and doing more for less in tough times are two executives from Hewlett-Packard (HP). Please help me welcome Ken Cheney, director of product marketing for IT Financial Management at HP Software and Solutions. Welcome, Ken.

Ken Cheney: Thanks, Dana, I appreciate the opportunity.

Gardner: We’re also joined by John Wills. He’s a practice leader for the Business Intelligence Solutions Group at HP Software and Solutions. Welcome, John.

John Wills: Hi, thank you, Dana.

Gardner: Ken, let's start with you. We've heard for quite some time that IT needs to run itself more like a business, to do more with less, and to provide better visibility for the bean counters. But now that we're in a tough economic climate, this is perhaps a more pressing concern. Give me a sense of what's different about running IT as an organization and a business now versus two years ago.

Cheney: Dana, the economy has definitely changed the game in terms of how IT executives are operating. I and the others within HP are hearing consistently from IT executives that cost-optimization, cost-containment, and cost-reduction initiatives are the top priority being driven from the business down to IT.

IT organizations, as such, have really shifted their focus from driving a lot of new initiatives and innovation to dealing with situations such as how to manage merger and acquisition (M&A) processes, how to better leverage existing IT assets, and how to provide better decision-making capabilities in order to effectively control cost.

The landscape has changed in such a way that IT executives are being asked to be much more accountable about how they’re operating their business to drive down the cost of IT significantly. As such, they're having to put in place new processes and tools in order to effectively make those types of decisions.

Gardner: Now, John, tell me about the need for better visibility. It seems that you can’t accomplish what Ken's describing, if you don’t know what you have.

Wills: Right, Dana. That's absolutely correct. If all of your information is scattered around the IT organization and IT functions, and it's difficult to get your arms around, then you're exactly right. You certainly can't do a good job managing going forward.

A lot of that has to do with being able to look back and to have historical data. Historical data is a prerequisite for knowing how to go forward and to look at a project’s cost and where you can optimize cost or take cost down and where you have risk in the organization. So, visibility is absolutely the key.

Gardner: It’s almost ironic that the IT department has been helping other elements of the enterprise do the exact same thing -- to have a better sense of their data and backwards visibility into process and trends. Business intelligence (BI) was something that IT has taken to the business and now has to take back to itself.

Wills: It is ironic, because IT has spent probably the last 15 years taking tools and technologies out into the lines of business, helping them integrate their data and answer business questions so they could optimize, capture more customers, reduce churn in certain industries, and control cost. Now, it's time for IT to look inward and do that for itself.

Gardner: When we start to take that inward look, I suppose it’s rather daunting. Ken, tell us a little bit about how one gets started. What is the problem that you need to address in order to start getting this visibility that can then provide the analytics and allow for a better approach to cost containment?

From managed to siloed

Cheney: If you look at the situation IT is in, businesses actually had better management systems in place in the 1980s than IT has in place today. The visibility and control across the investment lifecycle were there for the business with the likes of enterprise resource planning (ERP) and corporate performance management capabilities. Today, IT operates in a very siloed manner, where the organization does not have a holistic view across all of its activities.

The processes it's driving are often ad hoc rather than consistent. The reporting methods have grown up through these silos and, as such, the data tends to be worked through manual processes and tends to be error-prone. There's a tremendous amount of latency there.

The challenge for IT is how to develop a common set of processes that drive data in a consistent manner, allowing for effective control over the execution of the work going on in IT as well as decision control, meaning the right kind of information that executives can act on.

Gardner: John, in getting to understand what’s going on across these silos in IT, is this a problem that’s about technology, process, people, or all three? What is the stumbling block to automating some of that?

Wills: That's a great question. It's really a combination of the three. Just to be a little more specific, when you look at any IT organization, you see that a lot of the cost is around people and labor. But then there is a set of physical assets -- servers, routers, all the physical assets involved in what IT does for the business. And there is a financial component that cuts across both of those major areas of spend.

As Ken said, when you look back in time and see how IT has been maturing as an organization and as a business, you have a functional part of the organization that manages the physical assets, a functional part that manages the people, manages the projects, and manages the operation. Each one of those has been maturing its capability operationally in terms of capturing their data over time.

Industry standards like the Information Technology Infrastructure Library (ITIL) have been driving IT organizations to mature. As they mature, they have an opportunity to take it to the next level by extracting that information and synthesizing it to make it more useful for driving and managing IT on an ongoing basis.

Gardner: Ken, you can’t just address new technology at this juncture. You can’t just say, "We’re going to change our processes." You can’t just start ripping people out. So, how do you approach this methodologically? Where do you start on all three?

Cheney: IT organizations are going to be starting in multiple places to address this problem. The industry is in a good position to address this problem. Number one, as John mentioned, process standardization has occurred. Organizations are adopting standards like ITIL to help improve the processes. Number two, the technology has actually matured to the point where it’s there for IT organizations to deploy and get the type of financial information they need.

We can automate processes. We can drive the data that they need for effective decision-making. Then, there is also the will, in terms of the pressure to better control cost. IT spend these days makes up about 2 to 12 percent of most organizations' total revenue, a sizable component.

Gardner: I suppose there also has to be a change in thinking here at a certain level. If you're going to start managing costs and behaving like a business rather than a cost center, you have to make a rationale for each expenditure, both operationally and on a capital-expenditure basis. That requires a cultural shift. Can you get into that a little bit?

Speaking the language of business

Cheney: It sure does. IT traditionally has done a very good job communicating with the business in the language of IT. It can tell the business how much a server costs or how much a particular desktop costs. But it has a very difficult time putting the cost of IT in the language of the business -- being able to explain to the business the cost of a particular service that the business unit is consuming.

For example, quote to cash: How much is a particular line of business spending on quote to cash? How much does email cost, based on actual usage per employee? These are some of the questions the business would love answered, because they're trying to drive business initiatives, and these days an IT initiative is really part of a business initiative.

In order to effectively assess the value of a particular business initiative, it's important to know the actual cost of the initiative or process that IT is supporting. IT needs to step up and provide that information, so that the business as a whole can make better investment decisions.

Gardner: I suppose business services are increasingly becoming the coin of the realm, and defining things such as a processor, the number of cores, or a license per user or seat doesn't really translate into what a business service costs. John, how does BI come onto the scene here and help gather all of the different aspects of a service so that it can be accounted for?

Wills: It all ties together very strongly. Listening to what Ken was saying about tying to the investment options and providing that visibility ties directly to what you are asking about BI. One of the things that BI can help with at this point is to identify the gaps in the data that's being captured at an operational level and then tie that to the business decision that you want to make.

So again, Dana, back to one of your earlier questions about whether it's a people, process or technology issue, my answer would be that it's really all of the above. BI comes along and says, "Well, gee, maybe you’re not capturing enough detailed information about business justification on future projects, on future maintenance activity, or on asset acquisition or the depreciation of assets."

BI is going to help you collect that and then aggregate it into answers to the central questions that a CIO or senior IT management may ask. As Ken said, it's very important that BI, at the end of that chain of activities, helps communicate that back in terms the business can understand, so they can do an apples-to-apples comparison of where they would like IT to satisfy their needs with the budget at hand. Dana, that goes back again to one of your earlier questions. That's one of the keys in helping IT shift from just being a cost center to being an innovator that helps drive the business.

Gardner: I suppose that as we move from manual processes in IT toward these visualization and analytical tools, there's also a cultural shift. A printout might work well in the IT department, but if you take that to your decision maker on the business side, they're going to say, "Where's my dashboard? Where are my dials?" What's the opportunity here to make this into more of a visual benefit in terms of understanding what's going on in IT and in cost? Why don't we take that to Ken?

Getting IT's house in order

Cheney: In terms of the opportunity, it’s really around helping IT get its own house in order. We look at the opportunity as being one of helping IT organizations put in place the processes in such a way that they are leveraging best practices, that they're leveraging the automation capabilities that we can bring to the table to make those processes repeatable and reliable, and that we can drive good solid data for good decision-making. At least, that’s the hope.

By doing so, IT organizations will, in effect, cut through a lot of the silo mentality, the manual error-prone processes, and they'll begin operating much more as a business that will get actionable cost information. They can directly look at how they can contribute better to driving better business outcomes. So, the end goal is to provide that capability to let IT partner better with the business.

Gardner: Tell me a little bit more about the solutions and services that you'll be announcing at HP’s Software Universe event?

Cheney: At Software Universe this year, we rolled out a new solution to help customers with IT financial management. For quite some time, we’ve been in the business of doing project portfolio management (PPM) with HP Project Portfolio Management Center, as well as in the business of helping organizations better manage their IT assets with HP Asset Manager.

We have large customer bases that are leveraging those tools. With customers who are using the PPM product as well as the Asset Manager product from HP, we can effectively capture both labor and non-labor cost. PPM tracks what people are working on, effectively managing the resources and capturing the time and cost associated with what those resources are doing. On the non-labor side, it's all of the assets out there -- physical, logical, and virtual. We're talking about servers and software, so that we can pull that together to understand the total cost of ownership.

We’ve brought together what we’re doing within PPM as well as within Asset Management with a new product called HP Financial Planning and Analysis. This product effectively allows IT organizations to consolidate their budgets, as well as their costs. It can allocate the cost as appropriate to who they view as actually consuming those particular services that IT is delivering.

Then, we provide the analytic and reporting capabilities on top of that to allow IT organizations to better communicate with the business and to better control, optimize, and make decisions around cost. They can effectively drive decision making right down to the execution of the work that's occurring within the various processes of IT. That's a big part of what we're delivering with our IT financial management capability.

Gardner: So, tell us about the products and solutions that are coming into the market?

Cheney: We have a new solution that we’re announcing as part of the HP Financial Planning and Analysis offerings.

Gardner: Does that have several modules, or are there certain elements to it -- or more details on how that is rolling out?

Service-based perspective

Cheney: The HP Financial Planning and Analysis product allows organizations to understand costs from a service-based perspective. We're providing a common extract, transform, and load (ETL) capability, so that we can pull information from many data sources. We can pull from our PPM product and our asset management product, but we also understand that customers are going to have other data sources out there.

They may have other PPM products they’ve deployed. They may have ERP tools that they're using. They may have Excel spreadsheets that they need to pull information from. We'll use the ETL capabilities to pull that information into a common data warehouse where we can then go through this process of allocating cost and doing the analytics.
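
As a rough illustration of that flow, here is a minimal Python sketch of pulling cost records from several stand-in sources and allocating them to the services that consume them. The source names, cost items, and allocation rules are hypothetical and do not reflect the actual HP Financial Planning and Analysis data model.

```python
# Minimal sketch of an extract-transform-load and allocation flow
# (hypothetical sources and rules): pull cost records from several systems
# into one place, then allocate them to the business services that consume them.

from collections import defaultdict

def extract():
    # Stand-ins for PPM exports, asset records, and ad hoc spreadsheets.
    return [
        {"source": "ppm",    "item": "crm_upgrade_labor", "cost": 120_000},
        {"source": "assets", "item": "server_farm_a",     "cost": 60_000},
        {"source": "excel",  "item": "email_licenses",    "cost": 30_000},
    ]

# Allocation rules: what fraction of each cost item each service consumes.
ALLOCATION = {
    "crm_upgrade_labor": {"sales_crm": 1.0},
    "server_farm_a":     {"sales_crm": 0.4, "email": 0.6},
    "email_licenses":    {"email": 1.0},
}

def load_and_allocate(records):
    service_cost = defaultdict(float)
    for rec in records:                       # "transform + load"
        for service, share in ALLOCATION.get(rec["item"], {}).items():
            service_cost[service] += rec["cost"] * share
    return dict(service_cost)

if __name__ == "__main__":
    for service, cost in load_and_allocate(extract()).items():
        print(f"{service:<10} ${cost:,.0f}")
```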

Gardner: So, John, going back to that BI comparison. It sounds a lot like what people have been doing with trying to get single view of a customer in terms of application data.

Wills: It really is. It's a single view, except in this case we're getting a single view of cost across many different dimensions. As Ken said, we really want to formalize the way people bring cost data in from all of these Excel spreadsheets and Access databases that sit under somebody's desk. Somebody keeps the monthly numbers in their own spreadsheets in a different department, and they're spread around all of these different systems. We really want to formalize that.

As to your previous question about visualization, it’s not only about formalizing it and pulling it altogether. It’s about offering very powerful visualization tools to be able to get more value and to be able to see immediately where you could take advantage of cost opportunities in the organization.

Part of Financial Planning and Analysis is Cost Explorer, a very traditional BI capability in terms of visualizing data, applied to IT cost. You can search through the data and look at it from many different dimensions, with color coding and variance views, and have this information pop out at you.

Gardner: It's one thing to be able to gather, visualize, and put all this information in the context of a cost item or a service, but the larger payback comes from actually being able to associate that cost with the user or organization that's consuming these services at some level or another. How do we get from the position of visibility and analytics to a chargeback mechanism?

Cheney: Most customers that I talk to these days are very keen on jumping immediately to the charge back and value side of the equation. I like to say, "Let’s start by walking before we run," with the full understanding that the end goal really is being able to show the value that IT is delivering and be able to charge back for the services that are actually being consumed.

Most organizations haven't even put in place the processes they need, which is why, when we talk about what we're doing with IT financial management, we want to make sure customers understand that it's a complete solution, where the underlying processes are the foundation for the end goal of understanding value and doing chargeback. To get to that nirvana of understanding the value of IT, customers need to put in place processes around capturing labor costs and non-labor assets and effectively managing the IT investment lifecycle end to end.

On top of that, by doing the cost aggregation and the analytics that we are doing with the Financial Planning and Analysis offering, you get the cost visibility. Once you understand the cost, you can then go through the process of pricing out what your services are. At that point, once you are able to actively price your services, you're able to charge back for the consumption of those services.
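
A simple sketch of that cost-to-price-to-chargeback progression follows, using hypothetical numbers and consumers rather than anything from HP's offering.

```python
# Sketch of the cost -> price -> chargeback progression described above
# (hypothetical figures and business units).

# Step 1: cost visibility -- fully loaded annual cost of the email service.
email_service_cost = 360_000.0

# Step 2: price the service per unit of consumption (per mailbox per month).
mailboxes = 3_000
unit_price = email_service_cost / (mailboxes * 12)   # $10.00 per mailbox-month

# Step 3: charge each business unit back for what it actually consumed.
consumption = {"sales": 1_200, "engineering": 1_500, "finance": 300}
for unit, boxes in consumption.items():
    print(f"{unit:<12} {boxes:>5} mailboxes  ${boxes * unit_price * 12:,.0f}/yr")
```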

IT underappreciated

Gardner: Over the years, I've heard from a number of IT folks about their frustration at not being appreciated. People don’t understand what goes into being able to provide these services. This perhaps opens up an opportunity or a door for the IT department to explain itself better and perhaps be better appreciated. Does that bear fruit in some of the uses that you’ve come across so far?

Cheney: Absolutely. That is really what we are driving for -- to help IT organizations be much more credible in front of the business, for business to understand what it is that they are actually paying for, and for IT to react much more nimbly to the requests that are coming in from the business.

Wills: You are certainly being more transparent. Put the question of charge back to the side for a moment. Without question, you're able to be more transparent in what the costs are. You're able to use the same terminology, very consistent terminology, that the business understands, which is a huge leap forward for most organizations. When you have that transparency, when you have a common set of terminology in the way that you communicate things, it’s a huge boost for IT to be able to justify how they are spending their budget money.

Gardner: Let me ask an interesting question. Who in IT is responsible for this? Is there a "chief visibility officer," if you will, within the IT department? Who is generally the sign-off on the purchase order for these sorts of products?

Wills: The chief sign-off officer, the chief visibility officer, is the CIO. There is no question. The CIO is the one. It's really interesting. When we talk to accounts, the one who has the burning issues, the one who is most often in front of the business justifying what IT does, is the CIO -- at the highest level, obviously.

That's always interesting, because the CIO has the most immediate pain. Oftentimes, people one or two levels beneath the CIO are grinding through manually pulling data together month after month and sending that data upstairs, so to speak. They don't have the same level of interaction with the end customers to feel that acute pain, but the CIO definitely sees it on a daily basis. Would you agree, Ken?

Cheney: I would. Many CIOs have created essentially an IT finance arm, and they may have a role, such as a CFO for IT or a head of IT finance, that takes all the information rolling up from the people lower down in the organization and tries to make sense of it. This is a manual, very error-prone process these days. So, for the organization charged with making sense of IT finances and associating the actual cost of IT with the services consumed, it is a big challenge. As such, it makes the job of the CIO very difficult when it comes to going out and communicating with the business.

Gardner: Let's see if we can quickly identify some examples. Do you have case studies or customers that you can describe for us who have undertaken some of this, and what have they found? Did they get significant savings? Did they see an improvement in their trust and sense of validation before the business, or are they looking more for efficiency and improving the productivity of their IT departments? Any metrics of success for how these products are being used?

Cheney: In terms of how customers are being successful, we've seen customers these days who are very focused on quick results, meaning that when they deploy what we bring to the table, they do it in a very targeted manner. We recommend that customers do exactly that: tackle this problem in what I call bite-size chunks, where they get a win within a few months at most. We have customers who will start, for example, with a base level of control over their IT processes.

One great starting point would be strategic portfolio management. We recently did a survey of about 200 IT executives and found that 43 percent of them said they have no form of portfolio rigor in place today. We also did a benchmark study with a group of our customers and an organization called the Gantry Group, a third party that does return on investment (ROI) analysis, and we found that, on average, the strategic portfolio work our customers do could save 2 to 15 percent of their total IT budget. That's an area where we can have a very quick, impactful win, and it's a good example.

Another area would be asset inventory and utilization, where we have customers who get started just by understanding what they have out there in terms of servers and desktop software and getting a grip on that. There are immediate savings to be had with that type of initiative as well.

A look at the future

Gardner: That brings up looking at the future. We've heard a lot about virtual desktop infrastructure (VDI), bringing a lot of what was done locally back to the data center, but with cost issues being top of mind around that. Then, we're also hearing quite a bit about cloud computing. It seems to me that we're going to have to start doing some really serious cost benefit analysis about what is the cost to maintain my current client distribution architecture versus going to a VDI or a desktop-as-a-service (DaaS) approach.

I'm also going to need to start comparing and contrasting cloud-based services, applications, and/or infrastructure against what it costs to do the same things internally. Do you see some trends in the field, some future outlook, for how the role of IT will evolve in being able to make these financial justifications?

Cheney: Absolutely. This is an area that we're seeing customers having to grapple with on a consistent basis, and it’s all about making effective sourcing decisions. In many respects, cloud computing, software as a service (SaaS), and virtualization all present great opportunities to effectively leverage capital. IT organizations really need to look at it through the lens of what the intended business objectives are and how they can best leverage the capital that they have available to invest.

Gardner: John, something further to offer?

Wills: There is a huge inflection point right now. Virtual computing, cloud computing, and some of these other trends really point toward the time being now for IT organizations to get their hands around cost at a detailed level and to have a process in place for capturing those costs. The world, going forward, obviously doesn't get simpler; it only gets more complex. IT organizations are really looked to for using capital wisely. They're the decision makers for where to allocate that capital, and some of it's going to be outside the four walls.

We've seen that on the people side of the business with outsourcing for quite some time. Now, it’s happening with the hardware and software side of the business as well. But these decisions are very strategic for the enterprise overall. The percentage of spend, the IT spend-to-revenue, for a lot of these organizations is very large. So, it’s absolutely critical for the enterprise that they get their hands around a process for capturing cost and analyzing cost, if they're going to be able to adapt and evolve as this market continues to change so rapidly.

Gardner: If an outside provider can walk in and say this application or this infrastructure is going to cost this much per employee per month, that’s pretty concrete. If the business decision maker goes back to the IT department and says, "How much is that going to cost from your perspective," they have got to have an answer, right?

Wills: Right. You'd better have an answer for what your fully loaded costs are across every dimension of your business and you'd better understand things like your direct cost, indirect cost, fixed cost, and variable cost. It’s really about looking into the future and predicting not only risk, but opportunity, advising the board and CEO, and saying, "These are our choices and this is the best use of capital."
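As an illustration of that kind of fully loaded comparison, here is a small sketch with hypothetical figures; a real analysis would, as Wills goes on to note, also weigh soft factors such as SLAs, satisfaction, and opportunity cost.

```python
# Sketch of a fully loaded cost comparison (all figures hypothetical):
# roll direct, indirect, fixed, and variable costs into a per-employee-
# per-month number that can stand next to an outside provider's quote.

employees = 5_000
annual_costs = {
    "direct_labor":    1_800_000,   # admins supporting the service
    "indirect_labor":    400_000,   # allocated share of management, finance
    "fixed":           1_200_000,   # depreciation, data center space
    "variable":          600_000,   # power, support tickets, usage-based licenses
}

in_house_per_employee_month = sum(annual_costs.values()) / employees / 12
cloud_quote_per_employee_month = 70.0   # hypothetical provider price

print(f"In-house: ${in_house_per_employee_month:,.2f} per employee per month")
print(f"Provider: ${cloud_quote_per_employee_month:,.2f} per employee per month")
# Soft factors (SLAs, satisfaction, opportunity cost) still need to be weighed.
```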

Gardner: I was just having a chat about cloud computing with Frank Gillett of Forrester Research a few weeks ago. He was saying that when you take a hard look at these costs, in many cases doing it on-premises internally is actually quite a bit more attractive than going to the cloud, but you have to come up with the numbers to actually justify that.

Wills: You have to be able to justify it. There is also the dimension of customer satisfaction. You talk about service-level agreements (SLAs), and you must factor those in. So, you start to get into some of the soft aspects of costing things out and looking at opportunity cost, but you have to factor those in as well. It does show some of the complexity of the problem at hand.

We really believe this is the time for the organizations to seriously get their arms around this.

Gardner: Well, I'm afraid we're about out of time, but we've been learning more about how improved financial management capabilities can help reduce total IT costs through the identification and elimination of wasteful operations. Gaining visibility into actual IT cost structures can also help organizations justify themselves to the business, find the right balance in budgets and future projects, and, looking ahead, be in a good position to compare and contrast internal delivery against virtualization, cloud computing, and some of the other new options for IT acquisition.

I want to thank our panel for getting deeply into this conversation. It's really been fun. I also want to thank our sponsor for today's discussion, HP Software and Solutions, for underwriting its production. We've been joined by Ken Cheney, director of product marketing for IT Financial Management at HP Software and Solutions. Thanks, Ken.

Cheney: Great. Thank you.

Gardner: And also, John Wills, practice leader for the Business Intelligence Solutions Group at HP Software and Solutions. I appreciate your input, John.

Wills: Thank you, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on how IT departments should look deeply in the mirror to determine and measure their costs and how they bring value to the enterprise. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Sunday, March 29, 2009

HP Advises Strategic View of Virtualization So Enterprises Can Dramatically Cut Costs, Gain Efficiency and Usher in Cloud Benefits

Transcript of a BriefingsDirect podcast on virtualization strategies and best practices with Bob Meyer, HP's worldwide virtualization lead.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we present a sponsored podcast discussion on the business case and economic rationale for virtualization implementation and best practices.

****Access More HP Resources on Virtualization****

Virtualization has become more attractive to enterprises as they seek to better manage their resources, cut total costs, reduce energy consumption, and improve the agility of their data centers and IT operations. But virtualization is more than just installing hypervisors. The effects and impacts of virtualization cut across many aspects of IT operations, and the complexity of managing virtualized IT runtime environments can easily slip out of control.

In this podcast, we're going to examine how virtualization can be applied as a larger process and a managed IT undertaking with sufficient tools for governance that allow for rapid, but reasoned, virtualization adoption. We'll show how the proper level of planning and management can more directly assure a substantive economic return on the investments enterprises are making through virtualization.

The goal is to do virtualization right and to be able to scale the use of virtualization in terms of numbers of instances. We also want to extend virtualization from hardware to infrastructure, data, and application support, all with security, control, visibility, and lower risk, and while also helping to make the financial rationale ironclad.

To help provide an in-depth look at how virtualization best practices make for the best economic outcome, we're joined by Bob Meyer, the worldwide virtualization lead in Hewlett-Packard's (HP) Technology Solutions Group (TSG). Welcome to the show, Bob.

Bob Meyer: Thank you very much, Dana.

Gardner: Virtualization is really becoming quite prominent, and we're even seeing instances now where the tough economic climate is accelerating the use and adoption of virtualization. This, of course, presents a number of challenges.

First, could you provide some insight, from HP’s perspective, of how you see virtualization being used in the market now, and how that perhaps has shifted over the past six months or so?

Meyer: When we talk about virtualization -- obviously, it's been around for quite a long time -- people typically start thinking about it as the virtualization of Windows servers. For a couple of years now, that's been the hot value proposition within IT.

The allure there is that when you consider the percentage of budget spent on data center facilities, hardware, and IT operations management, virtualization can have a profound effect on all of these areas.

Moving off the fence

For the last couple of years, people have realized the value in terms of how it can help consolidate servers or how it can help do such things as backup and recovery faster. But, with the economy taking a turn for the worse, anyone who was on the fence, who wasn't sure, or who didn't have a lot of experience with it is now rushing headlong into virtualization. They realize that it touches so many areas of their budget that it just seems the logical thing to do in order to survive these economic times and come out a leaner, more efficient IT organization.

The change that we see is that previously virtualization was for very targeted use, and now it's gone to virtualization everywhere, for everything: "How much can I put in, and how fast can I put it in?"

Gardner: When you move from a tactical orientation to exploit virtualization at this more strategic level, that requires different planning and different methodologies. Tell us what that sort of shift should mean.

Meyer: To be clear, we're not just talking about virtualization of servers. We're talking about virtualizing your infrastructure -- servers, storage, network, and even clients on the desktop. People talk about going headlong into virtualization. It has the potential to change everything within IT and the way IT provides services.

The potential is that you can move your infrastructure around much faster. You can provision a new server in minutes, as opposed to a few days. You can move a virtual machine (VM) from one server to another much faster than you could before.

When you move that into a production environment, if you're talking about it from a services context, a server usually has storage attached to it. It has an IP address, and just because you can move the server around faster doesn’t mean that the IP address gets provisioned any faster or the storage gets attached any faster.

So, when you start moving headlong into virtualization in a production environment, you have to realize that now these are part of services. The business can be affected negatively, if the virtualized infrastructure is managed incompletely or managed outside the norms that you have set up for best practices.
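To make Meyer's point concrete -- that moving a VM is only one step in moving a service -- here is a minimal orchestration sketch. Every function below is a hypothetical stand-in for whatever hypervisor, network, storage, and change-management tooling an organization actually runs.

```python
# Minimal sketch: moving a VM is only useful if the rest of the service
# (IP address, storage, governance record) moves with it. All functions
# are hypothetical stand-ins for real provisioning and CMDB tooling.

def migrate_vm(vm, target_host):
    print(f"Migrating {vm} to {target_host}")               # hypervisor step: minutes

def reassign_ip(vm, network_segment):
    print(f"Re-homing {vm} on segment {network_segment}")   # network step

def attach_storage(vm, volume):
    print(f"Attaching volume {volume} to {vm}")             # storage step

def record_change(vm, details):
    print(f"Logging change for {vm}: {details}")            # governance step

def move_service(vm, target_host, network_segment, volume):
    """Treat the move as a service change, not just a hypervisor action."""
    migrate_vm(vm, target_host)
    reassign_ip(vm, network_segment)
    attach_storage(vm, volume)
    record_change(vm, {"host": target_host, "segment": network_segment, "volume": volume})

move_service("erp-app-01", "blade-07", "vlan-220", "lun-0042")
```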

Gardner: I guess it also makes sense that the traditional IT systems-management approaches also need to adjust. If you had standalone stacks, each application with its own underlying platform, physical server, and directly attached data and little bits of middleware for integration, you had a certain setup for managing that. What’s different about managing the virtualized environments, as you are describing them?

Meyer: There are a couple of challenges. First of all, one of the blessings of virtualization is its speed. That's also a curse in this case. In traditional IT environments, you set up things like a change advisory board. If you made a change to a server -- if you moved it, moved it to a new network segment, or changed its storage -- you put that change through the board. There were procedures and processes that people followed, and approvals that had to be obtained.

In virtualization, because it’s so easy to move things around and it can be done so quickly, the tendency is for people to say, "Okay, I'm going to ignore that best practice, that governance, and I am going to just do what I do best, which is move the server around quickly and move the storage around." That’s starting to cause all sorts of IT issues.

The other issue is not just the mobility of the infrastructure, but also the visibility of that infrastructure. A lot of the tools that many people have in place today can manage either physical or virtual environments, but not both. What you're heading for when that’s the case is setting up dual management structures. That’s never good for IT. You're just heading for service outages and disruptions when you go in that direction.
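A minimal sketch of the visibility problem Meyer raises: keep one inventory across physical and virtual systems and flag machines that never passed through change governance. The inventories and record sets below are hypothetical examples, not the output of any particular tool.

```python
# Sketch: a single inventory check across physical and virtual hosts,
# flagging virtual machines that were created or moved outside change
# management. The data here is hypothetical.

physical_hosts = {"blade-01", "blade-02", "blade-03"}
virtual_machines = {"web-01", "web-02", "db-01", "test-scratch-99"}
change_records = {"blade-01", "blade-02", "blade-03", "web-01", "web-02", "db-01"}

managed_estate = physical_hosts | virtual_machines   # one view, not two
unreviewed = managed_estate - change_records          # outside governance

print(f"Total managed systems: {len(managed_estate)}")
print(f"Outside change governance: {sorted(unreviewed)}")
```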

Gardner: It sounds like some safeguards are required for managing and allowing automation to do what it does well, but without it spinning out of control and people firing off instances of applications and getting into some significant waste or under-utilization, when in fact that’s what you are trying to avoid.

Shifting the cost

Meyer: Certainly. A lot of what we're seeing is the initial gains of virtualization. People came in and they saw these initial gains in server consolidation. They went from, let’s say, 12 physical boxes down to one physical box with 12 virtual servers. The initial gains get wiped out after a while, and people push the cost from hardware to management, because it becomes harder to manage these dual infrastructures.

Typically, big IT projects get a lot of visibility, and the initial virtualization projects probably get handled with proper procedures. It's as you come back to day-to-day operations of the virtualized environment that you start to lose the headway you gained originally.

That might be from non-optimized infrastructure that is not made to move as fast or to be instrumented as fast as virtualization allows it to be. It could be from management tools that don’t support virtual and physical environments, as we mentioned before. It can even be governance. It can be the will of the IT organization to make sure that they adopt standards that they have in place in this new world of moving and changing environments.
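A back-of-the-envelope sketch of that erosion, using purely hypothetical figures: the hardware savings from consolidation set against the added cost of managing a dual physical-plus-virtual estate.

```python
# Back-of-the-envelope sketch of the erosion Meyer describes: hardware
# savings from consolidation versus the added cost of managing a dual
# (physical plus virtual) estate. All figures are hypothetical.

servers_before = 12
servers_after = 1
cost_per_server = 6_000          # annual hardware, power, and cooling per box

hardware_savings = (servers_before - servers_after) * cost_per_server

extra_mgmt_cost_per_vm = 450     # added admin effort per VM in a dual-tool shop
vms = 12

net_gain = hardware_savings - extra_mgmt_cost_per_vm * vms
print(f"Hardware savings:      ${hardware_savings:,}")
print(f"Added management cost: ${extra_mgmt_cost_per_vm * vms:,}")
print(f"Net annual gain:       ${net_gain:,}")
```

As the management overhead per VM grows, the net gain shrinks -- which is exactly the headway being lost.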

Gardner: For a lot of organizations, with many IT aspects or approaches these days, security and compliance need to be brought into the picture. What does this flexible virtualization capability mean, if you're in a business that has strict compliance and security oversights?

Meyer: Again, it produces its own set of challenges, for reasons similar to what we talked about before. Compliance has many different facets. In a physical environment, if you have a service infrastructure that's in compliance today, it might take days to move it around and change its components. People are likely to have much more visibility, because that window of change takes a lot longer.

With virtualization, because of the speed, the mobility, and the ease of moving things around, things can come out of compliance faster. They could be out of regulatory compliance. They could be out of license compliance, because it’s much easier to spin up new instances of virtual machines and much harder to track them.

So, the same speed, mobility, and ease of instrumentation that are a blessing can hurt you on the compliance and security side as well. It's harder to keep up with patches. A lot of people deploy virtual machines from images. They'll create a virtual machine image, and once that image is created, it becomes static. You deploy it on one VM, then another, then another. Over time, patches come out, and those patches might not be applied to that image. People are starting to see problems there as well.
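A small sketch of that image-drift problem -- VMs cloned from a static golden image falling behind the current patch level. The names and patch levels below are hypothetical.

```python
# Sketch of the image-drift problem: VMs cloned from a static golden image
# fall behind on patches as new updates ship. All data is hypothetical.

golden_image_patch_level = "2009-03"     # when the template was built
current_patch_level = "2009-06"          # latest patches actually released

deployed_vms = {
    "web-01": "2009-03",
    "web-02": "2009-03",
    "db-01": "2009-05",                  # patched manually after deployment
}

stale = [vm for vm, level in deployed_vms.items() if level < current_patch_level]
print(f"VMs behind on patches: {stale}")
print("Rebuild the golden image or patch in place before cloning again."
      if stale else "All deployed VMs are current.")
```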

Gardner: Just to throw another log on the fire of why this is a complex undertaking, we're probably going to be dealing with hybrid environments, where we have multiple technologies, and multiple types of hypervisors. As you pointed out, the use of virtualization is creeping up beyond servers, through infrastructure storage, and so forth. What’s the hurdle, when it comes to having these mixed and hybrid environments?

Mixed environments are the future

Meyer: That’s a reality that we are going to be dealing with from here on out. Everybody will have a mix of virtual and physical environments. That’s not a technology fad. That’s just a fact. There will be services -- cloud computing, for example -- that will extend that model.

The reality is that the world we live in is both physical and virtual when it comes to infrastructure. Once you start looking at it from that perspective, you have to ask, "Do I have the right solutions in place from an infrastructure perspective, from a management perspective, and from a process perspective to accommodate both environments?"

The danger is having parallel management structures within IT. It does no one any good. If you look at it as a means to an end, which virtualization is, the end of all this is more agile and cost-effective services and more agile and cost-effective use of infrastructure.

Just putting a hypervisor on a machine doesn’t necessarily get you virtualization returns. It allows you to virtualize, but it has to be put on the construct of what you're trying to do. You're trying to provide IT-enabled services for the business at better economies of scale, better agility, and low risk, and that’s the construct that we have to look at.

Gardner: So, if we have a strategic requirement set to prevent some of these blind alleys and pitfalls, then we need to have a strategic process and management overview. This is something that cuts across hardware, software, management, professional conduct and culture, and organization. How do you get started? How do you get to the right level of doing this with that sort of completeness in mind?

Meyer: That’s the problem in a nutshell right there. The way virtualization tends to come in is unique, because it's a revolutionary technology that has the potential to change everything. But, because of the way it comes in, people tend to look at it from a bottom-up perspective. They tend to look at it from, "I have this hypervisor. This hypervisor enables me to do virtual machines. I will manage the hypervisor and the virtual machines, differently than other technologies."

Service-oriented architecture (SOA) and Web services, by contrast, can't simply creep into an IT environment. They have to come from a top-down perspective -- somebody has to mandate that the architecture be implemented. So, there's more of a strategy involved.

When we look back at virtualization, the technology is no different than other technologies in the sense that it has to be managed from a strategic perspective. You have to take that top-down look and say, "What does this do for me and for the business?"

At HP, this is where organizations come to us and say, "We have virtualization in our test and development environment, and we're looking to move it into production. What's the best way to do that?" We come in, assess what they're looking to do, and help them roll that up into the bigger picture: what are they trying to get out of this today, and what do they want to get out of it a year from now?

We map out what technologies are in place, how to mix that, how to take the hypervisor environment and make that part of the overall operational management structure, before they move that into the operational environment.

If somebody's already using it and has a number of applications or services they're ready to virtualize, they're already experiencing some of the pain. So, that’s a little bit more prescriptive. Somebody will come in and say, "I'm experiencing this. I'm seeing my management cost rise." Or, "When a service goes down, it’s harder for me to pinpoint where it is, because my infrastructure is more complex."

This is where typically we'll have a spot engagement that then leads to a broader conversation to say, "Let’s fix your pain today, but let’s look at it in the broader context of a service." We have a set of services to do that.

There's a third alternative as well. Sometimes people come to us. They realize the value of virtualization, but they also realize that they don't have the expertise in house, or they don't have the time to develop that longer-term strategy for themselves. They can also come to HP to outsource that virtual and physical environment.

Gardner: It sounds as if the strategic approach to virtualization is similar to what we've encountered in the past, when we've adopted new technologies. We have had to take the same approach of let’s not go just bottom up. Let’s look strategically. Can you offer some examples of how this compares to earlier IT initiatives and how taking that solution approach turned out to be the best cost-benefit approach?

Potential to change everything

Meyer: As an example from an earlier technology perhaps, I always look at client-server computing. When that came out, it had the potential to change everything. If you look at computing today, client-server really did change the way that applications and services were provided.

If you look at the nature of that technology, it required rewriting code and rethinking complete architectures. The nature of the technology lent itself to that strategic view. It was deployed and, over time, a lot of the applications that people were using moved to client-server and tiered architectures. But, that was because the technology lent itself to that.

Virtualization, in that sense, is not very different. It is a game changer from a top-down perspective. The value you get when you take that top-down perspective is that you have the time to understand that, for example, "I have a set of management tools in place that allow me to monitor my servers, my storage, my network from a service perspective, and they will let me know whether my end users are getting the transaction rates they need on their Web services."

Gardner: Let me just explore that a little bit more. Back when client-server arrived, it wasn’t simply a matter of installing the application on the server and then installing the client on the PCs. Suddenly, there were impacts on the network. Then, there were impacts on the size of the server and capabilities in maintaining simultaneous connections, which required a different approach to the platform.

Then, of course, there was a need for extending this out to branch offices and for wider area networks to be involved. That had a whole other set of issues about performance across the wide area network, the speed of the network, and so on -- a ripple effect. Is that what we're seeing as well with virtualization?

Meyer: We do, absolutely. With the bottom-up approach, people look at it from a hypervisor and a server perspective. But, it really does touch everything that you do, and that everything is not just from a hardware perspective. It not only touches the server itself or the links between the server, the storage, and the network, but it also touches the management infrastructure and the client infrastructure.

So, even though it’s easier to deploy and it can seep in, it touches just about everything. That’s why we keep coming back to this notion of saying that you need to take a strategic look at it, because the more you deploy, the more it will have that ripple effect, as you call it, on all the other systems within IT, and not just a server and hypervisor.

Gardner: Tell us about HP’s history with virtualization. How long has HP been involved with it, and what’s its place and role in the market right now?

Meyer: HP has been doing virtualization for a long time. When most people think of virtualization, they tend to think of hypervisors, and they tend to think of it on x86 or Windows servers. That's really what has caused it to become so popular. But HP has had virtualization in its portfolio for quite a while, and we've been doing virtualization on networks for quite a while. So, we are not newcomers to the game.

When it comes to where we play today, there are companies that are experts on the x86 world, and they're providing hypervisors. VMware, Citrix, and Microsoft are really good at what they do. HP doesn’t intend to do that.

Well-managed infrastructure

What we intend to do is take that hypervisor and make sure that it's part of a well-managed infrastructure, a well-managed service, and well-managed desktops -- bringing virtualization into the IT ecosystem and making it part of your day-to-day management fabric.

That’s what we do with hardware that’s optimized out of the box for virtualization. You can wire your hardware once and, as you move your virtual components around, the hardware can take care of the rewiring, the IP network, the IP address, and the storage.

We handle that with IT operations and management offerings that have one solution to heterogeneously manage virtual and physical environments. We do that with client architecture, so that you can extend virtualization onto the desktops, secure the desktops, and take a lot of the cost out of managing them. If you look at what HP is about, it’s taking that hypervisor and actually delivering business value out of a virtual environment.

Gardner: Of course, HP is also in the server hardware business. Does that provide you a benefit in virtualization? Some conventional thinking might be, well gee, why would the hardware people want to increase utilization? Aren’t they in the business of selling more standalone servers?

Meyer: Certainly, we're in the business of selling hardware as well, but the benefit comes in many different areas. Actually, more people today are running virtualization on HP servers than any other platform out there. So, virtualization is an area that allows us to be more creative and more innovative in a server environment.

One of the hottest areas right now in server growth is in blade servers, where you have a bladed enclosure that’s made specifically for virtualization. It allows you to lower the cost of power and cooling, lower the floor space of the data center, and move your virtual components around much faster. Where we might see utilization rates decline in some areas, we're certainly seeing the uptake in others. So, it’s certainly an opportunity for us.

Gardner: So, helping your clients cut the total cost of computing is what’s going to keep you in the hardware business in the long run?

Meyer: That’s exactly right. If you look at the overall benefits, the immediate allure of virtualization is all about the cost and the agility of the service. If you look at it from the bigger picture, if you get virtualization right, and you get it right from a strategic perspective, that’s when you start to feel those gains that we were talking about.

Data centers are very expensive. There's floor space in there. Power and cooling are very expensive. People are talking about that. If we help them get that right and knock the cost out of the infrastructure, the management, the client architectures, and even insourcing or outsourcing, that’s beneficial to everyone.

What are the payoffs?

Gardner: We've talked about how virtualization is a big deal in the market and how it's being driven by economic factors. We've looked at how a tactical, knee-jerk approach can lead to the opposite effect of higher expense and more complexity. We've recognized that taking an experienced, methodical, strategic approach makes a lot of sense.

Now, what is it that we can get, if we do this right? What are the payoffs? Do you have examples of companies you work with, or perhaps within HP itself? I know you guys have done an awful lot in the past several years to refine and improve your IT spend and efficiency. What are the payoffs if you do this right?

Meyer: There are a number of areas. You can look at it in terms of power and cooling. So right off the bat, you can save 50 percent of your power and cooling, if you get this right and get an infrastructure that works together.

From a client-computing perspective, you can save 30 percent off the cost of client computing, off the management of your client endpoints, if you virtualize the infrastructure.

If you look at outsourcing the infrastructure, the returns are manifold there, because you're not just taking out the cost of running it. You're also leveraging the combined knowledge of thousands and thousands of people who understand how to run the infrastructure from their experience across multiple outsourcing engagements.

So, we see particular gains in power and cooling, as I mentioned before, and the cost of administration. We'll see significant gains in server-admin ratios. We'll see a threefold increase in the number of servers that people can manage.

If you look across the specific examples, they really do touch a lot of the core areas that people are looking at today -- power and cooling, the cost of maintaining and instrumenting that infrastructure, and the cost of maintaining desktops.

Gardner: Doesn’t this help too, if you have multiple data centers and you're trying to whittle that down to a more efficient, smaller number? Does virtualization have a role in that?

The next generation

Meyer: Absolutely. Actually, throughout the data center, virtualization is one of those key technologies that help you get to that next generation of the consolidated data center. If you just look at it from a consolidation standpoint, a couple of years ago, people were happy to be consolidating five servers into one or six servers into one. When you get this right -- do it on the right hardware with the right services setup -- 32 to 1 is not uncommon, a 32-to-1 consolidation rate.

If you think about what that equates to, that’s 32 fewer physical servers, less floor space, less power and cooling. So, when you get it right, you go from, "Yes, I can consolidate and I can consolidate it five to one, six to one or 12 to one" to "I'm consolidating, and I am really having a big impact on the business, because I'm consolidating at 24 to 1 or 32 to 1 ratios." That’s really where the payoff starts coming in.
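As a worked example of that arithmetic, with hypothetical per-server power and rack-space figures, consolidating 128 workloads at 32 to 1 leaves only four physical hosts:

```python
# Worked example of the consolidation arithmetic above, with hypothetical
# per-server figures. At 32:1, 128 workloads need only 4 physical hosts.

workloads = 128
consolidation_ratio = 32
hosts_needed = -(-workloads // consolidation_ratio)   # ceiling division -> 4

servers_removed = workloads - hosts_needed
power_per_server_watts = 400
rack_units_per_server = 2

print(f"Physical hosts needed: {hosts_needed}")
print(f"Servers retired:       {servers_removed}")
print(f"Power avoided:         {servers_removed * power_per_server_watts / 1000:.1f} kW")
print(f"Rack space freed:      {servers_removed * rack_units_per_server} U")
```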

Gardner: I suppose that while you are consolidating, you might as well look at what applications on which platforms are going to be sunset. So, there's a modernization impact. Virtualization helps you move certain apps out to pasture, maybe reusing the logic and the data in the future. What’s the modernization impact that virtualization can provide?

Meyer: Virtualization is absolutely an enabler of that in a number of different ways. Sometimes, when people are modernizing apps, they go to our outsourcing business and say, "I'm modernizing an application and I need some compute capacity. Do you have it?" They can tap into our compute capacity in a virtual way to provide a service, while they're moving, updating, or modernizing an architecture, and the end user doesn’t notice the difference. There's a continuity aspect there, as they provide the application.

There are also the backup and recovery aspects of it. There are a lot of safeguards that come in while you are modernizing applications. In this case, virtualization is an enabler for that. It allows that move to happen. Then, as that application moves onto more up-to-date or more modern architecture, it allows you to quickly scale up or scale down the capacity of that application. Again, the end user experience isn't diminished.

Gardner: So, these days when we are not just dealing with the dollars-and-cents impacts of the economy, we are also looking at dynamic business environments, where there are mergers, acquisitions, bankruptcies, and certain departments being sloughed off, sold, or liquidated. It sounds like the strategic approach to virtualization has a business outcome in that environment too.

Meyer: That's really where the flip side of virtualization comes in -- the automation side. Virtualization allows you to quickly spin up capacity and do a series of other things, but automation allows you to do that at scale.

If you have a business that needs to change seasonally, daily, weekly, or at certain times, you need to make much more effective use of that compute capacity. We talk a lot about cost, but it's automation that makes virtualization cost effective and agile at the same time. It allows you to take a prescribed set of tasks related to virtualization -- whether that's moving a workload, updating a new service, or updating an entire stack -- and make that happen much faster and at much lower cost as well.
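A minimal sketch of what such a prescribed, repeatable task set might look like; every function below is a hypothetical stand-in for real orchestration tooling, not a specific HP product.

```python
# Sketch of automation layered on virtualization: a prescribed, repeatable
# runbook that scales capacity on a schedule instead of by hand. Every
# function is a hypothetical stand-in for real orchestration tooling.

def clone_vm(template, name):
    print(f"Cloning {name} from template {template}")

def register_with_load_balancer(name):
    print(f"Adding {name} to the load-balancer pool")

def scale_out(template, count, prefix):
    """Spin up 'count' identical VMs and put them into service."""
    for i in range(count):
        name = f"{prefix}-{i:02d}"
        clone_vm(template, name)
        register_with_load_balancer(name)

# Seasonal peak: add six web front ends before the busy period.
scale_out(template="web-golden-2009-06", count=6, prefix="web-peak")
```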

Gardner: One last area, Bob. I want to get into the benefits of managed virtualization as insurance for the future. You mentioned cloud computing a little earlier. If you do this properly, you start moving toward what we call on-premises or private clouds. You create a fabric of storage, or a fabric of application support, or a fabric of platform infrastructure support. That’s where we get into some of those even larger economic benefits.

This is a vision for many people now, but doing virtualization right seems to me like a precursor to being able to move toward that. You might even be able to start employing SOA more liberally, and then take advantage of external clouds, and there is a whole vision around that. Am I correct in assuming that virtualization is an initial pillar to manage, before you're able to start realizing any of that vision?

Meyer: Certainly. The focus right now is, "How does it save me money?" But, the longer-term benefit, the added benefit, is that, at some point the economy will turn better, as it always does. That will allow you to expand your services and really look at some of the newer ways to offer services. We mentioned cloud computing before. It will be about coming out of this downturn more agile, more adaptable, and more optimized.

No matter where your services are going -- whether you're going to look at cloud computing or enacting SOA now or in the near future -- it has that longer term benefit of saying, "It helps me now, but it really sets me up for success later."

We fundamentally believe, and CIOs have told us a number of times that virtualization will set them up for long-term success. They believe it’s one of those fundamental technologies that will separate their company as winners going into any economic upturn.

Gardner: So, making virtualization a core competency, sooner rather than later, puts you at an advantage across a number of levels, but also over a longer period of time?

Meyer: Yes. Right now, everybody is reacting to an economic climate. The CIOs who are acting with foresight, looking ahead and asking, "Where will this take me?" are the ones who are going to be successful, as opposed to the people who are just reacting to the current environment and looking to cut and slash. Virtualization has a couple of benefits: it allows you to save and optimize now, but it also sets you up to rebound whenever the economic recovery comes.

Gardner: Well, great. We've been talking with Bob Meyer, the worldwide virtualization lead in HP’s Technology Solutions Group. We've been examining the effects and impacts of virtualization adoption and how to produce the best businesses and financial outcomes from your virtualization initiatives. I want to thank you, Bob, for joining us. It's been a very interesting discussion.

Meyer: Thank you for the opportunity.

****Access More HP Resources on Virtualization****

Gardner: We also want to thank our sponsor, Hewlett-Packard, for supporting this series of podcasts. This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to BriefingsDirect. Thanks and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Transcript of BriefingsDirect podcast on virtualization strategies and best practices with Bob Meyer, HP's worldwide virtualization lead. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.