Wednesday, September 22, 2010

Data Center Transformation Includes More Than New Systems: There's Also Secure Data Removal, Recycling, and Server Disposal

Transcript of a sponsored podcast discussion on how proper and secure handling of legacy equipment and data is an essential part of data center modernization planning and cost reduction.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today we present a sponsored podcast discussion on an often-overlooked aspect of data center transformation (DCT), and that is what to do with the older assets inside of data centers as newer systems come online.

DCT is a journey, and modernizing is an essential part of that process. But sunsetting older systems must be done with data protection in mind, and even with an eye toward monetizing those older systems, or at least recycling them properly.

Properly disposing of data and other IT assets is an often overlooked and under-appreciated element of the total data center transformation journey, but it's one that can cause great disruption and increased costs if not managed well.

Compliance and recycling issues, as well as data security concerns and proper software disposition should therefore be top of mind. We'll take a look at how HP manages productive transitions of data center assets -- from security and environmental impact, to recycling and resale, and even to rental of new and older systems during a DCT process.

Indeed, many IT organizations are largely unaware of the security and privacy risks around the systems they need to find a new home for, and those systems can often end up in the wrong hands. So thinking through the retirement of older assets should be considered early in the DCT process.

With us now to explain how to best take care of older systems, reducing risk while also providing a financial return, are Helen Tang, Worldwide Data Center Transformation Lead for HP Enterprise Business, and Jim O'Grady, Director of Global Life Cycle Asset Management Services with HP Financial Services. Welcome to the show.

Helen, let me start with you. As I mentioned, we've got a whole lifecycle to think about with DCT, but what’s driving the market right now? Where are the enterprises involved with DCT going, and how can we start thinking about the total picture in terms of how to do this well?

A total solution

Helen Tang: That's a great question, Dana. HP really started marketing DCT as a total solution that spans hardware, software, and services throughout the entire life cycle in 2008, when we launched the solution. Since then, we've had about 1,000 customers take this journey, with very successful results.

I would say 2010 is a very interesting year for DCT. Number one, because of the economic cycle. We are -- fingers crossed -- slowly coming out of this recession, and we're definitely seeing that IT organizations have a little bit more money to spend.

This time around, they don't want to repeat past mistakes, in terms of buying just piles of disconnected stuff. Instead, they want a bigger strategy that modernizes their assets and ties them into a strategic growth-enablement asset for the entire business.

So, they've turned to vendors like HP and said, "What do we do differently this time and how do we go about it in a much more strategic fashion?" To that end, we brought together the entire HP portfolio, combining hardware, software, and services, to deliver something that helps customers in the most successful fashion.

When you look at the entire life cycle, some of them are starting with consolidation. Some of them already did a lot of virtualization, and they want to move to more holistic automation. So, throughout that entire process, as you mentioned earlier, there's a lot to think about, when you look at hardware and software assets that are probably aged and won’t really meet today’s demands for supporting modern applications.

How do you dispose of those assets? Most people don't really think about it, nor do they understand all of the risks involved. That's why we brought Jim here to talk to you more about this.

Gardner: Helen, as I understand it, it's often a 10- or 20-year cycle when you completely redo or transform a data center. So there probably aren't people around with the skills who remember the last time their organization disposed of a data center. This is a fairly new activity, something you might need to look to outside help for.

Tang: Absolutely. Of course, there are different pieces to the DCT picture. When you say 10 or 20 years, that's generally referring to the lifecycle of facilities. Within that, hardware usually lasts between 5 and 8 years. But the problem is that there are new things coming about today that everybody is really excited about, such as virtualization and private cloud. Even experienced IT professionals, who have been in the business for maybe 10 or 20 years, don't quite have the skills and understanding to grasp all this.

At HP, we have a unique vantage point as the number one technology company. We're about 300,000 strong in terms of headcount, and about half of those are consultants who have expertise across all these data center technologies. As I mentioned earlier, we've helped over 1,000 customers. So, we have a lot of practical hands-on experience that can be leveraged to help our customers.

Gardner: Jim O'Grady, it sounds as if the risk here is something that a lot of these organizations don’t appreciate. What are the levels of risk that are typical, if you wait until the last minute and don’t think through the proper disposal of your existing or older assets?

Brand at stake

Jim O'Grady: We're not trying to overstate that risk too much. But the risk may be that you are simply putting your company’s brand at stake, through improper environmental recycling compliance, or exposing your clients, customers, or patients’ data to a security breach. This is definitely one of those areas you don’t want to read about in a newspaper to figure out what went wrong.

We see that a lot of companies try to manage this themselves, and they don’t have the internal expertise to do it. Often, it’s done in a very disconnected way in the company. Because it’s disconnected and done in many different ways, it leads to more risks than people think. If you know how to do it correctly in one part of your enterprise and you are doing it differently in another part of your enterprise, you have a discrepancy that’s hard to explain, because you're not learning from yourself and you're not using best practices within the organization.

Also, a lot of our clients choose to outsource this work to a partner. They need to keep in mind that they are sharing risk with whomever they partner with. So, they have to be very cautious and be extremely picky about who they select as a partner.

You have to feel very comfortable that your partner's brand is as respected as your brand, especially if you are defending what happened to your board of directors or, worse yet, you get into a legal proceeding. If you don't kick the tires with your partner, and it turns out that the partner consists of a man, a dog, and a pickup truck, you may have a hard time defending why you selected that partner.

This may sound a bit self-serving, but I always suggest that enterprises resist smaller local vendors. Use the fewest number of vendors that can manage your business scale and geographic coverage requirements. This is an industry where low barriers to entry just don't match up to the high levels of customer accountability required to properly manage your end-of-use asset disposition.

Gardner: So while we can have very significant risk, on the other hand we also know that there are proper, well-established, well-understood ways of doing this correctly. Even though the risk might not be understood, doing this right is possible.

Tell me some of the basic steps of doing this properly, where you don’t run into these risks, where you have thought it through or found those who have established best practices well under control.

O'Grady: Among the best practices we recommend to our customers is to have a well-established plan and budget up front, one that's sponsored by a corporate officer, to handle all of the end-of-use assets well before the end-of-use period comes.

We also suggest that they have a well thought-out plan for destroying or clearing data prior to the asset decommissioning and/or prior to the asset leaving the physical premises of the site. Use your outsource partner, if you have one, as a final validation for data security. So, do it on-site, as well as off-site.
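To make that wipe-then-validate idea concrete, here is a minimal Python sketch of a single-pass overwrite with a read-back check, assuming an ordinary file or disk image as the target. It is illustrative only; real disposition programs use certified sanitization tools, follow standards such as NIST SP 800-88, and produce signed audit records.

```python
import os

CHUNK = 1024 * 1024  # work through the target 1 MB at a time

def wipe_and_verify(path: str) -> bool:
    """Single-pass zero overwrite followed by a read-back check.
    Illustrative sketch only -- not a substitute for certified
    sanitization tooling or a multi-pass, audited process."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:              # overwrite pass (on-site)
        remaining = size
        while remaining > 0:
            n = min(CHUNK, remaining)
            f.write(b"\x00" * n)
            remaining -= n
        f.flush()
        os.fsync(f.fileno())                  # force the bytes to media
    with open(path, "rb") as f:               # validation pass (off-site)
        while True:
            block = f.read(CHUNK)
            if not block:
                return True                   # every byte read back as zero
            if block.strip(b"\x00"):
                return False                  # residual data found
```

The two-step structure mirrors the advice above: the wipe happens before the asset leaves the premises, and the partner repeats the validation once the asset arrives at their facility.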

Vendor qualification

Also, develop a very strong vendor audit qualification and ongoing inspection process. Visit that vendor prior to the selection and know where your waste stream is going to end up. Whatever they do with the waste stream, it’s your waste stream. You are a part of the chain of custody, so you are responsible for what happens to that waste stream, no matter what that vendor does with it.

So, you need to create rigorous documented end-to-end controls and audit processes to provide audit trails for any future legal issues. And finally, select a partner with a brand name and reputation for trust and integrity. Essentially, share the risk.

Gardner: What about environmental issues? These can vary pretty widely. Maybe you don’t know about these risks, but as you say, you're going to be responsible for them. Tell me about the environmental and even recycling aspects of this equation.

O'Grady: You're right. That's one of the most common areas where our clients are caught unaware of the complexity of the data security and e-waste legislation requirements that are out there, and especially the pace at which they change.

Legislation resides at the state, local, national, and regional levels, and they all differ. There's some conflict, but some are in line with each other. So it's very difficult to understand what your legislative requirements are and how to comply. Your best bet is to deal with the highest standard and pick someone who knows and has experience in meeting these legislative requirements.

Gardner: Now, part of the process that you're addressing is not just what to do with these assets, but how to make the transition seamless. That is to say, your service level agreements (SLAs) are met and that the internal users or external customers for your organization don't know that there is this transition going on.

So, in addition to what we've talked about in terms of some of these assets and environmental and security issues, how can you also guarantee that all the lights stay on and the trains keep running on time?

O'Grady: HP Financial Services (HPFS) has a lot of asset management capabilities to bring to bear, to help customers through their DCT. A lot of it is financial-based capability and services and a lot of it is non-financial based.

Let me explain. From a financial asset ownership model, HPFS has the ability to come in and work with a client, understand what their asset management strategy is, and help them to personalize the financial asset ownership model that makes sense for them.

For example, perhaps a client has a need to monetize some of their existing IT asset base. Let's just say there are existing data center assets somewhere. We have the asset-management expertise to structure a buyout for those targeted data-center assets and lease those same assets back to the client with a range of flexible terms, potentially unlocking some hidden capital for them to exploit elsewhere, perhaps funds for additional data center capacity.
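As a rough illustration of the leaseback arithmetic, the sketch below applies the standard annuity payment formula to a hypothetical buyout; the dollar amounts, rate, and term are invented for the example and are not HP figures.

```python
def monthly_lease_payment(buyout: float, annual_rate: float, months: int) -> float:
    """Periodic payment that amortizes the buyout amount at the
    given annual rate over the lease term (standard annuity formula)."""
    r = annual_rate / 12.0
    return buyout * r / (1.0 - (1.0 + r) ** -months)

# Hypothetical example: $2M of installed gear bought out and leased
# back over 36 months at 6 percent. The client frees $2M of capital
# today in exchange for a predictable monthly payment.
payment = monthly_lease_payment(2_000_000, 0.06, 36)
print(f"Monthly payment: ${payment:,.0f}")   # roughly $60,840 per month
```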

Managing assets

That's just one of many examples of how we help our clients manage their assets in a more financially leveraged way. In addition to that, customers often have a requirement to support legacy gear during the DCT journey. HPFS can help customers with pre-owned legacy HP product.

We're able to provide highly customized, authentic pre-owned legacy HP product solutions, sometimes going back 20 years or more. We're seeing a big uptick in the need to support legacy product, especially in DCT. The need for temporary equipment, just to scale out legacy data center hardware platform capacity that's legacy locked, is something we see increasingly from our clients.

Clients also need to ensure their product is legally licensed and that they don't encounter intellectual property rights infringements. Lastly, they want to trust that the vendor has the right technical skills to deal with the legacy configuration and compatibility issues.

Our short-term rental program covers new or legacy products. Again, many customers need access to temporary product to prove out some concepts, or just to test some software application on compatibility issues. Or, if you're in the midst of a transformation, you may need access to temporary swing gear to enable the move.

Finally, customers should consider how they retire and recover value for their entire end-of-use IT equipment, whether it's a PDA or supercomputer, HP or non-HP product.

In summary, most data center transformations and consolidations typically end with a lot of excess or end-of-use product. We can help educate customers on the hidden risks of dispositioning that end-of-use equipment into the secondary market. This is a strength of HPFS.

Gardner: Jim, you mentioned global a few times. Help me understand a little more about the global benefits that HP is bringing to the table here in terms of managing multiple markets, multiple regulation scenarios, and some of these other secondary market issues.

O'Grady: From what I see in the market, there is a tremendous amount of global complexity that customers are trying to overcome, especially when they try to do data center consolidation and transformation throughout their enterprise, across different geographies and country borders.

You're talking about a variety of regulatory practices and directives, especially in the EU, that are emerging and restrict how you move used and non-working product across borders. There are a variety of different data-security practices and environmental waste laws that you need to be aware of.

Gardner: Let’s look at some examples. It’s one thing to understand this at a high level, but seeing it in practice is a lot more edifying and educational. So, I guess HP is a good place to start.

What did you learn when you had to take many different data centers from a lot of merger and acquisition activity in many different markets? Tell me a little about the story of HP’s data centers, when it comes to properly sun-setting and managing all of those assets.

Creating three data centers

O'Grady: First, let me explain what we were up against in terms of the complexities of consolidating HP's data centers. There were about 85 worldwide data centers located in 29 different countries, and we were consolidating down to six data centers within three U.S. geographical zones. Essentially, we were creating three prime data centers and three backup data centers.

HP decommissioned over 21,000 assets over a one-year period. In addition, there were another 40,000 data center related assets. The majority of the data center servers were being ported to new technologies. So, this left a tremendous amount of IT product to be decommissioned, retired, and recovered for value.

HPFS was asked to come in and take control of the entire global reverse logistics, the off-site data security plan, as well as the asset re-marketing and recycling process.

The first thing that we had to do was establish a global program office with dedicated support. This is almost required for every larger global asset recovery project in the market, so there can be a single coordination point and focus to make all the trains run on time, so to speak, and to manage and reconcile a single financial and asset reporting process. This is critical.

HP wanted to have one place, one report, with reconciled asset-level detail that demonstrated that every single asset that came back was audited, wiped of data, and recycled in an environmentally compliant way.
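A sketch of what that reconciliation logic might look like is shown below. The field names and statuses are hypothetical, but the idea is the same: every serial number on the decommission manifest has to come back audited, wiped, and with a final disposition, and anything that doesn't is an exception to chase.

```python
from dataclasses import dataclass

@dataclass
class AssetRecord:
    serial: str
    audited: bool = False
    data_wiped: bool = False
    disposition: str = ""    # e.g. "remarketed" or "recycled"; empty if unresolved

def reconcile(manifest: set[str], records: dict[str, AssetRecord]) -> list[str]:
    """Return chain-of-custody exceptions: assets that never came back,
    or came back without being audited, wiped, and dispositioned."""
    exceptions = []
    for serial in sorted(manifest):
        rec = records.get(serial)
        if rec is None:
            exceptions.append(f"{serial}: never received")
        elif not (rec.audited and rec.data_wiped and rec.disposition):
            exceptions.append(f"{serial}: incomplete chain of custody")
    return exceptions
```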

We also needed to set up a global reverse logistics strategy, which proved to be extremely challenging as well. HP had SWAT teams deployed for time-critical de-installs in some of the smaller remote locations. They needed us to position secured trucks to be there within a one-hour window where there was no local storage available to hold the data center equipment as it came out of the data center.

We also had to act as a backup for data eradication. HP’s policy was to wipe the data on-site, but they realized that that’s not always a perfect process, and so they wanted us to again wipe all of the equipment as it came back into our process.

Last, but not least, recovering value for the older end-of-use assets was one of our highlighted strengths. We found 90 percent of the decommissioned data center products to be re-marketable, and that's not unusual in the market. We were able to hand back a net recovery, versus a bill for our services. They were quite pleased with that result, and it must have worked out more than okay, because I'm still around to describe what we did.

Gardner: I can see now why this is under Financial Services. This really is about auditing, making sure that every asset is managed uniquely and fully, and that nothing falls through the cracks. But, as you point out, there is this whole resale, with the regulation and recycling issues to be managed as well.

Tell me a little more about this. Obviously for HP, you were doing it for your own good. Are there some other examples that we can look to where that bill has been much lower because of the full exercise of your financial secondary market activities?

A key strength

O'Grady: Sure. That’s where we think our strength is. If you look at a leasing organization, when you lease a product, it's going to come back. A key strength in terms of managing your residual is to recover the value for the product as it comes back, and we do that on a worldwide basis.

We have the ability to reach emerging markets or find the market of highest recovery to be able to recover the value for that product. As we work with clients and they give us their equipment to remarket on their behalf, we bring it into the same process.

When you think about it, an asset recovery program is really the same thing as a lease return. It's really a lot of reverse logistics -- bring it into a technical center, where it's audited, the data is wiped, the product is tested, there’s some level of refurbishment done, especially if we can enhance the market value. Then, we bring it into our global markets to recover value for that product.

Gardner: While the IT folks don’t necessarily always want to think about the financial implications, they're certainly more likely to get the okay to move ahead with their technical transformations, to get the new tools and systems that they want, if they can better appreciate that there is a way to recover costs safely from existing systems. I think it's probably an important lesson for IT people that they might not have thought of.

Helen, do you have any thoughts about that, about the culture between a financial side and the technical side in DCT?

Tang: We're seeing some interesting shifts right now. So, you're right. Classically, there was this conception that the CFO’s office didn’t understand the needs of IT, and IT didn’t understand how best to save money, and also optimize for those tricky CAPEX and OPEX issues.

But, in the last five years, we're seeing a shift, and these two organizations are working harder together. A lot of it is because they have to do it, with the recession and a lot of the limitations. But we're starting to see this IT hybrid role called the IT controller, which typically reports to the CIO but also has a dotted line into the CFO, so that the two organizations can work together from the very beginning of a data center project to understand how best to optimize both the technology and the financial aspects.

Gardner: Thanks to Jim, we've heard quite a bit about HP's story. Are there some other users out there who have gone through this, and that we can look to for some more understanding of how this should apply to future DCT? Back to you, Jim?

O'Grady: Sure. There was a case that involved an internationally known food services company that had a data center consolidation and move requirement. This company had 12,000 locations in 35 countries. Their basic need was to economically migrate to a new facility, where there was room for expansion and consolidation.

Their existing server environment was only a couple of years old, and it wasn't economical for them to replace it at this point in time. So, the customer asked us to recreate their old data center environment at that new location, with the exact legacy spectrum of the original data center.

Legacy solution

We were more than happy to provide them with a highly customized HP legacy server solution that was identical to their existing equipment, and we did it on a short-term rental basis. Once the new cutover data center was fully operational, we simply brought back all of the equipment located in the original data center.

We did it as a trade for the legacy servers that were rented and installed at the new data center site. Essentially, it was an asset swap. We rented some equipment to them. We brought back their original data center. And, we just called it a day. So it was an extremely pleasant and easy transition for the customer to get through.

We also helped the customer manage the license transfers from their prior owned servers to the ones that we just provided to them.

Gardner: These are things that organizations on their own probably don’t have the visibility and understanding to pursue. Is it fair to say, Jim, that a lot of companies are just leaving money on the table, so to speak, if they try to do this themselves, if they look for some of those local secondary market folks, and are maybe not as creative as they could be in some of these financial approaches to the best outcome for these assets?

O'Grady: I think they are. They typically try to disconnect some of this activity, and they don't put it into a holistic view in terms of the DCT effort. Typically, what we find with companies trying to recover value for product is that they give it to their facilities guys or the local business units. These guys love to put it on eBay and try to advertise for the best price. But that's not always the best way to recover the best value for your data center equipment.

We're now seeing it migrate into the procurement arm. These guys typically put it out for bid and select the highest bid from a lot of the open-market brokers. That's a better strategy to recover value, but not the best.

Your best bet is to work with a disposition provider that has a very, very strong re-marketing reach into the global markets, and especially a strong demonstrative recovery process.

Gardner: I suppose there is a scale issue too. An organization like HP can absorb a football field full of laptops, whereas not every other entrant in the market, at least the local players involved, can absorb that sort of scale. So tell me a little bit about the size issues, both scaling up, in terms of the largest types of data centers and IT issues, but also perhaps scaling down in terms of where specialization or even highly vertical systems are involved?

O'Grady: That's a good point. Especially in the large DCTs, you're getting back a lot of enterprise equipment, and it's easy to re-market it into the secondary market and recover value. You could put it on the market tomorrow and recover very minimal value, but we have a different process.

We have skilled product traders within our product families who know how to hold product, and wait for the right time to release it into the secondary market. If you take a lot of product and sell it in one day, you increase the supply, and all of the recovery rates for the brokers drop overnight. So, you have to be pretty smart. You have to know when to release product in small lot sizes to maximize that recovery value for the client.

Gardner: Tell me how you get started on this, Jim. As we said, this is sort of a black box for a lot of people. The IT people don't understand the financial implications, and the financial folks might not understand what's involved when a data center is going to be transformed and what's coming down the avenue that they need to then disposition.

So, how do we merge these cultures? How do they get started, and who do you target this information at? Is there a Chief Disposition Officer? I tend to doubt it. Who should be involved? Who should be in charge?

C-level engagement

O'Grady: We recommend that a C-level executive be in charge, whether it's the CIO, the CFO, or the security officer. Someone at a very high level should be engaged. Engaging us is very simple. Your best bet is to contact your HP account manager. They know how to get in contact with us to bring our services to bear.

You can also go to HP.com and get to HPFS; that's really simple. From there, it's an easy process to engage us. If you're looking for pre-owned or rental equipment, we would provide a dedicated rep, or you can just access this product through most authorized HP enterprise resellers. They know how to access legacy product from us as well.

Asset recovery is a much more complex engagement. We start with a consultation with the client on a whole range of topics to educate them on what the whole used IT disposition market is all about, especially things to watch out for. There are things like comparing price versus risk, when you are selecting a vendor, and what to think about to ensure you comply with the emerging legislation and directives that you need to be aware of and deal with, especially environmental waste stream management, data security, and cross-border product movement of non-working IT products.

We also help clients understand strategies to recover the best value for decommissioned assets, as well as how to evaluate and how to put in place a good data-security plan.

We help them understand whether data security should be done on-site versus off-site, or whether it's worth the cost to do it both on-site and off-site. We also help them understand the complexities of wiping data from enterprise products, versus just a plain PC.

Most of the local vendors and providers out there are skilled in wiping data for PCs, but when you get into enterprise products, it can get really complex. You need to make sure that you understand those complexities, so you can secure the data properly.

Lastly, the one thing we help customers understand, and it's the real hidden complexity, is how to set up an effective reverse logistics strategy, especially on a global basis. How do you get the timing down for all the products coming back on a return basis?

Gardner: Helen, it sounds as if there is a whole lot going on with this cleansing and regrouping of assets in such a way that you get the most financial return. To me, this is a real educational issue for the DCT process, and it needs to be considered early on and not as an afterthought.

Tang: That’s absolutely true, which is why we reach out to our customers in various interactions to talk them through the whole process from beginning to end.

One of the great starting points we recommend is something we call the Data Center Transformation Experience Workshop, where we actually bring together your financial side, your operations people, and your CIO, so that all the key stakeholders are in the same room, and walk through these common issues that you may or may not have thought about to begin with. You can walk out of that room with consensus, with a shared vision, as well as a roadmap that's customized for your success.

Gardner: Back to you one last time, Jim. Tell us a little more about how people should envision this? If we're in the process, what’s the right frame of mind -- philosophy, if you will -- of this disposition element to any DCT?

Well-established plan

O'Grady: My advice is to have a well-established plan and budget, and to think about that way up front in the process. This is the one area that most of our clients fail to address. What happens is that, at the disposition point, they accumulate a lot of assets and they haven't budgeted for how to disposition those assets. They get caught, so to speak.

Customers should educate themselves about the market complexities involved with dispositioning their own products, from the data security standpoint as well as environmental legislation.

As you try to recover value in the secondary market, you own the result of that transaction. So, you could be putting your company brand at risk, if you're not complying with the morass of legislative directives and regulations that you find out there at a global and local level.

Gardner: We've been hearing about trying to make the most productive transitions with data center assets and transformation activity from the vantage point of security and environmental impact, recycling, and resale. I suppose the idea here now is to get your older assets out with low risk, but also get the highest financial return from them as well.

I want to thank our panelists for helping us sort this out. We have been here with Helen Tang, Worldwide Data Center Transformation Lead for HP Enterprise Business, and Jim O’Grady, Director of Global Life Cycle Management with HP Financial Services. Thanks so much, Jim.

O'Grady: Thank you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a sponsored podcast discussion on how proper and secure handling of legacy equipment and data is an essential part of data center modernization planning and cost reduction. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


Thursday, September 09, 2010

Want Client Virtualization? Time Then to Get Your Back-End Infrastructure Act Together

Transcript of a sponsored podcast discussion on present benefits and future trends for client virtualization.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Welcome to a sponsored podcast discussion on client virtualization strategies and best practices. We've heard a lot about client virtualization or virtual desktop infrastructure (VDI) over the past few years, and there are some really great technologies for delivering a PC client experience as a service.

But today’s business and economic drivers need to go beyond just good technology. There also needs to be a clear rationale for change -- both business and economic.

Second, there need to be proven methods for properly moving to client virtualization at low risk and in ways that lead to both high productivity and lower total costs over time.

Cloud computing, mobile device proliferation, and highly efficient data centers are all aligning to make it clear that deeper and more flexible client support from servers will become more the norm and less the exception over time.

Client devices and application types will also be dynamically shifting both in numbers and types, and crossing the chasm between the consumer and business spaces.

The new requirements for businesses point to the need for planning and proper support of infrastructures that can accommodate these clients, and to do so with an attractive long-term price-point.

So we're here to discuss how client virtualization infrastructure works, where it fits in to support multiple future client directions, and why it makes a tremendous amount of sense economically.

Here with us to explain how to take better advantage of client virtualization is Dan Nordhues, Marketing and Business Manager for Client Virtualization Solutions in HP's Industry Standard Servers Organization. Welcome to BriefingsDirect, Dan.

Dan Nordhues: Thank you.

Gardner: Tell us why we're in such a dynamic market now. What’s going on with clients, and why do we seem to be having a mismatch between applications and user devices, and the support services on the back-end?

Consistent reasons

Nordhues: I can start by describing why now is the time to take a look at this, and some of the dynamics going on. There are quite a few, and different customers will resonate more with some than others.

The major reasons are pretty consistent. We have an increasingly global and mobile workforce out there. Roughly 60 percent of employees don't work where their company's headquarters are, and they work differently.

It's a digital generation, with millions of new folks entering the workforce, and they've grown up expecting to be mobile and increasingly global. So, we need to have computing environments that don't require us to report to a post in an office building in order to get work done.

Then, there is just the burden of managing those desktops as they become more and more distributed underneath desks. And, of course, there's the impact of security, which is always the highest on customer lists. The average cost of a security breach in the U.S. is almost $7 million. Over 10,000 laptops go missing from airports every week. These are just some of the things that are pressing for a different approach.

And there's a real catalyst as well in the availability and launch of Windows 7 since late last year. Many organizations are looking at their transition plans there. It's a natural time to look at a way to do the desktop differently than it has been done in the past.

Gardner: One thing that's interesting about the client virtualization decision process is that you really need to think across multiple domains. It's not just about the client or even the network. You have to think about storage, server resources, and the culture -- the fact that many different groups of people need to be cooperating.

Nordhues: Yes, that's definitely true. If you look at organizations today that have a desktop environment, distributed PCs, or workstations at the desk, the people doing the IT there are typically different from the data-center folks managing the server environment. When we go to client virtualization, you still have a user endpoint out where the user is, at the desk or mobile, but you've got the applications and the desktops actually running on servers in the data center.

So now you get the storage folks in IT, the networking folks, and the server support folks all involved in the support of the desk-side environment. It definitely brings a new dynamic.

You also have to look at the data center and its capacity to house the increased number of servers, storage, and networking that has to go there to support the user.

So it's a different kind of discussion. We've found that the most successful approach is to include all of those folks in the conversation from the very beginning, as you look at doing desktop virtualization. Otherwise, as you move toward integration and deployment, you're going to run into problems if you haven't taken that step early on.

Gardner: Is there a significant amount of waste in the way things are done now in terms of redundancy or under-utilization? Is there an additional efficiency opportunity -- when you take this total client support picture into consideration?

Unused CPU cycles

Nordhues: Certainly. Customers tell us all the time that they believe that they have a lot of untapped CPU cycles, for example, at the desktops. As we look across the processors going into these systems, they are at least dual-core.

We have capability up past quad-core and 6-core today. Typical PC users, a very broad section of the users, don’t really require all of that performance. We even have customers trying to tap into those unused cycles across the network at night, for example.

When you put that into the data center, it’s much more manageable. Customers who are deploying see 40-60 percent improvements in IT productivity. That’s really what we're after -- IT productivity measured in terms of how many users each IT person can support.

This is not a prescription for getting rid of those IT people. In fact, there is a lot of benefit to the businesses by moving those folks to do more innovation, and to free up cycles to do that, instead of spending all those cycles managing a desktop environment that may be fairly difficult to manage.

The reason the manageability is easier is that the tools in the data center side, on the server side, are much more robust than what customers typically have deployed on the desktop side.

Gardner: You mentioned security a little earlier. It’s interesting to me that people often think of things delivered as a service as less secure, but in the case of clients and data, having that all on the server, managed centrally, with permissions, policies, rules, and governance, tends to be more secure than the hard drive full of systems and data out and about in the workspace.

Nordhues: Exactly. That’s definitely a primary consideration here. When we do surveys of customers, it almost always comes back to security as being the top consideration.

We have customers out there, large enterprise accounts, who are spending north of $100 million a year just to protect themselves from internal fraud. We're all very familiar with the privacy policies that we get in the mail, and companies have to comply with increasingly strict rules to protect the consumer.

With client virtualization, the security is built in. You have everything in the data center. You can’t have users on the user endpoint side, which may be a thin client access device, taking files away on USB keys or sticks.

It’s all something that can be protected by IT, and they can give access only to users as they see fit. In most cases, they want to strictly control that. Also, you don’t have users putting applications that you don't want, like MP3 players or games, on top of your IT infrastructure.

In the desktop virtualization area, what really comes out to the user device now is just pixel information. Think about extending your monitor cable to anywhere in the world you want to go, along with your keyboard and mouse. You really don’t have access to the files. These protocols just give you the screen information, collect your user inputs from the keyboard and mouse, and take those back to the application or the desktop in the data center.

Future proofing

Gardner: Dan, tell me about the notion of future-proofing or protecting yourselves. We are hearing a lot about cloud computing nowadays. Service-oriented architecture is becoming more popular. Virtualization is the driver in many instances.

But we're also seeing a proliferation of different types of endpoints -- smartphones, tablets, netbooks, and so forth. There seems to be a drive toward a more singular server-side application base that then presents in a variety of different ways, depending on what is calling it.

Tell me what you think the enterprise is going to need to do about that, and does client virtualization set you up to perhaps be in a better position to anticipate some of these new directions?

Nordhues: Let’s start at the user endpoint and then we can go back to the cloud side or typically what’s going to sit in the data center. At the user endpoint, as I mentioned, with an increasingly global and mobile workforce, there is a variety of devices that users are going to want to work on, but we have to be pragmatic about this as well.

You just can't get the same experience on a smartphone with a three-and-a-half-inch screen that you can get if you're sitting in the office with a nice 22-inch LCD monitor, for example. The mouse input, the human interface, is typically not as good an experience as what you have at a desk-side access point, like a thin client.

When you go mobile, you give up some things. However, the major selling point is that you can get access. You can check in on a running process, if you need to see how things are progressing. You can do some simple things like go in and monitor processes, call logs, or things like that. Having that access is increasingly important.

On the data center side, as we start talking about cloud, the solution is really progressing. HP is moving very strongly toward what we call converged infrastructure, which is wire it once and then have it provisioned and be ready to provide the services that you need. We're on a path where the hardware pieces are there to deliver on that.

Delivering packaged services out to the end user is something that’s still being worked out by software providers, and you're going to see some more elements of that come out as we go through the next year. These are things like how to charge the departments for their usage, how to measure how many virtual machine hours exist in your environment, for example, and how to do billing to pay for your IT, or whatever infrastructure you need within your customer environment.

A lot of that is coming. The hardware and the ability to provision that are there today. I'm talking more about truly getting to the cloud, where you have services packaged up and delivered on demand to the user community.

Gardner: It seems that if you can take this approach toward wire it once, you're in a position to do a variety of things in terms of where your workforce and requirements are going, but you can also support your legacy in older applications, so you have covered your backward compatibility, too.

What’s preventing people from doing this? What do they need to do to think about getting started, if you understand that managing that infrastructure portion is going to help solve some of these issues as you move out toward the edge, considering what applications to keep, and what perhaps cloud services and hybrid services to bring in?

Aligning expectations

Nordhues: When you look at desktop virtualization, whether it’s a server-based computing environment, where you are delivering applications, or if you are delivering the whole desktop, as in VDI, to get started you really have to take a look at your whole environment and make sure that you're doing a proper analysis and are actually ready. The expectations have to be aligned.

We hear from customers who think that client virtualization is magic for some reason, and that their boot times are going to go to under 20 seconds. That’s just not true. While it can certainly help, you have to have the right expectations going in. You have users with peripherals at the desk that are important to running their job. Are they still going to have access to those peripherals, when they go to a VDI environment, for example?

Probably the most important thing, where we have seen some customer missteps in the past, is really understanding user segmentation. Which of your users are really light, task-oriented workers? They maybe touch a couple of applications primarily, and that's the majority of what they do. At the other end of your spectrum, you have your power users using four or five times as many I/O operations per second.
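To show why that segmentation matters for sizing, here is a small back-of-the-envelope sketch; the per-segment IOPS budgets and the peak factor are assumptions chosen for illustration, not HP planning figures.

```python
# Assumed per-seat steady-state IOPS budgets by user segment. Power users
# can consume several times what a task worker does, so the mix drives
# the storage design far more than the raw seat count.
SEGMENT_IOPS = {"task": 8, "knowledge": 15, "power": 40}

def required_iops(seats: dict[str, int], peak_factor: float = 1.5) -> int:
    """Aggregate steady-state IOPS across segments, with headroom
    for peaks such as boot and login storms."""
    steady = sum(SEGMENT_IOPS[segment] * count for segment, count in seats.items())
    return int(steady * peak_factor)

print(required_iops({"task": 600, "knowledge": 300, "power": 100}))
# 600*8 + 300*15 + 100*40 = 13,300 steady-state; ~19,950 with 1.5x headroom
```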

Gardner: Dan, given the fact that you've got so many of these variables to manage, not just a matter of flipping the switch and going to VDI, are there any reference architectures or places where people can begin to look at this larger, more comprehensive approach?

Nordhues: Certainly. This is where HP is squarely headed. We've launched several reference architectures and we are going to continue to head down this path. A reference architecture is a prescribed solution for a given set of problems.

For example, in June, we just launched a reference architecture for VDI that uses some iSCSI SAN storage technology, and storage has traditionally been one of the cost factors in deploying client virtualization. It has been very costly to deploy Fibre Channel SAN, for example. So, moving to this iSCSI SAN technology is helping to reduce the cost and provide fantastic performance.

In this reference architecture, we've done the system integration for the customer. A lot of the deployment issue, and what makes this difficult, is that there are so many choices. You have to choose which server to use and from which vendor: HP, Dell, IBM, or Cisco? Which storage to choose: HP, EMC, or NetApp? Then, you have got the software piece of it. Which hypervisor to use: Microsoft, VMware, or Citrix? Once you chase all these down and do your testing and your proof of concept, it can take quite a substantial length of time.

What HP has done with these reference architectures is say, "Look, Mr. Customer, we've done all this for you. Here is the server and storage and all the way out to the thin client solution. We've tested it. We've engineered it with our partners and with the software stack, and we can tell you that this VDI solution will support exactly this many knowledge workers or that many productivity users in your PC environment." So, you take that system integration task away from the customer, because HP has done it for them.

Where we're headed with this, even more broadly than VDI, is back to the converged infrastructure, where we talked about wire it once and have it be a solution. Say you're an office worker and you're just getting applications virtualized out to you. You're going to use Microsoft Office-type applications. You don’t need a whole desktop. Maybe you just need some applications streamed to you.

Maybe, you're more of a power user, and you need that whole desktop environment provided by VDI. We'll provide reference architectures with just wire it once type of infrastructure with storage. Depending on what type of user you are, it can deliver both the services and the experience without having to go back and re-provision or start over, which can take weeks and months, instead of minutes.

Crawl, walk, run

Gardner: Correct me if I'm wrong, but this is something you can bite off in somewhat of a manageable chunk. As they say, there is a crawl-walk-run approach to this, where you start based on a certain class of user within your organization or an overriding security concern. Perhaps you can explain to me how this develops organizationally?

Nordhues: Certainly. We targeted the enterprise first. Some of our reference architectures that are out there today exist for 1,000-plus users in a VDI environment. If you go to some of the lower-end offerings we have, they are still in the 400-500 range.

We're looking at bringing that down even further with some new storage technologies, which will get us down to a couple of hundred users, the small and medium business (SMB) market, certainly the mid-market, and making it just very easy for those folks to deploy. They'll have it come completely packaged.

Today, we have reference architectures based on VDI or based on server-based computing and delivering just the applications. As I mentioned before, we're looking at marrying those, so you truly have a wire-once infrastructure that can deliver whatever the needs are for your broad user community.

Gardner: This really sounds like an ecosystem type of an affair, where there are so many different players: hardware, software, storage, services, and changing methodologies. How do you classify this? Is this something that you would call systems integration? How do we put a label on what it needs or what it takes to go from your current state to a client-virtualized state?

Nordhues: You've touched on the experience of moving this into an organization for the first time. We mentioned earlier on the call, you get a lot of different IT folks involved in the data center. There are separate folks for storage, networking, and for servers. And, you've got the desktop, traditional folks from out in the user environment.

You're bringing in all these choices of servers, storage, software, and software stack technology. Getting agreement and moving forward is a very time-consuming and risky part of this, because it hasn't necessarily ever been tested together in exactly the configuration that the customer might end up settling on deploying. That does introduce risk into the equation.

In addition to the reference architectures, where we have taken that system integration responsibility or task largely away from the customer and made it easier, HP also has a total set of services around client virtualization, and we can do a number of things. We come in and have a strategy discussion with you. Are you really ready in your environment to do this? Here are some of the things you should consider. Here's what you need to get nailed down before you really start.

Then, we can even do assessments, where we determine how ready you might be, which user community you maybe should be started with first, and then all the way to complete deployment services, if you want that.

Also, instead of selling you the servers and the solution for you to deploy yourself, HP can even take it to a managed-services standpoint, where if you just want to tell HP, "We'd like to turn on 500 users next month," we can make it happen.

Gardner: That sounds to me a bit like a private cloud or even hybrid cloud types of services. Do you save any cycles, Dan, in moving towards client virtualization by already having a private cloud or virtualization or hybrid cloud strategy in place?

Cloud strategy

Nordhues: Certainly, you need to take a look at how these might fit together with the cloud strategy that you already have in place. Just over the last six to nine months, the whole cloud area has erupted, if you read what’s out there on the web or in trade magazines. There’s a lot of new technology coming. I referred to a part of it before, where there are more software solutions coming around deploying services, instead of just provisioning the hardware, for example.

For customers who already have a solution in place, it’s very much worth taking a look at what’s coming -- specifically, what HP can offer and how it might marry with what you have deployed.

But that's a very broad topic, and "your mileage may vary," I guess, is the way to talk about it, because it really depends on exactly what you have deployed and how you have it deployed. You have to ask if there's a better way to do this.

Gardner: When we think about examples, it’s always nice to tell, but to show is even more impactful. Do you have some examples that we could point to about folks who have taken the plunge, perhaps for security reasons, and moved into client virtualization? What sort of experience have they had? Do you have any metrics for success or for economic impact?

Nordhues: We have a number of customer references. I won’t call them out specifically on the podcast here, but we do have some of these posted out on HP.com/go/clientvirtualization, and we continue to post more of our customer case studies out there. They are across the whole desktop virtualization space. Some are on server-based computing or sharing applications, some are based on VDI environments, and we continue to add to those.

HP also has an ROI or TCO calculator that we put together specifically for this space. You show a customer a case study and they say, "Well, that doesn’t really match my pain points. That doesn’t really match my problem. We don’t have that IT issue," or "We don’t have that energy, power issue."

We created this calculator so that customers can put in their own data. It's a fairly robust tool: you can put in information about what your desktop environment is costing you today, what it would cost to put in a client virtualization environment, and what you can expect as far as your return on investment. So, it's a compelling part of the discussion.

Obviously, with any new computing technology, the underlying consideration is always cost -- or, in this case, a lot of customers look at it from a cost-per-seat perspective -- and this is no different, which is why we have provided the tool and the consulting around that.
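In miniature, the cost-per-seat comparison such a calculator runs looks something like the sketch below; all of the input numbers here are hypothetical placeholders, not outputs of HP's tool.

```python
def payback_months(seats: int, current_per_seat_yr: float,
                   vdi_per_seat_yr: float, upfront_investment: float) -> float:
    """Months until cumulative per-seat savings cover the up-front
    server, storage, and licensing spend. Illustrative only."""
    annual_savings = seats * (current_per_seat_yr - vdi_per_seat_yr)
    if annual_savings <= 0:
        return float("inf")       # no payback if per-seat cost doesn't drop
    return upfront_investment / (annual_savings / 12.0)

# Hypothetical inputs: 1,000 seats at $1,100/seat/year today versus
# $800/seat/year virtualized, against a $750K up-front build-out.
print(round(payback_months(1000, 1100, 800, 750_000)))   # 30 months
```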

Gardner: Dan, we've focused today on the infrastructure side quite a bit, and we've talked about how the proliferation of device types is well along. I suppose it also makes sense to mention the fact that HP has several lines of thin clients that fit into this scenario as well.

Thin-client side

Nordhues: Absolutely. As I mentioned before, these reference architectures that we have are not just on the data-center side, but they include specific models on the thin-client side as well.

Whether you're talking about the thin client or the user access side, all the way to the data center, HP has the whole area covered today with our solution. HP is number one in thin clients in the world. We've got a whole lineup architected toward different use cases, whether you're doing full VDI, or your very high-end engineers or traders need a high-end thin client that delivers the user experience, all the way down to mobile thin clients that look very much like laptops but don't have the hard drive and the sensitive data in them.

That's why HP believes we can position ourselves as a leader here in client virtualization -- not just because of the data center-to-thin-client coverage and the products and solutions we bring to bear, but because of our partnerships with the major software vendors: Microsoft, Citrix, VMware, and all the services, all the way out to managed services, if you want HP to come in and do this implementation for you.

Gardner: You've mentioned some examples that people can look to. Are there any other places to get started, any other resources or destinations that would offer a pass into either more education or actual implementation?

Nordhues: On that same website that I mentioned, HP.com/go/clientvirtualization, we have our technical white papers that we've published, along with each of these reference architectures.

For example, if you pick the VDI reference architecture that will support 1,000-plus users in general, there is a 100-page white paper that talks about exactly how we tested it, how we engineered it, and how it scales with VMware View or with Microsoft Hyper-V plus Citrix XenDesktop.

It takes you through how we did the testing and the methodology, and what the gotchas are -- don't do this, do it this way. That can be tremendously useful reading for the IT manager or CIO who is struggling to wrap their brain around this and figure out how to make it work, because it's very descriptive about how you put this together and about some of the choice points you should and shouldn't make.

I would recommend that as a next step to anybody who is seriously considering or getting close to kicking the tires or thinking about a deployment.

Gardner: Of course we can't pre-announce things, but it’s my understanding that there’s quite a bit of activity within HP, a whole area beyond what we have talked about. Perhaps you could pique our interest a bit about what to expect later this year.

Nordhues: I've alluded to it a little bit already on the call. We're moving consistently toward our converged infrastructure story: wire once and then have the services delivered. As we go through the coming months and into next year, you're going to see more of that, filling out reference architectures tailored toward small and medium businesses, as I mentioned.

Also, a hybrid solution could in the future deliver VDI plus server-based computing together and cover your whole gamut of users, from the lowest-end task-oriented user all the way up to the highest-end power users that you have.

And, we're going to see services wrapped around all of this, just to make it that much simpler for the customers to take this, deploy it, and know that it’s going to be successful.

Gardner: It sounds very exciting. We'll look forward to hearing more about that.

We've been here today, talking about client virtualization infrastructure, how it works, where it fits in, and how it can support multiple future directions, as well as support the legacy and existing requirements. To help us through this, we have been joined by Dan Nordhues, Marketing and Business Manager for Client Virtualization Solutions in HP’s Industry Standard Servers Organization.

Thanks so much, Dan. That was really very interesting.

Nordhues: Thank you very much for the opportunity.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for joining us, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a sponsored podcast discussion on present benefits and future trends for client virtualization. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Tuesday, August 31, 2010

Explore the Myths and Means of Scaling Out Virtualization Via Automation Across Data Centers

Transcript of a podcast discussion on how automation and best practices allow for far greater degrees of virtualization and efficiency across enterprise data centers.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the improved and increased use of virtualization in data centers. We'll delve into how automation and policy-driven processes and best practices are offering a slew of opportunities for optimizing virtualization. Server, storage, and network virtualization use are all rapidly moving from points of progress into more holistic levels of adoption.

The goals are data center transformation, performance and workload agility, and cost and energy efficiency. But the trap of unchecked virtualization complexity can have a stifling effect on the advantageous spread of virtualization. Indeed, many enterprises may think they have already exhausted their virtualization paybacks, when in fact, they have only scratched the surface of the potential long-term benefits.

In some cases, levels of virtualization are stalling at 30 percent adoption, yet other data centers are leveraging automation and best practices and moving to 70 percent and even 80 percent adoption rates. By taking such a strategic outlook on virtualization, we'll see how automation sets up companies to better exploit cloud computing and IT transformation benefits at the pace of their choosing, not based on artificial limits imposed by dated or manual management practices.

Here now to discuss how automation can help you achieve strategic levels of virtualization adoption is our first guest, Erik Frieberg, Vice President of Solutions Marketing at HP Software. Welcome to BriefingsDirect, Erik.

Erik Frieberg: Great. Good to be here.

Gardner: And, we're here with Erik Vogel, Practice Principal and America's Lead for Cloud Resources at HP. Welcome, Erik Vogel.

Erik Vogel: Well, thank you.

Gardner: Let's start the discussion with you, Erik Frieberg. Tell me, why is there a misconception about acceptable adoption levels of virtualization out there?

Frieberg: When I talk to people about automation, they consistently talk about what I call "element automation." Provisioning a server, a database, or a network device is a good first step, and we see growing market adoption of automating these physical elements. What we're also seeing is the idea of moving beyond individual element automation to full process automation.

IT is in the process of serving the business, and the business is asking for whole application service provisioning. So it's not just these individual elements, but tying them all together along with middleware, databases, and objects, and doing this whole-stack provisioning.

When you look at the adoption, you have to look at where people are going, as far as the individual elements, versus the ultimate goal of automating the provisioning and rolling out a complete business service or application.

Gardner: Is there something in general that folks don't appreciate around this level of acceptable use of virtualization, or is there a need for education?

Perceptible timing

Frieberg: It comes down to what I call the difference in perceptible timing. Often, when businesses are asking for new applications or services, the response is three, four, or five weeks to roll something out. This is because you're automating individual pieces but it's still left to IT to glue all the individual element automation together to deliver that business service.

As companies expand their use of automation to automate the full services, they're able to reduce that time from months down to days or weeks. This is what some people are starting to call cloud provisioning or self-service business application provisioning. This is really the ultimate goal -- provisioning these full applications and services versus what is often IT’s goal -- automating the building blocks of a full business service.
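
As a rough illustration of the difference between element automation and full-service provisioning, the sketch below chains hypothetical element-level steps into one service-level workflow. The function names, parameters, and stack are invented for illustration and do not correspond to any particular HP product.

    # Hypothetical sketch: composing element automation into full-service provisioning.
    # Each provision_* call stands in for an element-level automation an IT team already has.

    def provision_server(cpu, ram_gb):        # element automation
        return {"type": "vm", "cpu": cpu, "ram_gb": ram_gb}

    def provision_database(engine, size_gb):  # element automation
        return {"type": "db", "engine": engine, "size_gb": size_gb}

    def provision_app_server(runtime):        # element automation
        return {"type": "app", "runtime": runtime}

    def provision_business_service(spec):
        """Service-level automation: roll out the whole stack in one governed workflow."""
        stack = [
            provision_server(spec["cpu"], spec["ram_gb"]),
            provision_database(spec["db_engine"], spec["db_size_gb"]),
            provision_app_server(spec["runtime"]),
        ]
        return {"service": spec["name"], "components": stack}

    # Self-service request for a complete application service, not individual elements.
    print(provision_business_service({
        "name": "order-entry", "cpu": 4, "ram_gb": 16,
        "db_engine": "postgres", "db_size_gb": 200, "runtime": "java",
    }))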

Gardner: I see. So we're really moving from a tactical approach to a strategic approach?

Frieberg: Exactly.

Gardner: What about HP? Is there something about the way that you have either used virtualization yourselves or have worked with a variety of customers that leads you to believe that there is a lot more uptake? Are we really only in the first inning or two of virtualization from HP's perspective?

Frieberg: We're maybe in the second inning, but we're certainly early in the life cycle. We're seeing companies moving beyond the traditional automation, and their first goal, which is often around freeing up labor for common tasks.

Companies will look at things like how they baseline what they have and how they patch and provision new services today, then move on to what is called deployment automation -- the ability to move applications from the development environment into the production environment.

You're starting to see the movement beyond those initial goals of eliminating people to ensuring compliance. They're asking how do I establish and enforce compliance policies across my organization, and beyond that, really capturing or using best practices within the organization.

So we're maturing and moving to further "innings" by automating the process more and also getting further benefits around compliance and best practices for use through our automation efforts.

Gardner: When you can move in that direction, at that level, you start to really move into what we call data center transformation, rather than spot server improvements or rack-by-rack improvements.

Frieberg: Exactly. This is where you're starting to see what some people call the "lights out" data center. It has the same amount or even less physical infrastructure, uses less power, and you see the absence of people. These large data centers have very few people working in them, but at the same time they're delivering applications and services at a far higher rate than IT has traditionally provided.

Gardner: Erik Vogel, are there other misconceptions that you’ve perceived in the marketplace in terms of where virtualization adoption can go?

Biggest misconception

Vogel: Probably the biggest misconception that I see with clients is the assumption that they're fully virtualized, when they're probably only 30 or 40 percent virtualized. They've gone out and done the virtualization of IT, for example, and they haven't even started to look at Tier 1 applications.

The misconception is that we can't virtualize Tier 1 apps. In reality, we see clients doing it every day. The broadest misconception is what virtualization can do and how far it can get you. Thirty percent is the low-end threshold today. We're seeing clients who are 75-80 percent virtualized in Tier 1 applications.

Gardner: Erik Frieberg, back to you. Perhaps there is a laundry list of misconceptions that we can go through and then discount them. If we're going to go from that 30 percent into that strategic level, what are some specific things that are holding people back?

Frieberg: When I talk to customers about their use of virtualization, you're right. They virtualize the easy stuff.

The three misconceptions I see a lot are, one, that automation and virtualization are just about reducing head count. The second is that automation doesn't have much impact on compliance. The third is that, because automation happens at the element level, people just don't understand how they would do this for Tier 1 workloads.

Gardner: Let's now get into what we mean by automation. How do you go about automating in such a way that you don't fall into these traps and you can enjoy the things that you've been describing in terms of better compliance, better process, and repeatability?

Frieberg: What we're seeing in companies is that they're realizing that their business applications and services are becoming too complex for humans to manage quickly and reliably.

The demands of provisioning, managing, and moving in this new agile development environment and this environment of hybrid IT, where you're consuming more business services, is really moving beyond what a lot of people can manage. The idea is that they are looking at automation to make their life easier, to operate IT in a compliant way, and also deliver on the overall business goals of a more agile IT.

Companies are almost going through three phases of maturity when they do this. The first aspect is that a lot of automation revolves around "run book automation" (RBA), which is this physical book that has all these scripts and processes that IT is supposed to look at.

But, what you find is that their processes are not very standardized. They might have five different ways of configuring a device, resetting a server, or checking why an application isn't working.

So, as we look at maturity, you've got to standardize on a set of ways. You have to do things consistently. Once you standardize methods, you find you're able to reach the second level of maturity, which is consolidation. We don't need to provision a PC 16 different ways; we can do it one way with three variations. When you do that, you move up to automating that process. Then, you use that individual process automation, or element automation, in the larger process and tie it all together.

That’s how we see companies or organizations moving up this maturity curve within automation.
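
To make the "one way with three variations" idea concrete, here is a minimal sketch of a single consolidated provisioning routine with a small set of approved variants; the variant names and settings are assumptions for illustration only.

    # Hypothetical sketch: one standardized PC-provisioning routine with three permitted
    # variants, instead of 16 ad-hoc ways of doing the same task.

    VARIANTS = {
        "standard":  {"image": "corp-base", "ram_gb": 8,  "apps": ["office"]},
        "developer": {"image": "corp-base", "ram_gb": 16, "apps": ["office", "ide"]},
        "kiosk":     {"image": "corp-lite", "ram_gb": 4,  "apps": []},
    }

    def provision_pc(hostname, variant="standard"):
        """Single consolidated process; anything outside the approved variants is rejected."""
        if variant not in VARIANTS:
            raise ValueError(f"Unsupported variant: {variant}")
        config = VARIANTS[variant]
        return {"hostname": hostname, **config}

    print(provision_pc("fin-ws-042", "standard"))
    print(provision_pc("dev-ws-007", "developer"))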

Gardner: I was intrigued by that RBA example you gave. There are occasions where folks think they're automated, but are not. Is there a litmus test for whether automation is where you need to go, not just where you've been?

The easy aspects

Frieberg: Automation is similar to the statistics you gave in virtualization, where people are exploring automation and they're automating the easy aspects, but they're hitting roadblocks in understanding how they can drive automation further in their organization.

Something I have used as a litmus test is that run book. How thick is it now and how thick was it a month ago or a year ago, when you started automation? How have you consolidated it through your automation processes?

We see companies not standardizing, consolidating, or making the tough choices that would enable them to push automation further. A lot of it is just a hard-held belief about what can be automated in IT versus what can't. It's very analogous to how they approached virtualization -- I can do these types of workloads, but not these others. A lot of these beliefs are rooted in old facts, not in what the technology or new software solutions can do today.

Gardner: So, perhaps an indication that they are actually doing automation is that the run book is getting smaller?

Frieberg: Exactly. The other thing I look at, as companies start to roll out applications, is not just the automation, but the consistency. You read different figures within the industry: 50 percent of the time, when you make a change to your environment, you cause an unforeseen downstream effect. You change something, and something else breaks further down.

When you automate processes, we tend to see that drop dramatically. Some estimates have put the unforeseen impact as low as five percent. So, you can also measure your unforeseen downstream effects and ask, "Should I automate these processes that are tedious, time-consuming, and non-compliant when people do them, and can I automate them to eliminate these downstream effects, which I'm trying to avoid in my organization?"
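
As a back-of-the-envelope reading of those two figures, assume a hypothetical 200 changes a month:

    # Back-of-the-envelope impact of reducing unforeseen downstream effects, using the
    # 50 percent and 5 percent figures cited above; the change volume is an assumption.

    changes_per_month = 200
    manual_rate, automated_rate = 0.50, 0.05

    print("Manual changes:   ", int(changes_per_month * manual_rate), "unforeseen incidents/month")
    print("Automated changes:", int(changes_per_month * automated_rate), "unforeseen incidents/month")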

Gardner: Erik Vogel, when these folks recognize that they need to be more aggressive with automation in order to do virtualization better, enjoy their cost performance improvements, and ultimately get towards their data center transformation, what is it that they need to be thinking of? Are performance and efficiency the goals? How do we move toward this higher level of virtualization?

Vogel: One of the challenges that our clients face is how to build the business case for moving from 30 percent to 60 or 70 percent virtualized. This is an ongoing debate within a number of clients today, because they look at that initial upfront cost and see that the investment is probably higher than what they were anticipating. I think in a lot of cases that is holding our clients back from really achieving these higher levels of virtualization.

In order to really make that jump, the business case has to be made on more than just reduction in headcount or less work effort. We see clients having to look at things like improved availability, the ability to do migrations, streamlined backup capabilities, and improved fault tolerance. When you start looking across the broader picture of the benefits, it becomes easier to make a business case for moving to a higher percentage of virtualization.

One of the impediments, unfortunately, is that there is kind of an economic hold. The way we're creating these business cases today doesn't show the true value and benefit of enhanced virtualization automation. We need to rethink the way we put these business cases together to really incorporate a lot of the bigger benefits that we're seeing with clients who have moved to a higher percentage of virtualization.

Gardner: In order to attain that business benefit, make the investment a clear winner, and demonstrate the return, what is it that needs to happen? Is this a best-of-breed equation, where we need to pull together the right parts? Is it the people equation about the operations, or all of the above? And how does HP approach that stew of different elements?

All of the above

Vogel: It's really all of the above. One of the things we saw early on with virtualization is that just moving to a virtual environment does not necessarily reduce a lot of the maintenance and management that we have, because we haven’t really done anything to reduce the number of OS instances that have to be managed.

If we're just looking at virtualizing and just moving from physical to virtual devices, we may be reducing our asset footprint and gaining the benefits of just managing fewer physical assets. From a logical standpoint, we still have the same number of servers and the same number of OS instances. So, we still have the same amount of complexity in managing the environment.

The benefits are relatively constrained, if we look at it from just a physical footprint reduction. In some cases, it might be significant if a client is running out of data-center space, power, or cooling capacity within the data center. Then, virtualization makes a lot of sense because of the reduction in asset footprint.

But, when we start looking at coupling virtualization with improved process and improved governance, thereby reducing the number of OS instances, application rationalization, and those kinds of broader process type issues, then we start to see the big benefits come into play.

Now, we're not talking just about reducing the asset footprint. We're also talking about reducing the number of OS instances. Hence, the management complexity of that environment will decrease. In reality, the big benefits are on the logical side and not so much on the physical side.

Gardner: It sounds like we're moving beyond that tactical benefit of virtualization, but thinking more about an operational fabric through which to support a variety of workloads -- and that's quite a leap.

Vogel: Absolutely. In fact, when we start talking about moving to a cloud-type environment, specifically within public cloud and private cloud, we're looking at having to do that process work and governance work. It becomes more than just talking about the hardware or the virtualization, but rather a broader question of how IT operates and procures services. We have to start changing the way we are thinking when we're going to stand up a number of virtual images.

When we start moving to a cloud environment, we talk about how we share a resource pool. Virtualization is obviously key and an underlying technology to enable that sharing of a virtual resource pool.

But it becomes very important to start talking about how we govern that: how we control who has access, how we can provision, what gets provisioned and when, how we de-provision when we're done with a particular environment, and how we enable that environment to scale up and scale down based on the demands of the workloads being run on it.

So, it's a much bigger problem and a more complicated problem as we start going to higher levels of virtualization and automation and create environments that start to look like a private cloud infrastructure.

Gardner: And yet, it's at that higher level of adoption that the really big paybacks kick in. Are there some misconceptions or some education issues that are perhaps holding companies back from moving toward that larger adoption, which will get them, in fact, those larger economic productivity and transformative benefits?

Lifecycle view

Vogel: The biggest challenge where education needs to occur is that we need to be looking at IT through a lifecycle view. A lot of times we get tied up just looking at an initial investment or what the upfront cost would be to deploy one of these environments. We're not truly looking at the cost to provide that service over a three-, four- or five-year period, because if we start to look carefully at what that lifecycle cost is, we can see that these shared environments, these virtualized environments with automation, are a fraction of the cost of a dedicated environment.

Now, there will need to be an upfront investment. That, I think, is causing a lot of concern for our clients because they look at it only in the short-term. If we look at it over a life-cycle approach and we educate clients to start seeing the cost to provide that service, that's when we start to see that it's easy to make a business case for moving to one of these environments.
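
A simple sketch shows why the lifecycle view changes the business case; the dollar figures are purely illustrative assumptions, chosen only to show how a higher upfront investment can still win over a multi-year horizon.

    # Illustrative lifecycle-cost comparison over five years; all figures are assumptions.

    def lifecycle_cost(upfront, annual_run_cost, years=5):
        """Cost to provide the service over the full planning horizon, not just year one."""
        return upfront + annual_run_cost * years

    dedicated   = lifecycle_cost(upfront=100_000, annual_run_cost=80_000)
    virtualized = lifecycle_cost(upfront=160_000, annual_run_cost=45_000)

    print(f"Dedicated environment:          ${dedicated:,}")
    print(f"Shared virtualized environment: ${virtualized:,}")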

It's a change in the way a lot of our clients think about developing business cases. It's a new model and a new way of looking at it, but it's something that's occurring across the industry today, and will continue to occur.

Gardner: I'm curious about the relationship that you're finding, as the adoption levels increase from, say, 30 percent to 60 or 70 percent. Are the benefits coming in on a linear basis as a fairly constant improvement? Or is there some sort of a hockey-stick effect, whereby there is an accelerating level of business benefits as the adoption increases?

Vogel: It really depends on the client situation, the type of applications, and their specific environment. Generally, we're still seeing increasing returns in almost a linear fashion, as we move into 60-70 percent virtualized.

As we move beyond that, it becomes very client-dependent. There are a lot of variables and a lot of factors in play, such as the type of applications that are running on it and the type of workloads and demands that are being placed on that environment. Depending on the client, they can still see benefits when they're 80-85 percent virtualized. Other clients will hit that economic threshold in the 60-65 percent virtualized range.

We do know that we're continuing to see benefits beyond that 30 percent, beyond the easy stuff, as they move into Tier 1 applications. Right now, we're looking at that 60-70 percent as the rule of thumb, where we're still seeing good returns for the investment. As applications continue to modernize and are better able to use virtual technologies, we'll see that threshold continue to increase into the 80-85 percent range.

Gardner: How about the type of payoff that might come as companies move into different computing models? If you have your sights set on cloud computing, private cloud, or hybrid cloud at some point, will you get a benefit or dividends from whatever strategic virtualization, governance and policy, and automation practices you put in place now?

Vogel: I don’t think anybody will question that there are continued significant benefits, as we start looking at different cloud computing models. If we look at what public cloud providers today are charging for infrastructure, versus what it costs a client today to stand up an equivalent server in their environment, the economics are very, very compelling to move to a cloud-type of model.

Now, with that said, we've also seen instances where costs have actually increased as a result of cloud implementation, and that's generally because the governance that was required was not in place. If you move to a virtual environment that's highly automated and you make it very easy for a user to provision in a cloud-type model and you don’t have correct governance in place, we have actually seen virtual server sprawl occur.

Everything pops up

All of a sudden, everybody starts provisioning environments, because it's so easy, and everything in this cloud environment begins to pop up, which results in increased software licensing costs. Plus, we still need to manage those environments.

Without the proper governance in place, we can actually see cost increase, but when we have the right governance and processes in place for this cloud environment, we've seen very compelling economics, and it's probably the most compelling change in IT from an economic perspective within the last 10 years.
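
One way to picture the governance piece is as a quota-and-lease gate in front of the self-service provisioning path, so that easy provisioning does not turn into sprawl. The departments, quotas, and lease rules below are invented for illustration.

    # Hypothetical governance gate in front of self-service provisioning, to keep easy
    # provisioning from becoming virtual server sprawl. Quotas and rules are assumptions.

    QUOTAS = {"finance": 20, "engineering": 50}     # max concurrent VMs per department
    ACTIVE = {"finance": 18, "engineering": 50}     # currently provisioned VMs

    def request_vm(department, justification, expiry_days):
        if department not in QUOTAS:
            return "rejected: unknown department"
        if ACTIVE[department] >= QUOTAS[department]:
            return "rejected: quota exhausted -- reclaim VMs or request a quota increase"
        if expiry_days > 90:
            return "pending: lease longer than 90 days needs manual approval"
        ACTIVE[department] += 1
        return f"approved: VM provisioned, auto-deprovision in {expiry_days} days ({justification})"

    print(request_vm("finance", "quarter-close reporting", 30))
    print(request_vm("engineering", "load test rig", 14))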

Gardner: So, there is a relationship between governance and automation. You really wouldn’t advise having them separate or even consecutive? They really need to go hand in hand?

Vogel: Absolutely. We've found in many, many client instances, where they've just gone out, procured hardware, and dropped it on the floor, that they did not realize the benefits they had expected from that cloud-type hardware. In order to function as a cloud, it needs to be managed as a cloud environment. That, as a prerequisite, requires strong governance, strong process, security controls, etc. So, you have to look at them together, if you're really going to operationalize a cloud environment, and by that I mean really be able to achieve those business benefits.

Gardner: Erik Frieberg, tying this back to data-center transformation, is there a relationship now that's needed between the architectural level and the virtualization level, and have they so far been distinct?

I guess I'm asking you the typical cultural question. Are the people who are in charge of virtualization and the people who are in charge of data center transformation the same people talking the same language? What do they need to do to make this more seamless?

Frieberg: I'll echo something Erik said. We hear clients talk about how it's not about virtualizing the server, but about virtualizing the service. Virtualizing a single server and putting it into production by cloning it is relatively straightforward. But, when you talk about an entire service and all the elements that make up that service, you're now talking about a whole host of people.

You get server people involved around provisioning. You’ve got network people. You’ve got storage people. Now, you're just talking about the infrastructure level. If you want to put app servers or database servers on top of this, you have those constituents involved, DBAs and other people. If you start to put production-level applications on there, you get application specialists.

You're now talking about almost a dozen people involved in what it takes to put a service in production, and if you're virtualizing that service, you have admins and others involved. So, you really have this environment of all these people who now have to work together.

A lot of automation is done by automating specific tasks. But, if you want to automate and virtualize this entire service, you've got to get those 12 people together to agree on the standard way to roll out that environment, and on how to do it in today's governed, compliant infrastructure.

The coordination required, to use a term used earlier, isn’t just linear. It sometimes becomes exponential. So, there are challenges, but the rewards are also exponential. This is why it takes weeks to put these into production. It isn’t the individual pieces. You're getting all these people working together and coordinated. This is extremely difficult and this is what companies find challenging.

Gardner: Erik Vogel, it sounds as if this allows for a maturity benefit, or a sense of maturity around these virtualization benefits. This isn't a one-off. This is a repeatable, almost core, competency. Is that how you're seeing this develop now? A company should recognize that you need to do virtualization strategically, but you need to bake it in. It's something that's not going to go away?

Capability standpoint

Vogel: That's absolutely correct. I always tend to shy away from saying maturity. Instead, I like to look at it from a capability standpoint. When we look at just maturity, we see organizations that are very mature today, but yet not capable of really embracing and leveraging virtualization as a strategic tool for IT.

So, we've developed a capability matrix across six broad domains to look at how a client needs to start to operationalize virtualization as opposed to just virtualizing a physical server.

We definitely understand and recognize that it has to be part of the IT strategy. It is not just a tactical decision to move a server from physical machine to a virtual machine, but rather it becomes part of an IT organization’s DNA that everything is going to move to this new environment.

We're really going to start looking at everything as a service, as opposed to as a server, as a network component, as a storage device, how those things come together, and how we virtualize the service itself as opposed to all of those unique components. It really becomes baked into an IT organization’s DNA, and we need to look very closely at their capability -- how capable an organization is from a cultural standpoint, a governance standpoint, and a process standpoint to really operationalize that concept.

Gardner: Erik Frieberg, moving toward this category of being a capability rather than a one-off, how do you get started? Are there some resources, some tried and true examples of how other companies have done this?

Frieberg: At HP Software, we have a number of assets to help companies get started. Most companies start around the area of automation. They move up in the same six-level model -- "What are the basic capabilities I need to standardize, consolidate, and automate my infrastructure?"

As you move further up, you start to move into this idea of private-cloud architectures. Last May, we introduced the Cloud Service Automation architecture, which enables companies to come in and ask, "What is my path from where I am today to where I want to get tomorrow? How can I map that to HP's reference architecture, and what do I need to put in place?"

The key goal here is that we work with clients who realize that you don’t want a two-year payback. You want to show payback in three or four months. Get that payback and then address the next challenge and the next challenge and the next challenge. It's not a big bang approach. It's this idea of continuous payback and improvement within your organization to move to the end goal of this private cloud or hybrid IT infrastructure.

Gardner: Erik Vogel, how about future trends? Are there any developments coming down the pike that you can look in your crystal ball and say, "Here are even more reasons why that capability, maturity, and strategic view of virtualization, looking toward some of the automation benefits, will pay dividends?"

The big trend

Vogel: I think the big trend -- and I'm sure everybody agrees -- is the move to cloud and cloud infrastructures. We're seeing the virtualization providers coming out with new versions of their software that enable very flexible cloud infrastructures.

This includes the ability to create hybrid cloud infrastructures, which are partially a private cloud that sits within your own site, and the ability to burst seamlessly to a public cloud as needed for excess capacity, as well as the ability to seamlessly transfer workloads in and out of a private cloud to a public cloud provider as needed.
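
A very simple way to picture bursting is a placement decision that fills private-cloud capacity first and overflows only the remainder to a public provider; the capacity figures below are assumptions for illustration.

    # Hypothetical hybrid-cloud bursting decision: run on the private cloud while capacity
    # allows, and burst only the overflow to a public provider. Figures are assumptions.

    PRIVATE_CAPACITY_VMS = 100

    def place_workloads(demand_vms, private_in_use):
        private_free = max(PRIVATE_CAPACITY_VMS - private_in_use, 0)
        to_private = min(demand_vms, private_free)
        to_public = demand_vms - to_private          # burst only the overflow
        return {"private": to_private, "public_burst": to_public}

    print(place_workloads(demand_vms=30, private_in_use=60))   # fits on the private cloud
    print(place_workloads(demand_vms=80, private_in_use=60))   # 40 on-site, 40 burst to public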

We're seeing the shift from IT becoming more of a service broker, where services are sourced and not just provided internally, as was traditionally done. Now, they're sourced from a public cloud provider or a public-service provider, or provided internally on a private cloud or on a dedicated piece of hardware. IT now has more choices than ever in how they go about procuring that service.

A major shift that we're seeing in IT is being facilitated by this notion of cloud. IT now has a lot of options in how they procure and source services, and they are now becoming that broker for these services. That’s probably the biggest trend and a lot of it is being driven by this transformation to more cloud-type architectures.

Gardner: Okay, last word to you Erik Frieberg. What trends do you expect will be more of an enticement or setup for automation and virtualization capabilities?

Frieberg: I'd just echo what Erik said and then add one more aspect. Most people, when they look at their virtualization infrastructure, aren’t going with a single provider. They're looking at having different virtualization stacks, either by hardware or software vendors that provide them, as well as incorporating other infrastructures.

The ability to be flexible and move different types of workloads to different virtualized infrastructures is key, because having that choice makes you more agile in the way you can do things. It will absolutely lower your cost and provide the infrastructure that really leads to the higher quality of service that IT is trying to deliver to end users.

Gardner: It also opens up the marketplace for services. If you can do virtualization and automation, then you can pick and choose providers. Therefore, you get the most bang for your buck and create a competitive environment. So that’s probably good news for everybody.

Frieberg: Exactly.

Gardner: We've been discussing how automation, governance, and capabilities around virtualization can take the sting out of moving toward a strategic level of virtualization adoption. I want to thank our guests. We've had a really interesting discussion with Erik Frieberg, Vice President of Solutions Marketing at HP Software. Thank you, Erik.

Frieberg: Thank you, very much.

Gardner: And also Erik Vogel, Practice Principal and America's lead for cloud resources at HP. Thanks to you also, Erik.

Vogel: Thank you.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a podcast discussion on how automation and best practices allow for far greater degrees of virtualization and efficiency across enterprise data centers. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.
