
Thursday, August 16, 2012

Columbia Sportswear Extends Deep Server Virtualization to Improved ERP Operations, Disaster Recovery Efficiencies

Transcript of a sponsored BriefingsDirect podcast on how Columbia Sportswear has harnessed virtualization to provide a host of benefits for its business units.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how outerwear and sportswear maker and distributor Columbia Sportswear has used virtualization to improve its business operations.

We’ll see how Columbia Sportswear’s use of deep virtualization helped rationalize its platforms and data center, and how it led to benefits in the company's enterprise resource planning (ERP) implementation. We’ll also see how it formed a foundation for improved disaster recovery (DR) best practices.

Stay with us now to learn more about how better systems make for better applications that deliver better business results. Here to share their virtualization journey is Michael Leeper, Senior Manager of IT Engineering at Columbia Sportswear in Portland, Oregon. Welcome, Michael. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Michael Leeper: Good morning, Dana.

Gardner: We’re also here with Suzan Frye, Manager of Systems Engineering at Columbia Sportswear. Welcome to BriefingsDirect, Suzan.

Suzan Frye: Good morning, Dana.

Gardner: Let’s start with you, Michael. Tell me a little bit about how you got into virtualization. What were some of the requirements that you needed to fulfill at the data center level? Then we’ll dig down into where that went and how it paid off.

Leeper: Pre-2009, we'd experimented with virtualization. It was one of those things that I had my teams working on, mostly so we could tell my boss that we were doing it, but there wasn’t a significant focus on it. It was a nice toy to play with in the corner, and it helped us in some small areas, but there were no big wins there.

In mid-2009, the board of directors at Columbia decided that we, as a company, needed a much stronger DR plan. That included the construction of a new data center for us to house our production environments offsite.

As we were working through the requirements of that project with my teams, it became pretty clear for us that virtualization was the way we were going to make that happen. For various reasons, we set off on this path of virtualization for our primary data center, as we were working through issues surrounding multiple data centers and DR processes.

Our technologies weren't based on the physical world any more. We were finding more issues in physical than we were in virtual. So we started down this path to virtualize our entire production world. By that point, mid-2010 had come around, and we were ready to go. We had built our DR stack and virtualized our primary data centers, taking us to an 80 to 90 percent virtual machine (VM) rate.

Extremely successful


We were extremely successful in that process. We were able to move our primary data center over a couple of weekends with very little downtime to the end users, and that was all built on VMware technology.

About a week after we had finished that project, I got a call from our CIO, who said he had purchased a new ERP system, and Columbia was going to start down the path of a fully new ERP implementation.

I was being asked at that time what platform we should run it on, and we had a clean slate to look everywhere we could to find what we felt was the safest and most stable platform to run the crown jewels of the company, which is ERP. For us, that was going to be the SAP stack.

So it wasn't a hard decision to virtualize ERP for us. We were 90 percent virtual anyway. That’s what we were good at, and that’s what our teams were staffed and skilled for. What we did was design the platform that we felt was going to meet our corporate standards and really meet our goals. For us, that was running ERP on VMware.

Gardner: It sounds as if you had a good rationale for moving into a highly virtualized environment, but that it made it easier for you to do other things. Am I reading too much into it, or would you really say that your migration for ERP was much easier as a result of being highly virtualized?

Leeper: There are a couple of things there. Specifically in the migration to virtualization, we knew we were going to have to go through the effort of moving operating systems from one site to another. We determined that we could do that once on the physical side, relatively easily, and probably the same amount of effort as doing it once by converting physical to virtual.

The problem was that the next time we wanted to move services back from one facility to another in the physical world, we're going to have to do that work again. In the virtual space, we never had to do it again.

For the teams to go through the effort of virtualizing a server and then move it to another data center, all we needed to do was do the work once. For my engineers, any time we get them to do the mundane stuff once, it's better than doing it multiple times. So we got that effort taken care of in that early phase of the project to virtualize our environments.

For the ERP platform specifically, this was a net new implementation. We were converting from a JD Edwards environment running on IBM big iron to a brand-new SAP stack. We didn’t have anything to migrate. This was really built from scratch.

So we didn’t have to worry about a lot of the legacy configurations or legacy environments that may have been there for us. We got to build it new. And by that point in our journey, virtualized was the only way for us to do it. That’s what we do, it’s how we do it, and that's what we’re good at.

Across the board


Gardner: Just for the benefit of our audience, let’s hear a bit more about Columbia Sportswear. You’re manufacturing, distributing, and retailing. I assume you’re doing an awful lot online. Give us a sense of the business requirements behind your story around virtualization, DR, and ERP.

Leeper: Columbia Sportswear is based in Portland, Oregon. We're the worldwide leader in apparel and accessories. We sell primarily outerwear and sportswear products, and a little bit of footwear, globally. We have about 4,000 employees and 50 some-odd physical locations, not counting retail, around the world. The products are primarily manufactured in Asia, with sales distribution happening in both Europe and the United States.

My teams out of the U.S. manage our global footprint, and we are the sole source of IT support globally from here.

Gardner: Let’s go to Suzan. Suzan, tell me a little bit about the pace at which you were able to embark on this virtualization journey. I saw some statistics that you went from 25 percent to 75 percent in about eight months, which was really impressive, and as Michael pointed out, you're now over 90 percent. How did you set the pace, and what was important in keeping that pace going?

Frye: The only way we could do it was with virtualization and using the efficiencies we gained with that. We centrally manage all of IT and engineering globally out of our headquarters in Portland. When we were given the initial project to move our data center and not only move our data center but provide DR services as well, it was a really easy sell to the business.

We could go to the business and explain to them the benefits of virtualization and what it would mean for their application. They wouldn’t have to rebuild and they wouldn’t have to bring in the vendor or any consultants. We can just take their systems, virtualize them, move them to our new data center, and then provide that automatic DR with Site Recovery Manager (SRM).

We had nine months to move our data center, and we were basically all hands on deck: everybody on the server engineering team, plus the storage and networking teams as well. And we had executive support and sponsorship. It was very easy for us to market virtualization to the business and start down that path of socializing the idea. A lot of people, of course, were dragging their feet a little bit. We all know that story.

But once they realized that we could move their application, bring it back up, and then move it between data centers almost seamlessly, it was an instant win for us. We started from that 20 percent to 30 percent virtualization, we were at about 75 percent in the middle of our DR project, and today we’re actually at around 93 percent.

Gardner: One of the things I hear a lot from people that are doing multiple things with virtualization, like you did, is where to start, how to do this in the right order? Is there anything that you could come back with from your experience on how to do it in the order that incentivizes people to adopt, as you pointed out, but then also allows you to move into these other benefits in a way that compounds the return on investment (ROI)?

Frye: I think it surprises people that we have a "virtualize first" strategy today. Now it’s assumed that your system will be virtual, with all the benefits that come with it: the flexibility, the portability, the optimization, and the efficiencies.

But like most companies, we had to start with some of our lower tier or lower service-level agreement (SLA) systems, our development systems, and start working with the business on getting them to understand some of the benefits that they could gain by working with virtual systems.

Performance is there

Again, people are always surprised. Do you have SQL virtualized? Do you have SAP virtualized? And the answer is yes, today we do, and the performance is there, the optimization is there, and that flexibility is there.

If you’re just starting out today, my advice would be to go ahead and start small. Give the business what they want, do it right, and give it the resources it needs to have. Under-promise and over-deliver, and let the business start seeing the efficiencies that they can realize, including some of those hidden efficiencies.

We can support DR testing. We can support almost instant data refreshes, cloning, and snapping, so their upgrades are more seamless, and they have an easier back-out plan.
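
[Editor's note: The speakers don't name their scripting tools, so the following is only an illustration of the pre-upgrade "snapping" Frye describes, using pyVmomi, VMware's Python SDK. The vCenter host, credentials, and VM name are placeholders.]

```python
# Minimal, illustrative sketch (not Columbia's actual tooling): take a quiesced
# snapshot of a VM before an upgrade so the team has a one-call back-out point.
# Requires pyVmomi (pip install pyvmomi); host, credentials, and the VM name
# "erp-app-01" are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "erp-app-01")  # hypothetical name
    view.Destroy()

    # memory=False keeps the snapshot small; quiesce=True asks VMware Tools to
    # flush guest I/O so the disk state is consistent for a restore.
    WaitForTask(vm.CreateSnapshot_Task(name="pre-upgrade",
                                       description="Back-out point for upgrade",
                                       memory=False, quiesce=True))
    # If the upgrade goes wrong, the back-out plan is one call:
    # WaitForTask(vm.RevertToCurrentSnapshot_Task())
finally:
    Disconnect(si)
```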

From an engineering and development perspective, we're giving them technologies that they could only dream of four or five years ago. And it’s really benefited the business in that we’re auto-provisioning. We’re provisioning in minutes versus days. We’re granting resources when needed.

It’s a more dynamic process for the business, and we’re really seeing that people are saying, "You’re not just a cost center anymore. You’re enabling us, you’re helping us to do what we need to do and basically doing it on-demand." So our team has really started shining these last few years, especially because of our high virtualization percentage.

Leeper: For a company that's looking to move to this virtualization space, they’ve got to get some wins. You’ve got to tackle some environments or some projects that you can be successful at, and hopefully by partnering with some business users and business owners who are willing to take a little bit of a chance.

If you set off trying to truly attack an entire data center virtualization project, you’re probably not going to be really successful at it. There are a lot of ways that the business, application vendors, and various things can throw some roadblocks in this.

Once you start chipping away at a couple of them and get beyond the easy stuff, go find one that maybe on paper is a little difficult, but go get that one done. Then you can very quickly point back to success on that piece and start working your way through the rest of them.

Gardner: Yeah, one of those roadblocks that you mentioned I've heard people refer to is issues around licensing and tracking and audits. How did you deal with that? Was that an issue for you when you got into moving onto a virtualized environment?

Leeper: Sure. It’s one of the first things that always comes up. I'm going to separate VMware and the VMware licensing from application licensing. On the application side of the house, it’s getting better today than it was two or three years ago, when we started this process.

Be confident

You have to be confident in your ability to deal with vendors and demand support on virtualization layers, work with them to help them understand their virtual licensing packages, and be very confident in your ability to get there.

Early on, we had to just look some vendors straight in the eye and tell them we were going to do this, because this was the best thing for our business, and they needed to figure out how to support us. In some cases, that's just having your team, when they call support, not open with "We’re running this on a VM."

We know we can replicate and then duplicate things in the background when we need to, but sometimes you just have to be smart about how you engage application partners that may not be quite as advanced as we are and work through that.

On the VMware side, it came down to their understanding of where our needs were, how to properly license some of this stuff, and working through some of those complexities. But it wasn't anything we spent a significant amount of time on.

Gardner: You both mentioned this importance of getting the buy-in on the business side and showing wins early, that sort of thing. Because it’s hard many times to put a concrete connection between something that happens in IT and then a business benefit, was there anything that you can think of specifically that benefited your business that you could then turn around and bring back and say, "Well that’s because we did X, Y, and Z with virtualization?"

Leeper: One of the cool ones we’ve talked about and used for one of our key wins involves our entire architecture obviously with virtualization being key to that.

We had a business unit acquire an SAP module, specifically the BPC for BW module. That was independent of our overall SAP project, and it was being run out of a separate business group.

They came to IT in the very late stages of this purchase and said, "These are our needs and requirements," and it was a fairly intense set of equipment. It was multiple servers, multiple environments, kind of up and down the stack, and they were bringing in outside consultants to help them with their implementation.

The interesting thing was, they had spec'd their statement of work (SOW) with these consultants to not start for four to six weeks, because they really believed that's how long it was going to take IT to get them their environments and their hardware, based on their old understanding of IT’s capabilities.

And the reality was that we could provide the test and development environments they needed to start with these consultants within a matter of hours, not weeks, and we did. I had the pleasure of calling the finance VP and informing him that his environments were ready and were probably just going to sit idle for the next four to six weeks until the consultants actually showed up, which surprised all sorts of people.

Add things later


We didn't have all their production capacities, but those are things we could add later. They didn’t need production capacity in the first month of the project anyway. So our ability to have that virtualized infrastructure and be able to rapidly deploy to meet business requirements is one of the really cool things we can do these days.
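
[Editor's note: The transcript doesn't say how those environments were stood up; hours-not-weeks provisioning on vSphere typically means cloning from a prepared template. A minimal pyVmomi sketch follows, with every inventory name hypothetical.]

```python
# Minimal sketch: provision a dev/test VM by cloning a prepared template, the
# kind of hours-scale deployment described above. All names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find(content, vimtype, name):
    """Look up a managed object by name in the vCenter inventory."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    template = find(content, vim.VirtualMachine, "sap-base-template")      # hypothetical
    cluster = find(content, vim.ClusterComputeResource, "dev-cluster")     # hypothetical
    datastore = find(content, vim.Datastore, "dev-datastore")              # hypothetical

    # Place the clone in the dev cluster's resource pool and power it on.
    spec = vim.vm.CloneSpec(
        location=vim.vm.RelocateSpec(pool=cluster.resourcePool, datastore=datastore),
        powerOn=True, template=False)
    WaitForTask(template.CloneVM_Task(folder=template.parent,
                                      name="sap-bpc-dev-01", spec=spec))
finally:
    Disconnect(si)
```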

Gardner: Suzan, you’ve mentioned that as an enabler, not a roadblock. So being able to keep up with the speed of business, I suppose, is the best way to characterize this?

Frye: Absolutely. Going back to SRM, another big win for us came as we were rolling out some of our Tier 1, mission-critical applications, and the business decided they wanted to test DR. They were going down the path of doing that the old-fashioned way, backing up databases, restoring databases, and taking days and weeks to do it.

We said, "We think we have a better way with SRM and our replication technologies. We have that data here. Why don't you let us clone that data and stand it up for you?" Literally, within 10 seconds, they had a replica of their data.

So we were enabling them to do their DR testing with SRM, on demand, when they wanted to do that, as well as giving them the benefit of doing the faster cloning and data refreshes. That was just a day-to-day, operational activity that they had no idea we could do for them.

It goes back to working with the business and letting them know what you can do. From a day-to-day, practical perspective, that was one of our biggest wins: going to specific business units and application owners and saying, "We think we have a better way. What do you think about this?" Once they got their hands on it, just looking at their faces was really a good moment for us.

Gardner: Sure, and of course, as an online retailer, having that dependability that DR provides has to be something that lets you sleep a little better at night.

Frye: Just a little bit.

Gardner: Let's talk a little bit about where you go now. Another thing that I often hear in the market is that the benefits of virtualization are ongoing. It's a journey that keeps providing milestones. It doesn't really end.

Do you have any plans around private cloud perhaps, getting more elasticity and fit-for-purpose benefits out of your implementations? Perhaps you're looking to bring other applications into the fold, or maybe you’ve got some other plans around delivering on business applications at lower cost.

So where do you go next with your virtualization payoff?

Private cloud

Leeper: We consider ourselves to have a private cloud on-site. My team will probably start laughing at me for using that term, but we do believe we have a very flexible and dynamic environment to deploy on-premises based on business requests, and we're pretty proud of that. It works pretty well for us.

Where we go next is all over the place. One of the things we're pretty happy about is the fact that we can think about things a little differently now than probably a lot of our peers, because of how migratory our workloads can be, given the virtualization.

We started looking into things like hybrid cloud approaches and the idea of maybe moving some of our workloads out of our premises, our own data facilities, to a cloud provider somewhere else.

For us, that's not necessarily the discussion around the classic public cloud strategies for scalability and some of those things. For us, it's about temporary space at times. If we are, say, moving an office where we have physical equipment on-premises, we want to be able to provide zero downtime.

It would be nice to be able to shut down their physical equipment, move their data, move their workloads up to a temporary spot for four or five weeks, and then bring it all back at some point, and let users never see an outage while they are working from home or on the road.

There are some interesting scenarios around DR for us in locations where we don't have real-time DR set up. For instance, we were looking into some issues in Japan a year or so ago, when Japan was unfortunately dealing with the earthquake and the tsunami's fallout on power.

We were looking at how we can possibly move our data out of the country for a period of time, while the infrastructure was stabilizing, specifically power, and then maybe bring it back when things settle down again.

Unfortunately, we weren't quite virtual on the edge yet there, but today we think that's something we could do. Thinking about how and where we move data to be at the right place at the right time is where we think the next big win is for us.

Then, we get into the application profiles that users are asking for and their ability to spin up environments very quickly just to test something. It gets IT out of being the roadblock to innovation. A lot of times, the business or part of our innovation teams come up with an idea for a concept, an application, or whatever it is. They don't have to wait for IT to fulfill their needs. The environments are right there for them.

So I challenge the teams routinely to think a little bit differently about how we've done things in the past, because our architecture is dramatically different than it was even two years ago.

Gardner: Well, great. We have to leave it there. We've been talking about how outerwear and sportswear maker Columbia Sportswear has used virtualization technologies and models to improve its business operations. We’ve also seen how better systems make for better applications that can deliver better business results.

So I’d like to thank our guests for joining this BriefingsDirect podcast. We have been here with Michael Leeper, Senior Manager of IT Engineering at Columbia Sportswear in Portland, Oregon. Thank you so much, Michael.

Leeper: Thank you.

Gardner: And we have been joined by Suzan Frye, Manager of Systems Engineering, also there at Columbia Sportswear. Thanks to you, Suzan.

Frye: Thanks, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks to our audience for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a sponsored BriefingsDirect podcast on how Columbia Sportswear has harnessed virtualization to provide a host of benefits for its business units. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Tuesday, November 08, 2011

Case Study: Southwest Airlines' Productivity Takes Off Using Virtualization and IT as a Service

Transcript of a BriefingsDirect podcast on how travel giant Southwest Airlines is using virtualization to streamline customer service applications.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you in conjunction with a recent VMworld 2011 Conference.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host throughout this series of VMware-sponsored BriefingsDirect discussions.

Our next VMware case study interview focuses on Southwest Airlines, one of the best-run companies anywhere, with some 35 straight years of profitability, and how "IT as a service" has been transformative for them in terms of productivity.

Here to tell us more about how Southwest is innovating and adapting with IT as a compelling strategic differentiator is Bob Young, Vice President of Technology and Chief Technology Officer at Southwest Airlines. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Welcome to BriefingsDirect, Bob.

Bob Young: Well, thank you very much. I appreciate the opportunity to speak with you.

Gardner: We have heard a lot about IT as a service, and unfortunately, a lot of companies face an IT organization that might be perceived as a little less than service-oriented, maybe even for some a roadblock or a hurdle. How have you at Southwest been able to keep IT squarely in the role of enablement?

Young: First off, as everybody should know already, Southwest Airlines is the customer service champ in the industry. Taking excellent care of our customers is just as important as filling our planes with fuel. It’s really what makes us go.

So as we take a look and try to be what travelers want in an airline, we are constantly looking for ways to improve Southwest Airlines and make it better for our customers. That's really where virtualization and IT as a service come into play. What we want is for people not to have to think, "Oh, this is IT versus something else."

People want to be able to get on Southwest.com, make a reservation, log on to their Rapid Rewards or our Loyalty Program, and they want to be able to do it when they want to do it, when they need to do it, from wherever they are. And it’s just great to be able to provide that service.

We provide that to them at any point in time that they want in a reliable manner. And that's really what it gets right down to -- to make the functions and the solutions that we provide ubiquitous so people don’t really need to think about anything other than, "I need to do this and I can do it now."

At your fingertips

Gardner: I travel quite a bit and it seems to me that things have changed a lot in the last few years. One of the nice things is that information seems to be at your fingertips more than ever. I never seem to be out of the loop now as a traveler. I can find out changes probably as quickly as the folks at the gate.

So how has this transfer of information been possible? How have you been able to keep up with the demands and the expectations of the travelers?

Young: One of the things that we like to do at Southwest Airlines is listen to our customers, listen to what their wants and desires are, and be flexible enough to be able to provide those solutions.

If we talk about information and the flow of information through applications and services, it really comes down to segmenting the core technical aspects so the customer and our employees don't need to think about them. When they want to know the flight at the gate, or whether a passenger is on a flight leg, and so on, they can go ahead and get that at any moment in time.

Another good example of that is earlier this year we rolled out our new Rapid Rewards 2.0 program. It represents a bold and leading way to look at rewards and giving customers what they want. With this program, we've been able to make it such that we can make any seat available on any flight for our Rapid Rewards customers for rewards booking, which is unique in the industry.

The other thing it does is allow our current and potential members flexibility in how they earn miles and points and how they use them for rewards -- being able to plan ahead and allowing them to save some significant points.

The same is true of how we provide IT as a service. What we want to be able to do is provide it whenever they want it, whenever they need it, at the right cost point, and to meet their needs. We've got some of the best customers in the world, and they like to do things for themselves. We want to allow them to do that, and to provide our employees the same capabilities.

If you've been on a Southwest flight, you've seen our flight crews, our in-flight team, really trying to have fun and trying to make Southwest a fun place to work and to be, and we just want to continue to support that in a number of different ways.

Gardner: You have also had some very significant challenges. You're growing rapidly. Your Southwest.com website is a crucial and growing part of your revenue stream. You've had mergers, acquisitions, and integrations as a result of that, and as we just discussed, the expectations of your consumers, your customers, are ramping up as well -- more data, more mobile devices, more ways to intercept the business processes that support your overall products and services.

So with all those requirements, tell me a little bit about the how. How in IT have you been able to create common infrastructures, reduce redundancy, and then yet still ramp up to meet these challenging requirements?

Significant volume

Young: As you all know, Southwest.com is a very large travel site, one of the largest in the industry -- not just airlines, but the travel industry as a whole. Over 80 percent of our customers and consumers book travel directly on Southwest.com. As you may know, we have fare sales a couple of times a year, and that can drive a significant volume.

What we've been able to do, and how we've met some of those challenges, comes down to a number of different VMware products. One of the core products is VMware itself, vSphere, vMotion, and so on, to provide that virtualization. You can get a 1-to-10 consolidation ratio, depending on which type of servers and blades you're using, which helps us on the infrastructure side of the house to maintain that and have the storage, physical, and electrical capacity in our data centers.

But it also allows us, as we're moving, consolidating, and expanding these different data centers, to be able to move that virtual machine (VM) seamlessly between points. Then, it doesn’t matter where it’s running.

That gives us the capacity. So if we have a fare sale and I need to add capacity for some of our services, it gives us and the team that runs the infrastructure the ability to bring up new services on new VMs seamlessly. It plugs right into how we're doing things, so that internal cloud allows us not to experience blips.

It's been a great add for us from a capacity management perspective and being able to get the right capacity, with the right applications, at the right time. It allows us to manage that in such a way that it’s transparent to our end-users so they don’t notice any of this is going on in the background, and the experience is not different.
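
[Editor's note: Young doesn't detail the mechanics beyond naming vSphere and vMotion, but the seamless move he describes, a live vMotion of a running VM, is a single task call against vCenter. A minimal pyVmomi sketch; the host and VM names are placeholders.]

```python
# Minimal sketch: live-migrate (vMotion) a running VM to another host to
# rebalance capacity, e.g. ahead of a fare-sale traffic spike. The guest keeps
# running during the move, so end users see no outage. Names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine, vim.HostSystem], True)
    objs = {obj.name: obj for obj in view.view}  # index VMs and hosts by name
    view.Destroy()

    vm = objs["web-frontend-07"]                 # hypothetical VM
    target = objs["esx-host-12.example.com"]     # hypothetical target host
    WaitForTask(vm.MigrateVM_Task(
        host=target,
        priority=vim.VirtualMachine.MovePriority.highPriority))
finally:
    Disconnect(si)
```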

Gardner: I understand that you're at a fairly high level of virtualization. Is that a place where you plan to stay? Are you going to pursue higher levels? Where do you expect to go with that?

Young: I'll give you a little bit of background. We started our virtualized environments about 18 months ago. We went from a very small amount of virtualization to what we coined our Server 2.0 strategy, which was really the combination of commodity-based hardware blades with VMware on that.

And that allowed us, in the first and second quarter of last year, to grow from several hundred VMs to several thousand, which is where we're at today in the production environment. And if you talk about production, development, and test, production is just one of those environments.

It has allowed us to scale very rapidly without having to add a thousand physical servers. And it has been a tremendous benefit for us in managing our power, space, and cooling in the data center, along with allowing our engineers who do the day-to-day work to have a single way to manage, deploy, and move things around, even automatically. They don’t have to mess with that anymore; VMware just takes care of it through the different products that are part of the VMware suite.

Gardner: And your confidence, has it risen to the level where you're looking at 70, 80, 90 percent or more virtualization? Where do you expect that journey to end?

Ready for the evolution

Young: I would love to be at 100 percent virtualized. That would be fantastic. I think unfortunately we still have some manufacturers and software vendors -- and we call them vendors, because typically we don’t say partners -- who decide they are not going to support their software running in the virtualized environment. That can create problems, especially when you need to keep some of those systems up 24 x 7, 365, with 99.95 percent availability.

We're hoping that changes, but the goal would be to move as much as we can, because if I take a look at virtualization, we are kind of our own internal private cloud. What that’s really doing is getting us ready for the evolution that’s going to happen over the next 5, 7, or 10 years, where you may have applications and data deployed out in a cloud, a virtual private cloud, or a public cloud if the security becomes good enough, where you've got to bring all that stuff together.

If you need to have huge amounts of capacity and two applications are not co-located that need to talk back and forth, you've got to be much more efficient on the calls and the communications and make that seamless for the customer.

This is giving us the platform to start learning more and start developing those solutions that don’t need to be collocated in one or two data centers, but can really be pushed wherever it makes sense. That could be wherever the most efficient data center is from a green technology perspective: the one that uses the least electricity and cooling power, that runs on alternative energy, or whatever makes sense at the time of year.

That is a huge add and a huge win for us in the IT community, to be able to start utilizing some of that virtualization even across physical locations.

Gardner: So as you've ramped up on your virtualization, I imagine you have been able to enjoy some benefits in terms of capital expense, hardware, and energy. How about in some of the areas around configuration management and policy management? Is there a centralization feature to this that also is paying dividends?

Young: A huge cornerstone of the suite of tools that we've been able to get through VMware is being able to deploy custom solutions, and even some of the off-the-shelf solutions, on a standard platform: standard operating systems, standard configurations, standard containers for the web, and so on. It allows us to deploy that stuff within minutes, whereas it used to take engineers manually configuring each thing separately. That’s been a huge savings.

The other thing is, once you get the configuration right and you have it automated, you don’t have to worry about human missteps. Those are going to happen otherwise, and you've got to go back and redo something. That elimination of error, and the speed at which we can work, is helping. As you expand your server footprint and the number of VMs and servers you have, you can actually do more with the same number of staff or fewer.
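
[Editor's note: As an illustration of that configuration discipline, not Southwest's actual tooling: once a standard configuration is expressed in code, a short script can assert it across a whole tier instead of trusting hands-on changes. A pyVmomi sketch; the naming convention and sizing are assumptions.]

```python
# Minimal sketch: enforce a standard sizing on every VM whose name marks it as
# part of a given tier, so configuration comes from code rather than manual
# steps. The "web-" convention and 2 vCPU / 4 GB standard are hypothetical;
# resizing assumes the VM is powered off or has CPU/memory hot-add enabled.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

STANDARD = vim.vm.ConfigSpec(numCPUs=2, memoryMB=4096)  # hypothetical standard

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if not vm.name.startswith("web-"):        # hypothetical tier convention
            continue
        hw = vm.config.hardware
        if hw.numCPU != STANDARD.numCPUs or hw.memoryMB != STANDARD.memoryMB:
            print(f"Reconfiguring {vm.name}: {hw.numCPU} vCPU / {hw.memoryMB} MB")
            WaitForTask(vm.ReconfigVM_Task(spec=STANDARD))
    view.Destroy()
finally:
    Disconnect(si)
```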

Gardner: I wonder how you feel about desktop virtualization. Another thing that we've seen in the marketplace is that those who make good use of server virtualization are in a better position to take that a step further and extend it out through PC-over-IP and other approaches to delivering the whole desktop experience. Is that something that you're involved with or experimenting with? How do you feel about that?

Young: This has been going on and off in the IT industry for the past 10-15 years, if you talk about Net PCs and some of the other things. What’s really driven us to take a look at it is that we can control security on virtual desktops very cleanly and very quickly, and deliver that as a great service.

New mobile devices

The other thing that’s leading to this, beyond what we talked about with security, is the plethora of brand-new mobile devices -- iPhones, iPads, Android devices, the Galaxy. HP has a new device. RIM has a new device. We need to be able to deliver our services in a more ubiquitous manner. The virtual desktop allows us to deliver some of those where I don’t need to control the hardware. I just control the interface, which protects our systems, and it’s really pretty neat.

I was on one of my devices the other day and was able to go in via a virtual desktop that was set up to use some of the core systems without having all that stuff loaded on my machine, and that was via the Internet. So it worked out phenomenally well.

Now, there are some issues you have to work through, depending on whether you're doing colocation and which facility you're in, but you can easily get through some of that with the right virtualization setup and networking.

Gardner: So you have come an awfully long way. You say 18 months ago you were only embarking on virtualization, but now you're already talking about hybrid clouds, mobile enablement, and wide area network optimization. How is it that you have been able to bite off so much so soon? A lot of people would be intimidated and do more of that crawl-walk-run, with the emphasis on the crawl and walk parts.

Young: Well, I am very fortunate. I might come up with the vision of where we want to go and where IT is going, and I am very fortunate to have some phenomenal engineers working on this, working through all the issues and all the little challenges that pop up along the way in order to do it.

It’s what our team would say is pretty cool technology, and it gets them excited about doing something new and different as well. I've got a couple of managers -- Tim Pilson, Mitch Mitchell -- and their teams, and some really good people.

Jason Norman is one of the people, and Doug Rowland also has been very involved with getting this rolled out. It’s amazing what a core set of just a few people can do with the right technology, the right attitude, and passion to get it done. I've just been very impressed with their, what we call warrior spirit here at Southwest Airlines -- just not giving up, doing what it takes to get it done, and being able to utilize that with some of the VMware products.

It extends beyond that team. Our development teams use Spring and some of the other VMware products as well. If we run into an issue, it’s as if VMware, on both the development side of the house and the product side of the house, is really part of our extended team. They take it, they listen, and they come back with a fix and a patch in literally a day or two, rather than some other vendors with whom you might wait weeks or months, and it might never make it to you.

So I really have to give credit to the teams that are working with me, my team who gets it done, and VMware for providing such a great product that the engineers want to use it, can use it, and can understand it, and make huge amounts of progress in a very short period of time.

Gardner: Well, great. It’s a very interesting and compelling story. We've been talking with Southwest Airlines about how they are continuing to innovate and adapt, using IT as a compelling strategic differentiator.

Our guest has been Bob Young, Vice President of Technology and Chief Technology Officer at Southwest Airlines. Thanks so much, Bob.

Young: Well, thank you.

Gardner: I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of VMware-sponsored BriefingsDirect discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how travel giant Southwest Airlines is using virtualization to streamline customer service applications. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.


Sunday, March 29, 2009

HP Advises Strategic View of Virtualization So Enterprises Can Dramatically Cut Costs, Gain Efficiency and Usher in Cloud Benefits

Transcript of a BriefingsDirect podcast on virtualization strategies and best practices with Bob Meyer, HP's worldwide virtualization lead.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we present a sponsored podcast discussion on the business case and economic rationale for virtualization implementation and best practices.


Virtualization has become more attractive to enterprises as they seek to better manage their resources, cut total costs, reduce energy consumption, and improve the agility of their data centers and IT operations. But virtualization is more than just installing hypervisors. The effects and impacts of virtualization cut across many aspects of IT operations, and the complexity of managing virtualized IT runtime environments can easily slip out of control.

In this podcast, we're going to examine how virtualization can be applied as a larger process and a managed IT undertaking with sufficient tools for governance that allow for rapid, but reasoned, virtualization adoption. We'll show how the proper level of planning and management can more directly assure a substantive economic return on the investments enterprises are making through virtualization.

The goal is to do virtualization right and to be able to scale the use of virtualization in terms of numbers of instances. We also want to extend virtualization from hardware to infrastructure, data, and application support, all with security, control, visibility, and lower risk, and while also helping to make the financial rationale ironclad.

To help provide an in-depth look at how virtualization best practices make for the best economic outcomes, we're joined by Bob Meyer, the worldwide virtualization lead in Hewlett-Packard’s (HP) Technology Solutions Group (TSG). Welcome to the show, Bob.

Bob Meyer: Thank you very much, Dana.

Gardner: Virtualization is really becoming quite prominent, and we're even seeing instances now where the tough economic climate is accelerating the use and adoption of virtualization. This, of course, presents a number of challenges.

First, could you provide some insight, from HP’s perspective, of how you see virtualization being used in the market now, and how that perhaps has shifted over the past six months or so?

Meyer: When we talk about virtualization -- obviously it’s been around for quite a long time -- it's typically the virtualization of Windows servers that people think of first. For a couple of years now, that’s been the hot value proposition within IT.

The allure there is that when you consider the percentage of budget spent on data center facilities, hardware, and IT operations management, virtualization can have a profound effect on all of these areas.

Moving off the fence

For the last couple of years, people have realized the value in terms of how it can help consolidate servers or how it can help do such things as backup and recovery faster. But, now with the economy taking a turn for the worse, anyone who was on the fence, who wasn’t sure, who didn’t have a lot of experience with it, is now rushing headlong into virtualization. They realize that it touches so many areas of their budget, it just seems to be a logical thing to do in order for them to survive these economic times and come out a leaner, more efficient IT organization.

The change that we see is that previously virtualization was for very targeted use, and now it’s gone to virtualization everywhere, for everything: "How much can I put in, and how fast can I put it in?"

Gardner: When you move from a tactical orientation to exploit virtualization at this more strategic level, that requires different planning and different methodologies. Tell us what that sort of shift should mean.

Meyer: To be clear, we're not just talking about virtualization of servers. We're talking about virtualizing your infrastructure -- servers, storage, network, and even clients on the desktop. People talk about going headlong into virtualization. It has the potential to change everything within IT and the way IT provides services.

The potential is that you can move your infrastructure around much faster. You can provision a new server in minutes, as opposed to a few days. You can move a virtual machine (VM) from one server to another much faster than you could before.

When you move that into a production environment, if you're talking about it from a services context, a server usually has storage attached to it. It has an IP address, and just because you can move the server around faster doesn’t mean that the IP address gets provisioned any faster or the storage gets attached any faster.

So, when you start moving headlong into virtualization in a production environment, you have to realize that now these are part of services. The business can be affected negatively, if the virtualized infrastructure is managed incompletely or managed outside the norms that you have set up for best practices.

Gardner: I guess it also makes sense that the traditional IT systems-management approaches also need to adjust. If you had standalone stacks, each application with its own underlying platform, physical server, and directly attached data and little bits of middleware for integration, you had a certain setup for managing that. What’s different about managing the virtualized environments, as you are describing them?

Meyer: There are a couple of challenges. First of all, one of the blessings of virtualization is its speed. That’s also a curse in this case, because in traditional IT environments, you set up things like a change advisory board and, if you did a change to a server, if you moved it, if you had to move to a new network segment, or if you had to change storage, you would put it through a change advisory board. There were procedures and processes that people followed and received approvals.

In virtualization, because it’s so easy to move things around and it can be done so quickly, the tendency is for people to say, "Okay, I'm going to ignore that best practice, that governance, and I am going to just do what I do best, which is move the server around quickly and move the storage around." That’s starting to cause all sorts of IT issues.

The other issue is not just the mobility of the infrastructure, but also the visibility of that infrastructure. A lot of the tools that many people have in place today can manage either physical or virtual environments, but not both. What you're heading for when that’s the case is setting up dual management structures. That’s never good for IT. You're just heading for service outages and disruptions when you go in that direction.

Gardner: It sounds like some safeguards are required for managing and allowing automation to do what it does well, but without it spinning out of control and people firing off instances of applications and getting into some significant waste or under-utilization, when in fact that’s what you are trying to avoid.

Shifting the cost

Meyer: Certainly. A lot of what we're seeing is the initial gains of virtualization. People came in and they saw these initial gains in server consolidation. They went from, let’s say, 12 physical boxes down to one physical box with 12 virtual servers. The initial gains get wiped out after a while, and people push the cost from hardware to management, because it becomes harder to manage these dual infrastructures.

Typically, big IT projects get a lot of the visibility, so the initial virtualization projects probably get handled with proper procedures. As you come back to day-to-day operations of the virtualized environment, that’s where you start to lose the headway that you gained originally.

That might be from non-optimized infrastructure that is not made to move as fast or to be instrumented as fast as virtualization allows it to be. It could be from management tools that don’t support virtual and physical environments, as we mentioned before. It can even be governance. It can be the will of the IT organization to make sure that they adopt standards that they have in place in this new world of moving and changing environments.

Gardner: For a lot of organizations, with many IT aspects or approaches these days, security and compliance need to be brought into the picture. What does this flexible virtualization capability mean, if you're in a business that has strict compliance and security oversights?

Meyer: Again, it produces its own set of challenges for the reasons similar to what we talked about before. Compliance has many different facets. If you have a service infrastructure that’s in compliance today in a physical environment, it might take days to move that around, and to change the components. People are likely to have much more visibility. That window of change tends to take a lot longer.

With virtualization, because of the speed, the mobility, and the ease of moving things around, things can come out of compliance faster. They could be out of regulatory compliance. They could be out of license compliance, because it’s much easier to spin up new instances of virtual machines and much harder to track them.

So, the same blessing of speed and mobility and ease of instrumentation can take a hit on the compliance and security side as well. It’s harder to keep up with patches. A lot of people do virtual machines through images. They'll create a virtual machine image, and once that image is created, that becomes a static image. You deploy it on one VM and then another and then another. Over time, patches come out, and those patches might not be deployed to that particular image. People are starting to see problems there as well.
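
[Editor's note: One inexpensive guard against the image drift Meyer describes is simply reporting on it. As a stand-in for a full patch audit, this hedged pyVmomi sketch flags VMs whose VMware Tools have fallen behind, one visible symptom of a stale base image; it reads only vCenter metadata, not the guest OS.]

```python
# Minimal sketch: inventory report flagging VMs whose VMware Tools are out of
# date or missing -- one observable symptom of VMs deployed from stale images.
# A real patch audit would query the guest OS; this only reads vCenter data.
# Host and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        # toolsStatus is e.g. toolsOk, toolsOld, toolsNotInstalled; it can be
        # None for powered-off VMs, which this report also flags for review.
        status = vm.summary.guest.toolsStatus
        if status != vim.vm.GuestInfo.ToolsStatus.toolsOk:
            print(f"{vm.name}: toolsStatus={status}")
    view.Destroy()
finally:
    Disconnect(si)
```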

Gardner: Just to throw another log on the fire of why this is a complex undertaking, we're probably going to be dealing with hybrid environments, where we have multiple technologies, and multiple types of hypervisors. As you pointed out, the use of virtualization is creeping up beyond servers, through infrastructure storage, and so forth. What’s the hurdle, when it comes to having these mixed and hybrid environments?

Mixed environments are the future

Meyer: That’s a reality that we are going to be dealing with from here on out. Everybody will have a mix of virtual and physical environments. That’s not a technology fad. That’s just a fact. There will be services -- cloud computing, for example -- that will extend that model.

The reality is that the world we live in is both physical and virtual, when it comes to that infrastructure. When you start looking at it from that perspective, you have to ask, "Do I have the right solutions in place from an infrastructure perspective, from a management perspective, and from a process perspective to accommodate both environments?"

The danger is having parallel management structures within IT. It does no one any good. If you look at it as a means to an end, which virtualization is, the end of all this is more agile and cost-effective services and more agile and cost-effective use of infrastructure.

Just putting a hypervisor on a machine doesn’t necessarily get you virtualization returns. It allows you to virtualize, but it has to be put on the construct of what you're trying to do. You're trying to provide IT-enabled services for the business at better economies of scale, better agility, and low risk, and that’s the construct that we have to look at.

Gardner: So, if we have a strategic requirement set to prevent some of these blind alleys and pitfalls, then we need to have a strategic process and management overview. This is something that cuts across hardware, software, management, professional conduct and culture, and organization. How do you get started? How do you get to the right level of doing this with that sort of completeness in mind?

Meyer: That’s the problem in a nutshell right there. The way virtualization tends to come in is unique, because it's a revolutionary technology that has the potential to change everything. But, because of the way it comes in, people tend to look at it from a bottom-up perspective. They tend to look at it from, "I have this hypervisor. This hypervisor enables me to do virtual machines. I will manage the hypervisor and the virtual machines, differently than other technologies."

Service-oriented architecture (SOA) and Web services aren't able to creep into an IT environment. They have to come from a top-down perspective. At the least, somebody has to mandate implementing that architecture. So, there's more of a strategy involved.

When we look back at virtualization, the technology is no different than other technologies in the sense that it has to be managed from a strategic perspective. You have to take that top-down look and say, "What does this do for me and for the business?"

At HP, this is where organizations come to us and say, "We have virtualization in our test and development environment, and we are looking to move it into production. What’s the best way to do that?" We come in and assess what they are looking to do, help them roll that up into what’s the bigger picture, what are they trying to get out of this today, and what do they want to get out of this a year from now.

We map out what technologies are in place, how to mix that, how to take the hypervisor environment and make that part of the overall operational management structure, before they move that into the operational environment.

If somebody's already using it and has a number of applications or services they're ready to virtualize, they're already experiencing some of the pain. So, that’s a little bit more prescriptive. Somebody will come in and say, "I'm experiencing this. I'm seeing my management cost rise." Or, "When a service goes down, it’s harder for me to pinpoint where it is, because my infrastructure is more complex."

This is where typically we'll have a spot engagement that then leads to a broader conversation to say, "Let’s fix your pain today, but let’s look at it in the broader context of a service." We have a set of services to do that.

There's a third alternative as well. Sometimes people come to us. They realize the value of virtualization, but they also realize that they don’t have the expertise in house or they don’t have the time to develop that longer-term strategy for themselves. They can also come to HP for outsourcing that virtual and physical environment.

Gardner: It sounds as if the strategic approach to virtualization is similar to what we've encountered in the past, when we've adopted new technologies. We have had to take the same approach of let’s not go just bottom up. Let’s look strategically. Can you offer some examples of how this compares to earlier IT initiatives and how taking that solution approach turned out to be the best cost-benefit approach?

Potential to change everything

Meyer: As an example from an earlier technology perhaps, I always look at client-server computing. When that came out, it had the potential to change everything. If you look at computing today, client-server really did change the way that applications and services were provided.

If you look at the nature of that technology, it required rewriting code and complete architectures. The nature of the technology lent itself to have that strategic view. It was deployed and, over time, a lot of the applications that people were using went to client-server and tier architecture. But, that was because the technology lent itself to that.

Virtualization, in that sense, is not very different. It is a game changer from a top-down perspective. The value you get when you take that top-down perspective is that you have the time to understand that, for example, "I have a set of management tools in place that allow me to monitor my servers, my storage, my network from a service perspective, and they will let me know whether my end users are getting the transaction rates they need on their Web services."

Gardner: Let me just explore that a little bit more. Back when client-server arrived, it wasn’t simply a matter of installing the application on the server and then installing the client on the PCs. Suddenly, there were impacts on the network. Then, there were impacts on the size of the server and capabilities in maintaining simultaneous connections, which required a different approach to the platform.

Then, of course, there was a need for extending this out to branch offices and for wider area networks to be involved. That had a whole other set of issues about performance across the wide area network, the speed of the network, and so on -- a ripple effect. Is that what we're seeing as well with virtualization?

Meyer: We do, absolutely. With the bottom-up approach, people look at it from a hypervisor and a server perspective. But, it really does touch everything that you do, and that everything is not just from a hardware perspective. It not only touches the server itself or the links between the server, the storage, and the network, but it also touches the management infrastructure and the client infrastructure.

So, even though it’s easier to deploy and it can seep in, it touches just about everything. That’s why we keep coming back to this notion of saying that you need to take a strategic look at it, because the more you deploy, the more it will have that ripple effect, as you call it, on all the other systems within IT, and not just a server and hypervisor.

Gardner: Tell us about HP’s history with virtualization. How long has HP been involved with it, and what’s its place and role in the market right now?

Meyer: HP has been doing virtualization for a long time. When most people think of virtualization, they tend to think of hypervisors, and they tend to think of it on x86 or Windows servers. That's really what has made it popular. But HP has offered virtualization on its own server platforms for quite a while, and we've been doing virtualization on networks for quite a while as well. So, we're not newcomers to the game.

When it comes to where we play today, there are companies that are experts on the x86 world, and they're providing hypervisors. VMware, Citrix, and Microsoft are really good at what they do. HP doesn’t intend to do that.

Well-managed infrastructure

What we intend to do is take that hypervisor and make sure that it's part of a well-managed infrastructure, a well-managed service, well-managed desktops, and bringing virtualization into the IT ecosystem, making it part of your day-to-day management fabric.

That's what we do with hardware that's optimized out of the box for virtualization. You can wire your hardware once and, as you move your virtual components around, the hardware takes care of the rewiring -- the IP network, the IP addresses, and the storage.

We handle that with IT operations and management offerings that have one solution to heterogeneously manage virtual and physical environments. We do that with client architecture, so that you can extend virtualization onto the desktops, secure the desktops, and take a lot of the cost out of managing them. If you look at what HP is about, it’s taking that hypervisor and actually delivering business value out of a virtual environment.

Gardner: Of course, HP is also in the server hardware business. Does that provide you a benefit in virtualization? Some conventional thinking might be, well gee, why would the hardware people want to increase utilization? Aren’t they in the business of selling more standalone servers?

Meyer: Certainly, we're in the business of selling hardware as well, but the benefit comes in many different areas. Actually, more people today are running virtualization on HP servers than on any other platform out there. So, virtualization is an area that allows us to be more creative and more innovative in a server environment.

One of the hottest areas right now in server growth is blade servers, where you have a bladed enclosure that's made specifically for virtualization. It allows you to lower the cost of power and cooling, reduce the floor space of the data center, and move your virtual components around much faster. So, where we might see standalone server sales decline in some areas, we're certainly seeing the uptake in others. It's certainly an opportunity for us.

Gardner: So, helping your clients cut the total cost of computing is what’s going to keep you in the hardware business in the long run?

Meyer: That’s exactly right. If you look at the overall benefits, the immediate allure of virtualization is all about the cost and the agility of the service. If you look at it from the bigger picture, if you get virtualization right, and you get it right from a strategic perspective, that’s when you start to feel those gains that we were talking about.

Data centers are very expensive. There's floor space in there. Power and cooling are very expensive. People are talking about that. If we help them get that right and knock the cost out of the infrastructure, the management, the client architectures, and even insourcing or outsourcing, that’s beneficial to everyone.

What are the payoffs?

Gardner: We've talked about how virtualization is a big deal in the market and how it's being driven by economic factors. We've looked at how a tactical, knee-jerk approach can lead to the opposite effect of higher expense and more complexity. We've recognized that taking an experienced, methodical, strategic approach makes a lot of sense.

Now, what is it that we can get, if we do this right? What are the payoffs? Do you have examples of companies you work with, or perhaps within HP itself? I know you guys have done an awful lot in the past several years to refine and improve your IT spend and efficiency. What are the payoffs if you do this right?

Meyer: There are a number of areas. You can look at it in terms of power and cooling. So right off the bat, you can save 50 percent of your power and cooling, if you get this right and get an infrastructure that works together.

From a client-computing perspective, you can save 30 percent off the cost of client computing and the management of your client endpoints, if you virtualize that infrastructure.

If you look at outsourcing the infrastructure, the returns are manifold there, because you're not just taking out the cost of running it. You're actually leveraging the combined knowledge of thousands and thousands of people who understand how to run the infrastructure, from their experience of doing multiple outsourcing engagements.

So, we see particular gains in power and cooling, as I mentioned before, and the cost of administration. We'll see significant gains in server-admin ratios. We'll see a threefold increase in the number of servers that people can manage.

If you look across the specific examples, they really do touch a lot of the core areas that people are looking at today -- power and cooling, the cost of maintaining and instrumenting that infrastructure, and the cost of maintaining desktops.

Gardner: Doesn’t this help too, if you have multiple data centers and you're trying to whittle that down to a more efficient, smaller number? Does virtualization have a role in that?

The next generation

Meyer: Absolutely. Actually, throughout the data center, virtualization is one of those key technologies that help you get to that next generation of the consolidated data center. If you just look at it from a consolidation standpoint, a couple of years ago, people were happy to be consolidating five or six servers into one. When you get this right, and do it on the right hardware with the right services setup, 32 to 1 is not uncommon -- a 32-to-1 consolidation rate.

If you think about what that equates to, that's 32 physical servers collapsing into one -- less floor space, less power and cooling. So, when you get it right, you go from, "Yes, I can consolidate five to one, six to one, or 12 to one" to "I'm consolidating, and I'm really having a big impact on the business, because I'm consolidating at 24-to-1 or 32-to-1 ratios." That's really where the payoff starts coming in.
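To make that consolidation arithmetic concrete, here is a back-of-the-envelope sketch in Python. The server count and per-server power draw are illustrative assumptions, not figures from this discussion.

    # Back-of-the-envelope consolidation math; all inputs are assumed.
    PHYSICAL_SERVERS = 320        # estate before consolidation (assumption)
    RATIO = 32                    # the 32-to-1 rate discussed above
    WATTS_PER_SERVER = 400        # assumed average draw per physical box

    hosts_after = PHYSICAL_SERVERS // RATIO
    removed = PHYSICAL_SERVERS - hosts_after
    power_saved_kw = removed * WATTS_PER_SERVER / 1000.0

    print(f"{PHYSICAL_SERVERS} servers -> {hosts_after} hosts; "
          f"{removed} fewer boxes, roughly {power_saved_kw:.0f} kW saved")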

Gardner: I suppose that while you are consolidating, you might as well look at what applications on which platforms are going to be sunset. So, there's a modernization impact. Virtualization helps you move certain apps out to pasture, maybe reusing the logic and the data in the future. What’s the modernization impact that virtualization can provide?

Meyer: Virtualization is absolutely an enabler of that in a number of different ways. Sometimes, when people are modernizing apps, they go to our outsourcing business and say, "I'm modernizing an application and I need some compute capacity. Do you have it?" They can tap into our compute capacity in a virtual way to provide a service, while they're moving, updating, or modernizing an architecture, and the end user doesn’t notice the difference. There's a continuity aspect there, as they provide the application.

There are also the backup and recovery aspects of it. There are a lot of safeguards that come in while you are modernizing applications. In this case, virtualization is an enabler for that. It allows that move to happen. Then, as that application moves onto more up-to-date or more modern architecture, it allows you to quickly scale up or scale down the capacity of that application. Again, the end user experience isn't diminished.

Gardner: So, these days when we are not just dealing with the dollars-and-cents impacts of the economy, we are also looking at dynamic business environments, where there are mergers, acquisitions, bankruptcies, and certain departments being sloughed off, sold, or liquidated. It sounds like the strategic approach to virtualization has a business outcome in that environment too.

Meyer: That's really where the flip side of virtualization comes in -- the automation side. Virtualization allows you to quickly spin up capacity and do a series of other things, but automation allows you to do that at scale.

If you have a business that needs to change seasonally, daily, weekly, or at certain times, you need to make much more effective use of that compute capacity. We talk a lot about cost, but it’s automation that makes it cost effective and agile at the same time. It allows you to take a prescribed set of tasks related to virtualization, whether that’s moving a workload, updating a new service, or updating an entire stack and make that happen much faster and at much lower cost, as well.

Gardner: One last area, Bob. I want to get into the benefits of managed virtualization as insurance for the future. You mentioned cloud computing a little earlier. If you do this properly, you start moving toward what we call on-premises or private clouds. You create a fabric of storage, or a fabric of application support, or a fabric of platform infrastructure support. That’s where we get into some of those even larger economic benefits.

This is a vision for many people now, but doing virtualization right seems to me like a precursor to being able to move toward that. You might even be able to start employing SOA more liberally, and then take advantage of external clouds, and there is a whole vision around that. Am I correct in assuming that virtualization is an initial pillar to manage, before you're able to start realizing any of that vision?

Meyer: Certainly. The focus right now is, "How does it save me money?" But the longer-term benefit, the added benefit, is that at some point the economy will turn around, as it always does. That will allow you to expand your services and really look at some of the newer ways to offer them. We mentioned cloud computing before. It will be about coming out of this downturn more agile, more adaptable, and more optimized.

No matter where your services are going -- whether you're going to look at cloud computing or enacting SOA now or in the near future -- it has that longer term benefit of saying, "It helps me now, but it really sets me up for success later."

We fundamentally believe, and CIOs have told us a number of times, that virtualization will set them up for long-term success. They believe it's one of those fundamental technologies that will set their companies apart as winners going into any economic upturn.

Gardner: So, making virtualization a core competency, sooner rather than later, puts you at an advantage across a number of levels, but also over a longer period of time?

Meyer: Yes. Right now everybody is reacting to an economic climate. The CIOs who are acting with foresight, looking ahead and asking, "Where will this take me?" are the ones who are going to be successful, as opposed to the people who are just reacting to the current environment and looking to cut and slash. Virtualization has benefits that allow you to save and optimize now, but it also sets you up to rebound whenever the economic recovery comes.

Gardner: Well, great. We've been talking with Bob Meyer, the worldwide virtualization lead in HP’s Technology Solutions Group. We've been examining the effects and impacts of virtualization adoption and how to produce the best businesses and financial outcomes from your virtualization initiatives. I want to thank you, Bob, for joining us. It's been a very interesting discussion.

Meyer: Thank you for the opportunity.

****Access More HP Resources on Virtualization****

Gardner: We also want to thank our sponsor, Hewlett-Packard, for supporting this series of podcasts. This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to BriefingsDirect. Thanks and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Transcript of BriefingsDirect podcast on virtualization strategies and best practices with Bob Meyer, HP's worldwide virtualization lead. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Friday, November 14, 2008

Interview: rPath’s Billy Marshall on How Enterprises Can Follow a Practical Path to Virtualized Applications

Transcript of BriefingsDirect podcast on virtualized applications development and deployment strategies as on-ramp to cloud computing.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: rPath.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on proper on-ramps to cloud computing, and how enterprises can best prepare to bring applications into a virtual development and deployment environment.

While much has been said about cloud computing in 2008, the use of virtualization is ramping up rapidly. Moreover, enterprises are moving from infrastructure virtualization to application-level virtualization.

We're going to look at how definition and enforcement of policies helps ensure conformance and consistency for virtual applications across their lifecycle. Managing virtualized applications holistically is an essential ingredient in making cloud-computing approaches as productive as possible while avoiding risk and onerous complexity.

To provide the full story on the virtualized applications lifecycle, its methods and benefits, I'm joined by Billy Marshall, the founder of rPath, as well as its chief strategy officer. Welcome to the show, Billy.

Billy Marshall: Thanks, Dana, great to be here.

Gardner: There is a great deal going on with technology trends, the ramp up of virtualization, cloud computing, services-oriented architecture (SOA), use of new tools, light-weight development environments, and so forth. We're also faced unfortunately with a tough economic climate, as a global recession appears to be developing.

What’s been interesting for me is that this whole technological trend-shift and this economic imperative really form a catalyst to a transformative IT phase that we are entering. That is to say, the opportunity to do more with less is really right on the top of the list for IT decision-makers and architects.

Tell me, if you would, how some of these technology benefits and the need to heighten productivity fit and come together.

Marshall: Dana, we've seen this before, and specifically I have seen it before. I inherited the North America sales role at Red Hat in April of 2001, and of course shortly thereafter, in September of 2001, we had the terrible 9/11 situation that changed a lot of the thinking.

The dot-com bubble burst, and it turned out to be a catalyst for driving Linux into a lot of enterprises that previously weren't thinking about it. They began to question their assumptions about how much they were willing to pay for certain types of technologies -- in this case, Unix technology. In most cases they were buying from Sun, and that became the subject of a great deal of scrutiny. Much of it was replaced in the period from 2001 to 2003 and into 2004 with Linux technology.

We're once again facing a similar situation now, where enterprises specifically are taking a very tough look at their data center expenditures and the expansions they're planning. I don't think there's any doubt in people's minds that they're getting good value out of IT, and a lot of these businesses are driven by information technology.

At the same time, this credit crunch is going to have folks look very hard at large-scale outlays of capital for data centers. I believe that will be a catalyst for folks to consider a variable-cost approach to using infrastructure as a service, and perhaps platform as a service (PaaS). All these things roll up under the notion of cloud, as it relates to being able to get it when you need it, get it at variable cost, and get it on demand.

Gardner: Obviously, there's a tremendous amount of economic value to be had in cloud computing, but some significant risks as well. Virtualization increases the utilization of servers and provides the dynamic ability to fire up instances of platforms, runtimes, and applications with a stack beneath them. It allows companies to deploy more applications with lower upfront capital expenditure and also to cut their operating costs. Administrators and architects can then manage many more applications, if it's automated and governed properly. So let's get into this notion of doing it right.

When we have more and more applications and services, there is, on one side, a complexity problem. There is also this huge utilization benefit. What's the first step in getting this right in terms of a lifecycle and a governance mentality?

Marshall: Let's talk first about why utilization was a problem without virtualization. Let's talk about the old architecture for a minute, and then we can talk about what the benefits of a new architecture might be, if done correctly.

Historically, in the enterprise you would get somewhere between 15 and 18 percent utilization for server applications. So, there are lots of cycles available on a machine and you may have two machines running side-by-side, running two very different workloads, whose cycles are very different. Yet, people wouldn't run multiple applications on the same server setup in most cases, because of the lack of isolation when you are sharing processes in the operating system on the server. Very often, these things would conflict with one another.

Maintenance required for one would conflict with the other. It's just a very challenging architecture for running multiple things on the same physical, logical host. Virtualization provides isolation by giving each application its own logical server, its own virtual server.

So, you could put multiples of them on the same physical host and you get much higher utilization. You'll see folks getting on the order of 50, 70, or 80 percent utilization without any of the worries about the conflicts that used to arise when you tried to run multiple applications sharing processes on the same physical host with an operating system.

That's the architecture we're evolving towards, but if you think about it, Dana, what virtualization gives you from a business perspective, other than utilization is an opportunity to decouple the definition of the application from the system that it runs on.

Historically, you would install an application onto the physical host with the operating system on it. Then, you would work with it and massage it to get it right for that application. Now, you can do all that work independent of the physical host, and then, at run-time, you can decide where you have capacity that best meets needs of the profile of this application.

Most folks have simply gone down the road of creating a virtual machine (VM) with their typical, physical-host approach, and then doing a snapshot, saying, "Okay, now I worry about where to deploy this."

In many cases, they get locked into the hypervisor or the type of virtualization they may have done for that application. If they were to back up one or two steps, they'd say, "Boy, this really does give me an opportunity to define this application in a way that, if I wanted to run it on Amazon's EC2, I probably could, but I could also run it in my own data center."

Now, I can begin sourcing infrastructure a little more dynamically, based upon the load that I see. Maybe I can spend less on the capital associated with my own data center, because, with my application defined as this independent unit, separate from the physical infrastructure, I'll be able to buy infrastructure on demand from Amazon, Rackspace, GoGrid -- these folks who are now offering up virtualized clouds of servers.

Gardner: I see. So, we need to rethink the application, so that we can run that application on a variety of these new sourcing options that have arisen, be they on premises, off premises, or perhaps with a hybrid.

Marshall: I think it will be a hybrid, Dana. Very small companies, which don't even have the capital option of putting up a data center, will go straight to an on-demand, cloud-type approach. But enterprises that are going to be invested in the data center anyway at some level simply get an opportunity to right-size that infrastructure, based upon the profile of applications that really need to be run internally, whether for security, latency, data-sensitivity, or whatever reason.

But, for things that are portable -- as it relates to their security and performance profile and the nature of the workload -- they'll have the option to make them portable. We saw this very same thing with Linux adoption post-9/11. The things that could be moved off of Solaris easily were moved off. Some things were hard to move, and they didn't move them. It didn't make sense, because it cost too much to move them.

I think we're going to see the same sort of hybrid approach take hold. Enterprise folks will say, "Look, why do I need to own the servers for doing the monthly analysis of the log files associated with access to this database for a compliance reason, when the rest of the month that server just sits idle? Why do I want to own the capacity and have it be captive for that type of workload?"

That would be a perfect example of a workload where the thinking is, "I'm going to crunch those logs once a month up on Amazon or Rackspace or some place like that, pay for a day-and-a-half of capacity, and then turn it off."

Gardner: So, there's going to be a decision process inside each organization, probably quite specific to each organization, about which applications should be hosted in which ways. That might include internal and external sourcing options. But, to be able to do that, you have to approach these applications thoughtfully, and you also have to create your new applications with this multi-dimensional hosting possibility set, if you will, in mind. What steps need to be taken at the application level, for both the existing and the newer apps?

Marshall: For the existing applications, you don't want to face a situation, in terms of looking at the cloud you might use, where you have to rewrite your code. This is a challenge that folks are facing with things such as Google's App Engine or even Salesforce's Force.com. With that approach, it's really a platform, as opposed to an on-demand infrastructure. By a platform, I mean there's a set of development tools and a set of application-language expectations that you use in order to take advantage of that platform.

For legacy applications, there's not going to be much opportunity. I really don't believe those folks will consider, "Gee, I'll get so much benefit out of Salesforce, I'll get so much benefit out of Google, that I'm going to rewrite this code in order to run it on those platforms."

They may actually consider them for new applications that would get some level of benefit by being close to other services that perhaps Google or, for that matter, Salesforce.com might offer. But, for their existing applications, which are mostly what we're talking about here, they won't have an opportunity to consider those. Instead, they'll look at things such as Amazon's Elastic Compute Cloud, and things offered by a GoGrid or Rackspace -- folks in that sort of space.

The considerations for them are going to be, number one, that right now the easiest way to run these things in those environments is on x86 architecture. There's no PA-RISC, SPARC, or IBM Power architecture there. They don't exist there. So, A, it's got to be x86.

And B, the most prevalent applications running in these spaces run on Linux. The biggest communities of use and the biggest communities of support are going to be around Linux. There have been some new enhancements around Microsoft on Amazon, and some of these folks, such as GoGrid and Rackspace, have offered Windows hosting. But here's the challenge with those approaches.

For example, if I were to use Microsoft on Amazon, what I'm doing is booting a Microsoft Amazon Machine Image (AMI), an operating system AMI on Amazon. Then I'm installing my application up there in some fashion. I'm configuring it to make it work for me, and then I'm saving it up there.

The challenge with that is that all that work you just went through to get that application tested, embedded, and running up there on Amazon in the Microsoft configuration that Amazon is supporting is only useful on Amazon.

So, a real consideration for all these folks who are looking at potentially using cloud is, "How is it that I can define my application as a working unit, and then be able to choose between Amazon, my internal architecture that perhaps has a VMware basis, or a Rackspace, GoGrid, or BlueLock offering?" You're not going to be able to do that if you define your cloud application as running on Windows on Amazon, because that Amazon AMI is not portable to any of these other places.
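For a sense of the mechanics being described, here is a minimal sketch using the boto Python library, assuming it is installed and AWS credentials are configured; the AMI ID is a hypothetical placeholder. The launch call is bound to Amazon's image format, which is exactly the portability trap Marshall describes.

    import boto

    # Connect using AWS credentials from the environment (assumed configured).
    conn = boto.connect_ec2()

    # This image ID only has meaning inside Amazon EC2. An application baked
    # into an Amazon-specific AMI cannot be handed, as-is, to GoGrid,
    # Rackspace, or an internal VMware cluster.
    reservation = conn.run_instances('ami-12345678', instance_type='m1.small')
    instance = reservation.instances[0]
    print(instance.id, instance.state)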

Gardner: Portability is a huge part of what people are looking for.

Marshall: Yes. A big consideration is: are you comfortable with Linux technology, or other related open-source infrastructure, which has a licensing approach that's going to enable it to truly be portable for you? And, by the way, you don't really want to spend the money for a perpetual license to Windows, for example, even if you could take your Windows up to Amazon.

Taking your own copy of Windows up there isn't possible now. It may be possible in the future, and I think Microsoft will eventually have a business, whereby they license, in an on-demand fashion, the operating system as a hosting unit to be bound to an application, instead of an infrastructure, but they don't do that now.

So, another big consideration for these enterprises now is do I have workloads that I'm comfortable running on Linux right now, so that I can take a step forward and bind Linux to the workload in order to take it to where I want it to go.

Gardner: Tell us a little bit about what rPath brings to the equation?

Marshall: rPath brings a capability around defining applications as virtual machines (VMs), with a process whereby you release those VMs to run on whichever cloud you choose -- whether a hypervisor-virtualized cloud of machines, such as what's provided by Amazon, or what you can build internally using Citrix XenSource or something like VMware's virtual infrastructure.

It then provides an infrastructure for managing those VMs through their lifecycle -- for updates, for backup, and for configuration of certain services on the machines -- in a way that's optimized for a virtualized cloud of systems. We specialize in optimizing applications to run as VMs on a cloud or virtualized infrastructure.

Gardner: It seems to me that that management is essential in order not to just spin out of control and become too complex with too many instances, and with difficulty in managing the virtual environments, even more so than the physical one.

Marshall: It's the lack of friction in being able to quickly deploy a virtualized environment, versus the amount of friction you have in deploying a physical environment. When I say "friction," I mean it literally. With a physical environment, somebody has to go grab a server, slam it into a rack, hook up power and networking, and allocate it to your account somehow. There's just a lot of friction in procuring, acquiring, and making that capacity available.

In the virtualized world, if someone has already deployed the physical capital, they can give you access to the virtual capital, the VM, very quickly. But that's a double-edged sword. If it's really easy to get, people might take more. They may have needed more all along and been constrained by the friction in the process. But taking more also means you've got to manage more.

You run a risk if you're not careful. If you make it easy, low-friction, and low-cost for people to get machines, they will acquire the machine capacity, deploy it, and use it, but then they'll be faced with managing a much larger set of machine capacity than they were comfortable with.

If you don't think about how to make these VMs more manageable than the physical machines to begin with, that lack of friction can be the beginning of a very slippery slope toward unmanageability, and toward risk from security issues you can't get your arms around, just because of how broadly these things are deployed.

It can lead to a lot of excess spending, because you are deploying machines that you thought would be temporary, but you never take them back down because, perhaps, it was too difficult to get them configured correctly the first time. So, there are lots of challenges that this lack of friction brings into play that the physical world sort of kept a damper on, because there was only so much capacity you could get.

Gardner: It seems that set policy and some level of automation need to be brought to the table here -- something that crosses between applications and operations management, and something that both sides can understand. The old system of just handing things off, without really any kind of lifecycle approach, simply won't hold up.

Marshall: There are a couple of considerations here. With these things being available as services outside of the IT organization, the IT organization has to be very careful that they find a way to embrace this with their lines of business. If they don't, if they say no to the line-of-business guys, the line-of-business guys are just going to go swipe a credit card on Amazon and say, "I'll show you what no looks like. I will go get my own capacity, I don't need you anymore."

We actually saw some of this with software as a service (SaaS), and it was a very tense negotiation for some time. With SaaS it typically began with the head of sales, who went into the CEO's office, and said, "You know what? I've had it with the CIO, who is telling me I can't have the sales-force automation that I need, because we don't have the capacity or it's going to take years, when I know, I can go turn it on with Salesforce.com right now."

And do you know what the CEO said? The CEO said, “Yes, go turn it on.” And he told the CIO, "Sit down. You're going have to figure out a way to integrate what's going on with Salesforce.com with what we're doing internally, because I am not going to have my sales force constrained."

You're going to see the same thing with the line-of-business guys as it relates to these services being provided. Some smart guy inside Goldman Sachs is going to say, "Look, if I could run 200 Monte Carlo simulation servers over the next two days, we'd have an opportunity to trade in the commodities market. And, I'm being told that I can't have the capacity from IT. Well, that capacity on Amazon is only going to cost me $1,000. I'm taking it, I'm trading, and we're going to make some money for the firm."

What's the CEO going to say? The CEO isn't going to say no. So, the folks in the IT organization have to embrace this and say, "I'll tell you what. If you're going to do this, let me help you do it in a way that takes risk out for the organization. Let me give you an approach that allows you to have this friction-free access to the infrastructure, while also preserving some of the risk-mitigation practices and some of the control practices that we have. Let me help you define how you're going to use it."

There really is an opportunity for the CIO to say, "Yes, we're going to give you a way to do this, but we're going to do it in a way that's optimized to take advantage of some of the things we've learned about governance and best practices in deploying applications to an operational IT facility."

Gardner: So, with policy and management, in essence, the control point for the relationship between the applications, perhaps even the relationship between the line-of-business people and the IT folks, needs to be considered with the applications themselves. It seems to me that you need to build them for this new type of management, policy, and governance capability?

Marshall: The IT organization is going to need to take a look at what they've historically done with this air-gap between applications and operations. I describe it as an air-gap, because typically you had this approach, where an application was unit-test complete. Then, it went through a testing matrix -- a gauntlet, if you will -- to go from Dev/Test/QA to production.

There was a set of policies that were largely ingrained in the minds of the release engineers, the build masters, and the folks responsible for running it through its paces to get it there. Sometimes, there was some sort of exception process for using certain features that hadn't been approved in production yet. There's an opportunity now to streamline that process by using a system -- we've built one -- into which they can codify these processes: a build system, if you will, for VMs, with the policies enforced at build time, so that you're constructing for compliance.

With our technology, we enforce a set of policies that we learned were best practices during our days at Red Hat constructing an operating system. We've got some 50 to 60 policies that get enforced at build time, when you're building the VM. They're things like not allowing any dangling symlinks, and closing the dependency loop around all of the binary packages that get included. There could be other, more corporate-specific policies that need to be included, and you would write those policies into the build system in order to build these VMs.

It's very similar to the way you put policies into your application lifecycle management (ALM) build system when you're building the application binary. You enforce policy at build time to build the binary. We're simply suggesting that you extend that discipline of ALM to include policies associated with building VMs. There's a real opportunity here to close the gap between applications and operations by having much of what has typically been done in installing an application and taking it through Dev, QA, and Test become part of an automated build system for creating VMs.
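As a rough illustration of one such build-time policy, here is a minimal Python sketch that scans an assembled VM image tree for dangling symlinks and fails the build if any turn up. The image root path is a hypothetical staging directory, and this is a sketch of the idea, not rPath's actual implementation.

    import os
    import sys

    IMAGE_ROOT = "/build/vm-image-root"   # assumed staging area for the image

    def dangling_symlinks(root):
        """Yield symlinks whose targets no longer exist."""
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                path = os.path.join(dirpath, name)
                # A dangling symlink: islink() is true, but exists()
                # follows the link and reports False.
                if os.path.islink(path) and not os.path.exists(path):
                    yield path

    broken = list(dangling_symlinks(IMAGE_ROOT))
    if broken:
        print("Policy violation -- dangling symlinks found:")
        for path in broken:
            print("  " + path)
        sys.exit(1)   # fail the build: construct for compliance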

Gardner: All right. So, we're really talking about enterprise application virtualization, but doing it properly, with a lifecycle. This provides an on-ramp to cloud computing and the ability to pick and choose the right hosting and/or hybrid approaches as these become available.

But we still come back to this tension between the application and the virtual machine. The application traditionally is on the update side and the virtual machine traditionally on the operations, the runtime, and the deployment side.

So we're really talking about trying to get a peanut-butter cup here. It's Halloween, so we can get some candy talk in. We've got peanut butter and chocolate. How do we bring them together?

Marshall: Dana, what you just described exists because people are still thinking about the operating system as something that they bind to the infrastructure. In this case, they're binding the operating system to the hypervisor and then installing the application on top of it. If the hypervisor is now this bottom layer, and if it provides all the management utilities associated with managing the physical infrastructure, you now get an opportunity to rethink the operating system as something that you bind to the application.

I'll give you a story from the financial services industry. I met with an architect who had set up a capability for their lines of business to acquire VMs as part of a provisioning process that allows them to go to a Web page, put in an account number for their line of business, request an environment -- a Linux/Java environment or a Microsoft .NET environment -- and within an hour or so they will get an e-mail back saying, "Your environment or your VMs are available. Here are the host names."

They can then log on to those machines, and a decentralized IT service charges the lines of business based upon the days, weeks, or months they used the machine.

I said, "Well, that's very clever. That's a great step in the right direction." Then I asked, "How many of these do you have deployed?" And he said, "Oh, we've got about 1,500 virtual machines deployed over the first nine months." I said, "Why did you do this to begin with?"

And he said, "We did it because people always requested more than they needed, since they knew they would have to grow. They'd procure machines well ahead of their actual need for the processing power. We did this so that they would feel confident they could procure extra capacity on demand, as needed by the group."

I said, "Well, you know, I'd be interested in the statistics on the other side of that challenge. You want them to procure only what they need, but you want them to give back what they don't need as well." He kind of looked at me funny, and I said, "Well, what do the statistics look like on the getbacks? I mean, how many machines have you ever gotten back?"

And he said, "Not a one, ever. We've never gotten a single machine back, ever." I said, "Why do you think that is?" He said, "I don't know and I don't care. I charge them for what they're using."

I said, "Did you ever stop to think that maybe the reason they're not giving them back is the time from when you give them the machine to the time it's actually operational for them? In other words, the time it takes them to install the application, configure all the system services, and make the application tuned and productive on that generic host you gave them. Did you ever think that maybe they're not giving it back because having to go through all of that again would be a real pain in the neck?"

So I asked him, "What's the primary application you're running here anyway?" He said, "Well, 900 of these systems are tick data -- Reuters ticker-tape data." I said, "That's not even useful on the weekends. Why don't they just give them all back on the weekends, so you could shut down a big hunk of the data center and save on power and cooling?" He said, "I haven't even thought about it, and I don't care, because it's not my problem."

Gardner: Well, it's an awfully wasteful approach, where supply and demand are in no way aligned. The days of being able to overlook those wasteful practices are pretty much over, right?

Marshall: There's an opportunity now, if they would think about this problem, to say, "Hey, why am I giving them this Linux/Java environment and then having them run through a gauntlet to make it work on every machine? Instead, based upon a system and some policies I've given them, they could define the application, attach the operating system, and configure all of this stuff independent of the production environment. Then, at run-time, these things get deployed and are productive in a matter of minutes, instead of hours, days, or months."

That way, they'd feel comfortable giving the capacity back when they're not using it, because they'd know they can get the application back up, configured the way it should be, very quickly, in a very scalable, very elastic way.

That elasticity benefit has been overlooked to date, but it's a benefit that's going to become very important as people do exactly what you just described, which is become sensitive to the notion that a VM idling out there and consuming space is just as bad as a physical machine idling out there and consuming space.

Gardner: I certainly appreciate the problem, the solution set, and the opportunity for significant savings and agility. That is to say, you can move your applications and get them up fast, but in the long term you'll also be able to cut your overall costs, because of the higher utilization and because elasticity lets you match supply and demand as closely as possible. The question then is how to get started. How do you move to take advantage of these? Tell us a little bit more about the role that rPath plays in facilitating that.

Marshall: The first thing to do, Dana, is to profile your applications and determine which ones have sort of lumpy demand, because you don't want to work on something that needs to be available all the time and has pretty even demand. Let's go for something that really has lumpy demand, so that we can do the scale-up and give back and get some real value out of it.

So, the first thing to do is an inventory of your applications: "What do I have out here that has lumpy demand?" Pick a couple of candidates. Realistically, it's going to be hard to do this without running Linux. It needs to be a workload that will run on Linux, whether you've run it on Linux historically or not. Probably, it needs to be something written in Java, C, C++, Python, Perl, or Ruby -- something you can move to a Linux platform -- and something that has lumpy demand.

The first step that we get involved in is packaging that application so that it's optimized to run as a VM. One of rPath's values here is that the operating system becomes optimized to the application, and the footprint of the operating system, and therefore its management burden, shrinks by about 90 percent.

When you bind an operating system to an application, you're able to eliminate anything that is not relevant to that application. Typically, we see a surface area shrinking to about 10 percent of what is typically deployed as a standard operating system. So, the first thing is to package the application in a way that is optimized to run in a VM. We offer a product called rBuilder that enables just that functionality.

The second is to determine whether you're going to run this internally on some sort of virtualized infrastructure that you've made available -- through VMware, Xen, or even Microsoft Hyper-V for that matter -- or whether you're going to use an external provider.

We suggest that, when you get started with this, you begin experimenting with an external provider as soon as possible. The reason is so that you don't put in place a bunch of crutches that are only relevant to your environment and will prevent the application from ever going external. You can never drop the crutches associated with your own hand-holding processes that can only happen inside your organization.

We strongly suggest that one of the first things you do, as you do this proof of concept, is actually do it on Amazon or another provider that offers a virtualized infrastructure. Use an external provider, so that you can prove to yourself that you can define an application and have it be ready to run on an infrastructure that you don't control, because that means that you defined the application truly independent of the infrastructure.

Gardner: And, that puts you in a position where eventually you could run that application on your local cloud or virtualized environment and then, for those lumpy periods when you need that exterior scale and capacity, you might just look to that cloud provider to support that application in that fashion.

Marshall: That's exactly right. Whereas, if you prove all this out internally only, you may come across a huge "oops" that you didn't even think about as you try to move it externally. You may find that you've driven yourself down an architectural box canyon that you just can't get out of.

So, we strongly suggest to folks that you experiment with this proof of concept using an external provider, and then bring it back internally and prove that you can run it internally, after you've proven that you can run it externally.

Gardner: Your capital costs for that are meager or nothing, and your operating costs will benefit in the long run, because you'll have those hybrid options.

Marshall: Another benefit of starting external for one of these things is that the cost at the margin is so cheap. It's between 10 and 50 cents per CPU-hour to set up the Amazon environment and run it. If you run it for an hour, you pay the 10 cents; you don't have to commit to some pre-buy or some amount of infrastructure. It's truly on demand. What you really use is what you pay for. So, there's no reason, from a cost perspective, not to run your first instance of an on-demand, virtualized application externally.
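The arithmetic is worth seeing on paper. Here is a minimal sketch using the rates quoted above, applied to the "day-and-a-half" compliance job mentioned earlier; the instance size is an assumption.

    # On-demand cost check; the CPU count is an assumption.
    RATE_PER_CPU_HOUR = 0.10   # low end of the 10-to-50-cent range quoted
    CPUS = 4                   # assumed size of the rented instance
    HOURS = 36                 # roughly the day-and-a-half log-crunching job

    cost = RATE_PER_CPU_HOUR * CPUS * HOURS
    print(f"Crunch the logs, then turn it off: about ${cost:.2f}")  # ~$14.40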

Gardner: And, if you do it in this fashion, you're able to have that portability. You can take it in, and you can put it out. You've built it for that and there is no hurdle you have to overcome for that portability.

Marshall: If you prove to yourself that you can do it, that you can run it in both places, you've architected correctly. There's a trap here. If you become dependent on something associated with a particular infrastructure set or a particular hypervisor, you preclude any use in the future of things that don't have that hypervisor involved.

Gardner: Another thing that people like about the idea of virtualizing applications is that you get a single image of the application. You can patch it, manage it, upgrade it, and that is done once, and it doesn't have to be delivered out to a myriad of machines, with configuration issues and so forth. Is that the case in this hybrid environment, as well, or you can have this single image for the amount of capacity you need locally, and then for that extra capacity at those peak times, from an external cloud?

Marshall: I think you've got to be careful here, because I don't believe one approach is going to work in every case. I'll give you an example. I was meeting with a different financial services firm, who said, "Look, for our biggest application, we've got -- I think it was 1,500 or 2,000 -- instances of that application running. I'm not going to flood the network with 1,500 new machines when I have to make changes to that. We're going to upgrade those VMs in place."

We're going to have each one of them access some sort of lifecycle management capability. That's another benefit we provide, and we provide it in two ways. One, we've got a very elegant system for delivering maintenance and updates to a running system. And two, since you've only got 10 percent of the operating system there, you're patching one-tenth as often, because the operating system is typically the catalyst for most of the patching associated with security issues and other things.

I think there are going to be two things happening here. People are going to maintain these releases of applications as VMs, which you may want to think of as a repository of available application VMs that are in a known good state, and that are up-to-date and things like that.

In some cases, whenever new demand needs to come online, the known good state will be deployed, and it won't need patching after deployment. But, at the same time, there will be deployed units already running that they'll want to patch, and they need to be able to do that without having to dump the data, back up the data, kill the image, bring a new image up, and then reload the data.

In many cases, you're going to want to see these folks able to patch in place as well. The beauty of it is, you don't have to choose. It can be both. It doesn't have to be one or the other.
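In code terms, the two update paths might look like the sketch below. Every object and method here (the image repository, the cloud, the update service) is a hypothetical stand-in used to show the shape of the idea, not a real API.

    # Path 1: new demand gets the latest known-good image -- deployed
    # already patched, so no post-deploy fixup is needed.
    def add_capacity(cloud, image_repo, app):
        image = image_repo.latest_known_good(app)
        return cloud.launch(image)

    # Path 2: units already running are patched in place, so the data
    # never has to be dumped, backed up, and reloaded around a redeploy.
    def update_in_place(running_nodes, update_service):
        for node in running_nodes:
            update_service.apply_updates(node)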

Gardner: So, that brings us back to the notion of good management, policies, governance, and automation across this lifecycle. It's not simply a matter of putting that application up and getting some productivity from utilization, but of considering the entire sunrise-to-sunset approach as well.

Marshall: Right, and that also involves having the ability to do some high-quality scaling on demand -- to be able to call an API to add a new system, and to do that elegantly, without someone having to log into the system and thrash around configuring it to make it aware of the environment it's supposed to be supporting.

There are quite a few considerations here. When you're defining applications as VMs, and defining them independent of where they run, you can't use any crutches associated with your internal infrastructure if you want to be able to elastically scale up and scale back.

There are some interesting new problems that come up here that are also new opportunities to do things better. There's this whole notion of architecting in a way that is, A, optimized for virtualization -- in other words, if you're going to make it easy to get extra machines, you'd better make the machines easy to manage, and manageable on the hypervisor they're running on. And B, you need a way to add capacity elegantly, without requiring folks to log in and do a lot of manual work to scale these things up.
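A minimal sketch of that elastic add-and-give-back loop, written against a hypothetical cloud API: none of these objects or methods come from a real library, and the target utilization is an assumed comfort level.

    TARGET_UTILIZATION = 0.70   # assumed comfort level

    def rebalance(cloud, app):
        """Scale the app toward the target utilization, up or down."""
        load = cloud.average_utilization(app)   # e.g., 0.95 when slammed
        nodes = cloud.node_count(app)
        desired = max(1, round(nodes * load / TARGET_UTILIZATION))
        if desired > nodes:
            for _ in range(desired - nodes):
                # The image binds OS, configuration, and application
                # together, so a new node self-configures -- nobody logs
                # in to thrash around after launch.
                cloud.launch(app.image_id)
        elif desired < nodes:
            for node in cloud.idle_nodes(app)[: nodes - desired]:
                cloud.terminate(node)   # give the capacity back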

Gardner: And then, to adopt a path to cloud benefits, you just start thinking about the steps across virtualization -- thinking a bit more holistically about the virtualized environment and the applications as one and the same. The level of experimentation gives you the benefits, and ultimately you'll be building a real fabric and a governed, best-methods approach to cloud computing.

Marshall: The real opportunity here is to separate the application-virtualization approach from the actual virtualization technology to avoid the lock-in, the lack of choice, and the lack of the elasticity that cloud computing promises. If you do it right, and if you think about application virtualization as an approach that frees your application from the infrastructure, there is a ton of benefit in terms of dynamic business capability that is going to be available to your organization.

Gardner: Well, great. I just want to make sure that we covered that entire stepping process into adoption and use. Did we leave anything out?

Marshall: What we didn't talk about was what should be possible at the end of the day.

Gardner: What's that gold ring out there that you want to be chasing after?

Marshall: Nirvana would look like something we call a "hyper cloud" concept, where you're actually sourcing capacity by the day or the hour, based upon service-level, performance, and security experience, with some sort of intelligent system analyzing the state of your applications and the demand for them, and autonomically acquiring capacity and putting it in place for your applications across multiple different providers.

Again, it's based upon the set of experiences you've cataloged: What's the security profile these guys provide? What's the performance profile they provide? And what's the price profile they provide?

Ultimately, you should have a handful of providers out there that you are sourcing your applications against and sourcing them day-by-day, based upon the needs of your organization and the evolving capabilities of these providers. And, that's going to be a while.

In the near term, people will choose one or two cloud providers and develop a rapport and a comfort level. If they do this right, over time they'll be able to get the best price and the best performance, because they'll never be in a situation where they can't bring it back and put it somewhere else. That's what we call the hyper cloud approach. It's a ways off, and it's going to take some time, but I think it's possible.
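One way to picture that day-by-day sourcing decision is a simple weighted score over the cataloged experience with each provider. The providers, numbers, and weights below are purely illustrative assumptions, sketching the shape of the idea rather than any real system.

    # Score providers on cataloged price, performance, and security
    # experience (0.0 to 1.0, higher is better); all values assumed.
    providers = [
        {"name": "Amazon EC2", "price": 0.9, "performance": 0.8, "security": 0.8},
        {"name": "GoGrid",     "price": 0.8, "performance": 0.7, "security": 0.7},
        {"name": "Internal",   "price": 0.5, "performance": 0.9, "security": 1.0},
    ]

    WEIGHTS = {"price": 0.4, "performance": 0.3, "security": 0.3}

    def score(provider):
        return sum(WEIGHTS[k] * provider[k] for k in WEIGHTS)

    best = max(providers, key=score)
    print("Source today's capacity from:", best["name"])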

Gardner: The nice thing about it is that your business outcomes are your start and your finish point. In many cases today, your business outcomes are, in some ways, hostage to whatever the platform and IT requirements are, and that's become a problem.

Marshall: Right. It can be.

Gardner: Well, terrific. We've been talking about cloud computing and proper on-ramps to approach and use clouds, and also how enterprises can best prepare to bring their applications into a virtual development and deployment environment.

We've been joined by Billy Marshall, a founder and chief strategy officer at rPath. I certainly appreciate your time, Billy.

Marshall: Dana, it's been a pleasure, thanks for the conversation.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect podcast. Thanks, and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: rPath.

Transcript of BriefingsDirect podcast on virtualized applications development and deployment strategies as on-ramp to cloud computing. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.