
Friday, November 14, 2008

Interview: rPath’s Billy Marshall on How Enterprises Can Follow a Practical Path to Virtualized Applications

Transcript of BriefingsDirect podcast on virtualized applications development and deployment strategies as on-ramp to cloud computing.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: rPath.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on proper on-ramps to cloud computing, and how enterprises can best prepare to bring applications into a virtual development and deployment environment.

While much has been said about cloud computing in 2008, the use of virtualization is ramping up rapidly. Moreover, enterprises are moving from infrastructure virtualization to application-level virtualization.

We're going to look at how definition and enforcement of policies helps ensure conformance and consistency for virtual applications across their lifecycle. Managing virtualized applications holistically is an essential ingredient in making cloud-computing approaches as productive as possible while avoiding risk and onerous complexity.

To provide the full story on the virtual-applications lifecycle, its methods, and its benefits, I'm joined by Billy Marshall, the founder of rPath and its chief strategy officer. Welcome to the show, Billy.

Billy Marshall: Thanks, Dana, great to be here.

Gardner: There is a great deal going on with technology trends: the ramp-up of virtualization, cloud computing, service-oriented architecture (SOA), the use of new tools, lightweight development environments, and so forth. We're also faced, unfortunately, with a tough economic climate, as a global recession appears to be developing.

What's been interesting for me is that this technological shift and this economic imperative together form a catalyst for a transformative IT phase that we are entering. That is to say, the opportunity to do more with less is right at the top of the list for IT decision-makers and architects.

Tell me, if you would, how some of these technology benefits and the need to heighten productivity fit and come together.

Marshall: Dana, we've seen this before, and specifically I have seen it before. I inherited the North America sales role at Red Hat in April of 2001, and of course shortly thereafter, in September of 2001, we had the terrible 9/11 situation that changed a lot of the thinking.

The dot-com bubble burst, and it turned out to be a catalyst for driving Linux into a lot of enterprises that weren't thinking about it before. They began to question their assumptions about how much they were willing to pay for certain types of technology, and in this case it happened to be Unix technology. In most cases they were buying from Sun, and that spending became the subject of a great deal of scrutiny. Much of it was replaced in the period from 2001 to 2003 and into 2004 with Linux technology.

We're once again facing a similar situation now, where enterprises specifically are taking a very tough look at their data-center expenditures and the expansions they're planning for the data center. I don't think there's any doubt in people's minds that they're getting good value out of doing things with IT, and a lot of these businesses are driven by information technology.

At the same time, this credit crunch is going to have folks look very hard at large-scale outlays of capital for data centers. I believe that will be a catalyst for folks to consider a variable-cost approach, using infrastructure as a service or perhaps platform as a service (PaaS). All these things roll up under the notion of cloud, as it relates to being able to get capacity when you need it, get it at variable cost, and get it on demand.

Gardner: Obviously, there's a tremendous amount of economic value to be had in cloud computing, but there are significant risks as well. Virtualization increases server utilization and provides the dynamic ability to fire up platforms and instances of runtimes and actual applications, with a stack beneath them. It allows companies to increase their applications with a lower upfront capital expenditure and also to cut their operating costs. Then, administrators and architects can manage many more applications, if it's automated and governed properly. So let's get into this notion of doing it right.

When we have more and more applications and services, there is, on one side, a complexity problem. There is also this huge utilization benefit. What's the first step in getting this right in terms of a lifecycle and a governance mentality?

Marshall: Let's talk first about why utilization was a problem without virtualization. Let's talk about old architecture for a minute, and then we can talk about, what might be the benefits of a new architecture if done correctly.

Historically, in the enterprise you would get somewhere between 15 and 18 percent utilization for server applications. So, there are lots of cycles available on a machine, and you might have two machines running side by side, running two very different workloads whose cycles are very different. Yet, in most cases people wouldn't run multiple applications on the same server, because of the lack of isolation when you are sharing processes in the operating system on that server. Very often, these things would conflict with one another.

During maintenance, the maintenance required for one would conflict with the other. It's just a very challenging architecture when you try to run multiple things on the same physical, logical host. Virtualization provides isolation by giving each application its own logical server, its own virtual server.

So, you could put multiples of them on the same physical host and you get much higher utilization. You'll see folks getting on the order of 50, 70, or 80 percent utilization without any of the worries about the conflicts that used to arise when you tried to run multiple applications sharing processes on the same physical host with an operating system.
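The utilization figures above imply a simple consolidation calculation. Here is a back-of-the-envelope sketch; the 15-percent and 70-percent figures are the ones quoted in this discussion, and real capacity planning would also have to account for overlapping demand peaks:

```python
# Rough consolidation estimate: how many virtualized hosts are needed
# to absorb workloads that each keep a dedicated server ~15% busy.
import math

def hosts_needed(num_workloads, per_workload_util=0.15, target_util=0.70):
    """Estimate physical hosts after consolidation.

    Assumes workloads' demand packs additively -- a simplification,
    since real workloads can peak at the same time.
    """
    total_demand = num_workloads * per_workload_util  # in whole-host units
    return math.ceil(total_demand / target_util)

if __name__ == "__main__":
    # 20 dedicated servers at 15% busy fit on 5 hosts run at 70%.
    print(hosts_needed(20))  # -> 5
```

Even this crude arithmetic shows why the utilization gains Marshall describes translate directly into fewer physical machines.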

That's the architecture we're evolving toward, but if you think about it, Dana, what virtualization gives you from a business perspective, beyond utilization, is an opportunity to decouple the definition of the application from the system it runs on.

Historically, you would install an application onto the physical host with the operating system on it. Then, you would work with it and massage it to get it right for that application. Now, you can do all that work independent of the physical host, and then, at run-time, you can decide where you have capacity that best meets needs of the profile of this application.

Most folks have simply gone down the road of creating a virtual machine (VM) with their typical, physical-host approach, and then doing a snapshot, saying, "Okay, now I worry about where to deploy this."

In many cases, they get locked into the hypervisor or the type of virtualization they used for that application. If they were to back up one or two steps, they'd realize, "Boy, this really does give me an opportunity to define this application in a way that lets me run it on Amazon's EC2 if I want to, but also in my own data center."

Now, I can begin sourcing infrastructure a little more dynamically, based upon the load that I see. Maybe I can spend less on the capital associated with my own data center, because with my application defined as an independent unit, separate from the physical infrastructure, I'll be able to buy infrastructure on demand from Amazon, Rackspace, GoGrid -- the folks who are now offering up these virtualized clouds of servers.

Gardner: I see. So, we need to rethink the application, so that we can run that application on a variety of these new sourcing options that have arisen, be they on premises, off premises, or perhaps with a hybrid.

Marshall: I think it will be a hybrid, Dana. Very small companies, which don't even have the capital option of putting up a data center, will go straight to an on-demand, cloud-type approach. But enterprises that are going to invest in a data center anyway, at some level, simply get an opportunity to right-size that infrastructure, based upon the profile of applications that really need to run internally, whether for security, latency, data sensitivity, or whatever reason.

But, they'll have the option, for workloads that are portable as it relates to their security and performance profile and the nature of the workload, to make them portable. We saw this very same thing with Linux adoption post-9/11. The things that could be moved off of Solaris easily were moved off. Some things were hard to move, and they didn't move them. It didn't make sense, because it cost too much to move them.

I think we're going to see the same sort of hybrid approach take hold. Enterprise folks will say, "Look, why do I need to own the servers for the monthly analysis of the log files associated with access to this database, done for a compliance reason, when the rest of the month that server just sits idle? Why do I want to own the capacity and have it be captive for that type of workload?"

A perfect example would be a workload where they say, "I'm going to crunch those logs once a month up on Amazon or Rackspace or someplace like that, pay for a day and a half of capacity, and then turn it off."

Gardner: So, there's going to be a decision process inside each organization, probably quite specific to each organization, about which applications should be hosted in which ways. That might include internal and external sourcing options. But, to be able to do that, you have to approach your existing applications thoughtfully, and you also have to create your new applications with this multi-dimensional hosting possibility set, if you will, in mind. What steps need to be taken at the application level for both the existing and the newer apps?

Marshall: For the existing applications, you don't want to face a situation, in terms of the cloud you might use, where you have to rewrite your code. This is a challenge folks are facing with things such as Google's App Engine or even Salesforce's Force.com. With that approach, it's really a platform, as opposed to an on-demand infrastructure. By a platform I mean there is a set of development tools and a set of application-language expectations that you use in order to take advantage of that platform.

For legacy applications, there's not going to be much opportunity. For those folks, I really don't believe they'll consider, "Gee, I'll get so much benefit out of Salesforce, I'll get so much benefit out of Google, that I'm going to rewrite this code in order to run it on those platforms."

They may actually consider those platforms for new applications that would get some level of benefit by being close to other services that Google, or for that matter Salesforce.com, might offer. But, for their existing applications, which are mostly what we're talking about here, they won't have an opportunity to consider those. Instead, they'll look at things such as Amazon's Elastic Compute Cloud and the offerings from GoGrid, Rackspace, and folks in that sort of space.

The considerations for them are going to be, number one, that right now the easiest way to run things in those environments is on the x86 architecture. There is no PA-RISC, SPARC, or IBM Power architecture there. They don't exist there. So, A, it's got to be x86.

And B, the most prevalent applications running in these spaces run on Linux. The biggest communities of use and of support are going to be around Linux. There have been some new enhancements around Microsoft on Amazon, and some of these folks, such as GoGrid and Rackspace, have offered Windows hosting. But here's the challenge with those approaches.

For example, if I were to use Microsoft on Amazon, what I'm doing is booting a Microsoft Amazon Machine Image (AMI), an operating system AMI on Amazon. Then I'm installing my application up there in some fashion. I'm configuring it to make it work for me, and then I'm saving it up there.

The challenge with that is that all that work you just went through to get that application tested, embedded, and running up there on Amazon in the Microsoft configuration that Amazon is supporting is only useful on Amazon.

So, a real consideration for all these folks who are looking at potentially using the cloud is, "How can I define my application as a working unit, and then be able to choose between Amazon, my internal infrastructure that perhaps has a VMware basis, or a Rackspace, GoGrid, or BlueLock offering?" You're not going to be able to do that if you define your cloud application as running on Windows on Amazon, because that Amazon AMI is not portable to any of these other places.

Gardner: Portability is a huge part of what people are looking for.

Marshall: Yes. A big consideration is whether you're comfortable with Linux technology or other related open-source infrastructure, which has a licensing approach that's going to enable it to truly be portable for you. And, by the way, you don't really want to spend the money for a perpetual Windows license, for example, even if you could take your Windows up to Amazon.

Taking your own copy of Windows up there isn't possible now. It may be possible in the future, and I think Microsoft will eventually have a business, whereby they license, in an on-demand fashion, the operating system as a hosting unit to be bound to an application, instead of an infrastructure, but they don't do that now.

So, another big consideration for these enterprises now is, "Do I have workloads that I'm comfortable running on Linux right now, so that I can take a step forward and bind Linux to the workload in order to take it where I want it to go?"

Gardner: Tell us a little bit about what rPath brings to the equation.

Marshall: rPath brings a capability around defining applications as virtual machines (VMs), going through a process whereby you release those VMs to run on whichever cloud you choose, whether a hypervisor-virtualized cloud of machines, such as what's provided by Amazon, or what you can build internally using Citrix XenSource or something like VMware's virtual infrastructure.

It then provides an infrastructure for managing those VMs through their lifecycle, for things such as updates, backups, and the configuration of certain services on the machines, in a way that's optimized for a virtualized cloud of systems. We specialize in optimizing applications to run as VMs on a cloud or virtualized infrastructure.

Gardner: It seems to me that such management is essential if this isn't simply to spin out of control and become too complex, with too many instances, and with the virtual environments even harder to manage than the physical one.

Marshall: It's the lack of friction in being able to quickly deploy a virtualized environment, versus the amount of friction in deploying a physical environment. When I say "friction," I mean it quite literally. With a physical environment, somebody has to go grab a server, slam it into a rack, hook up power and networking, and allocate it to your account somehow. There is just a lot of friction in procuring, acquiring, and making that capacity available.

In the virtualized world, if someone has already deployed the physical capital, they can give you access to the virtual capital, the VM, very quickly. But that's a double-edged sword. The reason I say so is that if it's really easy to get, people might take more. They may have needed more all along and been constrained by the friction in the process. But, taking more also means you've got to manage more.

You run a risk if you're not careful. If you make it easy, low-friction, and low-cost for people to get machines, they will acquire the machine capacity, deploy it, and use it, but then they will be faced with managing a much larger set of machine capacity than they were comfortable with.

If you don't think about how to make these VMs more manageable than the physical machines to begin with, that lack of friction can be the beginning of a very slippery slope toward unmanageability and security risks you can't get your arms around, just because of how broadly these things are deployed.

It can lead to a lot of excess spending, because you are deploying machines that you thought would be temporary, but you never take them back down because, perhaps, it was too difficult to get them configured correctly the first time. So, there are lots of challenges that this lack of friction brings into play that the physical world sort of kept a damper on, because there was only so much capacity you could get.

Gardner: It seems that set policies and some level of automation need to be brought to the table here, something that crosses between applications and operations management and that both sides can understand. The old system of just handing things off, without any kind of lifecycle approach, simply won't hold up.

Marshall: There are a couple of considerations here. With these things being available as services outside of the IT organization, the IT organization has to be very careful that they find a way to embrace this with their lines of business. If they don't, if they say no to the line-of-business guys, the line-of-business guys are just going to go swipe a credit card on Amazon and say, "I'll show you what no looks like. I will go get my own capacity, I don't need you anymore."

We actually saw some of this with software as a service (SaaS), and it was a very tense negotiation for some time. With SaaS it typically began with the head of sales, who went into the CEO's office, and said, "You know what? I've had it with the CIO, who is telling me I can't have the sales-force automation that I need, because we don't have the capacity or it's going to take years, when I know, I can go turn it on with Salesforce.com right now."

And do you know what the CEO said? The CEO said, "Yes, go turn it on." And he told the CIO, "Sit down. You're going to have to figure out a way to integrate what's going on with Salesforce.com with what we're doing internally, because I am not going to have my sales force constrained."

You're going to see the same thing with the line-of-business guys as it relates to these services being provided. Some smart guy inside Goldman Sachs is going to say, "Look, if I could run 200 Monte Carlo simulation servers over the next two days, we'd have an opportunity to trade in the commodities market. And, I'm being told that I can't have the capacity from IT. Well, that capacity on Amazon is only going to cost me $1,000. I'm taking it, I'm trading, and we're going to make some money for the firm."

What's the CEO going to say? The CEO isn't going to say no. So, the folks in the IT organization have to embrace this and say, "I'll tell you what. If you're going to do this, let me help you do it in a way that takes risk out for the organization. Let me give you an approach that allows you this friction-free access to the infrastructure, while preserving some of the risk-mitigation practices and some of the control practices that we have. Let me help you define how you're going to use it."

There really is an opportunity for the CIO to say, "Yes, we're going to give you a way to do this, but we are going to do it in a way that it's optimized to take advantage of some of the things we have learned about governance and best practices in terms of deploying applications to an operational IT facility."

Gardner: So, with policy and management, in essence, the control point for the relationship between the applications, and perhaps even between the line-of-business people and the IT folks, needs to reside with the applications themselves. It seems to me that you need to build them for this new type of management, policy, and governance capability.

Marshall: The IT organization is going to need to take a look at what they've historically done with this air-gap between applications and operations. I describe it as an air-gap, because typically you had this approach, where an application was unit-test complete. Then, it went through a testing matrix -- a gauntlet, if you will -- to go from Dev/Test/QA to production.

There was a set of policies that were largely ingrained in the minds of the release engineers, the build masters, and the folks responsible for running it through its paces to get it there. Sometimes, there was an exception process for using certain features that hadn't been approved in production yet. There's an opportunity now to streamline that process by using a system -- a build system for VMs, if you will, and we've built one -- into which they can codify these processes and have the policies enforced at build time, so that you are constructing for compliance.

With our technology, we enforce a set of policies that we learned were best practices during our days at Red Hat constructing an operating system. We've got some 50 to 60 policies that get enforced at build time, when you're building the VM. They're things like not allowing any dangling symlinks and closing the dependency loop around all of the binary packages that get included. There could be other, more corporate-specific policies that need to be included, and you would write those policies into the build system in order to build these VMs.

It's very similar to the way you put policies into your application lifecycle management (ALM) build system when you're building the application binary. You enforce policy at build time to build the binary. We're simply suggesting that you extend that ALM discipline to include policies associated with building VMs. There's a real opportunity here to close the gap between applications and operations by taking much of what has typically been done in installing an application and taking it through Dev, QA, and Test, and making that part of an automated build system for creating VMs.
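As an illustration of this kind of build-time policy, here is a minimal sketch of one of the checks mentioned above, rejecting a VM image tree that contains dangling symlinks. This is a hypothetical stand-in written for this transcript, not rPath's actual implementation; the real build-time policies also cover dependency closure and dozens of other checks:

```python
import os

def find_dangling_symlinks(image_root):
    """Walk a VM image's filesystem tree and collect symlinks whose
    targets do not exist -- one of the build-time policies mentioned
    above. Returns a list of offending paths."""
    dangling = []
    for dirpath, dirnames, filenames in os.walk(image_root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            # os.path.exists() follows the link, so it returns False
            # for a symlink whose target is missing.
            if os.path.islink(path) and not os.path.exists(path):
                dangling.append(path)
    return dangling

def enforce_build_policy(image_root):
    """Fail the VM build (raise) if a policy check is violated."""
    offenders = find_dangling_symlinks(image_root)
    if offenders:
        raise RuntimeError("dangling symlinks: " + ", ".join(offenders))
```

Enforcing such checks at build time, rather than during a manual QA gauntlet, is the "constructing for compliance" idea: the build fails fast instead of a broken image reaching production.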

Gardner: All right. So, we're really talking about enterprise application virtualization, but doing it properly, with a lifecycle. This provides an on-ramp to cloud computing and the ability to pick and choose the right hosting and/or hybrid approaches as these become available.

But we still come back to this tension between the application and the virtual machine. The application traditionally is on the update side and the virtual machine traditionally on the operations, the runtime, and the deployment side.

So we're really talking about trying to get a peanut-butter cup here. It's Halloween, so we can get some candy talk in. We've got peanut butter and chocolate. How do we bring them together?

Marshall: Dana, what you just described exists because people are still thinking about the operating system as something that they bind to the infrastructure. In this case, they're binding the operating system to the hypervisor and then installing the application on top of it. If the hypervisor is now this bottom layer, and if it provides all the management utilities associated with managing the physical infrastructure, you now get an opportunity to rethink the operating system as something that you bind to the application.

I'll give you a story from the financial services industry. I met with an architect who had set up a capability for their lines of business to acquire VMs as part of a provisioning process. It allows them to go to a Web page, put in an account number for their line of business, and request an environment -- a Linux/Java environment or a Microsoft .NET environment -- and within an hour or so they get an e-mail back saying, "Your environment or your VMs are available. Here are the host names."

They can then log on to those machines, and a decentralized IT service charges the lines of business based upon the days, weeks, or months they used the machine.

I said, "Well, that's very clever. That's a great step in the right direction." Then I asked, "How many of these do you have deployed?" And he said, "Oh, we've got about 1,500 virtual machines deployed over the first nine months." I said, "Why did you do this to begin with?"

And he said, "We did it because people always requested more than they needed, since they knew they would have to grow. They would procure machines well ahead of their actual need for the processing power. We did this so that they could feel confident about procuring extra capacity on demand, as the group needs it."

I said, "Well, I'd be interested in the statistic on the other side of that challenge. You want them to procure only what they need, but you want them to give back what they don't need as well." He looked at me funny, and I said, "Well, what do the statistics look like on the give-backs? How many machines have you ever gotten back?"

And, he said, “Not a one ever. We've never gotten a single machine back ever.” I said, “Why do you think that it is?” He said, “I don't know and I don't care. I charge them for what they're using.”

I said, "Did you ever stop to think that maybe the reason they're not giving them back is the time from when you give them the machine to the time it's actually operational for them? In other words, what it takes them to install the application, configure all the system services, and make the application tuned and productive on that generic host you gave them. Did you ever think that maybe the reason they're not giving it back is that going through that again would be a real pain in the neck?"

So I asked him, "What's the primary application you're running here, anyway?" He said, "Well, 900 of these systems are tick data -- Reuters ticker data." I said, "That's not even useful on the weekends. Why don't they just give them all back on the weekends, and you shut down a big hunk of the data center and save on power and cooling?" He said, "I haven't even thought about it, and I don't care, because it's not my problem."

Gardner: Well, it's an awfully wasteful approach, where supply and demand are in no way aligned. The days of being able to overlook those wasteful practices are pretty much over, right?

Marshall: There's an opportunity now, if they would think about this problem and say, "Why am I giving them this Linux/Java environment and then having them run through a gauntlet to make it work on every machine? Instead, based upon a system and some policies I've given them, they could attach the operating system and configure all of this independent of the production environment. Then, at run-time, these things get deployed and are productive in a matter of minutes, instead of hours, days, or months."

That way, they'd feel comfortable giving the capacity back when they're not using it, because they know they can get the application back up and running, configured the way it should be, very quickly and in a very scalable, very elastic way.

That elasticity benefit has been overlooked to date, but it's a benefit that's going to become very important as people do exactly what you just described, which is become sensitive to the notion that a VM idling out there and consuming space is just as bad as a physical machine idling out there and consuming space.

Gardner: I certainly appreciate the problem, the solution set, and the opportunity for significant savings and agility. That is to say, you can move your applications and get them up fast, but in the long term you'll also be able to cut your overall costs, because of the utilization and because the elasticity lets you match supply and demand as closely as possible. The question then is how to get started. How do you move to take advantage of this? Tell us a little bit more about the role rPath plays in facilitating that.

Marshall: The first thing to do, Dana, is to profile your applications and determine which ones have sort of lumpy demand, because you don't want to work on something that needs to be available all the time and has pretty even demand. Let's go for something that really has lumpy demand, so that we can do the scale-up and give back and get some real value out of it.

So, the first thing to do is an inventory of your applications: "What do I have out here that has lumpy demand?" Pick a couple of candidates. Realistically, it's going to be hard to do this without running Linux, so it needs to be a workload that will run on Linux, whether you have run it on Linux historically or not. Probably, it needs to be something written in Java, C, C++, Python, Perl, or Ruby -- something you can move to a Linux platform -- and something that has lumpy demand.
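One crude way to quantify "lumpy" demand during that inventory is a peak-to-average ratio over utilization samples. A sketch with hypothetical readings follows; real profiling would use monitoring data and consider peak overlap, not just this ratio:

```python
def peak_to_average(samples):
    """Peak-to-average ratio of utilization samples (e.g., hourly CPU
    readings). Steady workloads score near 1.0; bursty ("lumpy")
    workloads score much higher and benefit most from elastic,
    on-demand capacity."""
    return max(samples) / (sum(samples) / len(samples))

if __name__ == "__main__":
    steady = [0.50, 0.55, 0.45, 0.50]  # even demand: keep it in-house
    lumpy = [0.05, 0.05, 0.90, 0.05]   # e.g., a month-end batch spike
    print(round(peak_to_average(steady), 2))  # -> 1.1
    print(round(peak_to_average(lumpy), 2))   # -> 3.43
```

A workload like the monthly log-crunching example scores high on this measure, which is exactly why it is a good first candidate for the cloud.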

The first step we get involved in is packaging that application so that it's optimized to be a VM and to run in a VM. One of rPath's values here is that the operating system becomes optimized to the application, and the footprint of the operating system, and therefore its management burden, shrinks by about 90 percent.

When you bind an operating system to an application, you're able to eliminate anything that is not relevant to that application. Typically, we see a surface area shrinking to about 10 percent of what is typically deployed as a standard operating system. So, the first thing is to package the application in a way that is optimized to run in a VM. We offer a product called rBuilder that enables just that functionality.

The second step is to determine whether you're going to run this internally, on virtualized infrastructure you've made available through VMware, Xen, or even Microsoft Hyper-V for that matter, or whether you're going to use an external provider.

We suggest that when you get started with this, you begin experimenting with an external provider as soon as possible. The reason is so that you don't put in place a bunch of crutches that are only relevant to your environment and that will prevent the application from ever going external. You can never drop the crutches associated with your own hand-holding processes that can only happen inside your organization.

We strongly suggest that one of the first things you do, as you do this proof of concept, is actually do it on Amazon or another provider that offers a virtualized infrastructure. Use an external provider, so that you can prove to yourself that you can define an application and have it be ready to run on an infrastructure that you don't control, because that means that you defined the application truly independent of the infrastructure.

Gardner: And, that puts you in a position where eventually you could run that application on your local cloud or virtualized environment and then, for those lumpy periods when you need that exterior scale and capacity, you might just look to that cloud provider to support that application in that fashion.

Marshall: That's exactly right, whereas, if you prove all this out internally only, you may come across a huge "oops" that you didn't even think about as you try to move it externally. You may find that you've driven yourself into an architectural box canyon that you just can't get out of.

So, we strongly suggest to folks that you experiment with this proof of concept, using an external, and then bring it back internally and prove that you can run it internally, after you've proven that you can run it externally.

Gardner: Your capital costs for that are pretty meager or nothing, and then your operating costs will benefit in the long run, because you will have those hybrid options.

Marshall: Another benefit of starting external for one of these things is that the cost at the margin is so cheap. It's between 10 and 50 cents per CPU hour to set up the Amazon environment and run it, and if you run it for an hour, you pay the 10 cents. It's not like you have to commit to some pre-buy or some amount of infrastructure. It's truly on demand: what you pay for is what you actually use. So, there's no reason from a cost perspective not to look at running your first instance of an on-demand, virtualized application externally.
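Marshall's cost argument is easy to check with back-of-the-envelope arithmetic. The per-CPU-hour rates are the ones he quotes; the workload sizes below are invented for illustration.

```python
def on_demand_cost(cpu_hours, rate_per_cpu_hour):
    """Pay-as-you-go: cost is strictly proportional to what you use."""
    return cpu_hours * rate_per_cpu_hour

# One CPU for a one-hour experiment at the low-end rate: a dime.
print(on_demand_cost(1, 0.10))

# Even a week-long, 4-CPU proof of concept stays modest.
week_hours = 4 * 24 * 7    # 672 CPU hours
print(round(on_demand_cost(week_hours, 0.10), 2))
```

There is no fixed term in the formula at all, which is the whole point: at zero usage the cost is zero, so the experiment carries essentially no capital risk.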

Gardner: And, if you do it in this fashion, you're able to have that portability. You can take it in, and you can put it out. You've built it for that and there is no hurdle you have to overcome for that portability.

Marshall: If you prove to yourself that you can do it, that you can run it in both places, you've architected correctly. There's a trap here. If you become dependent on something associated with a particular infrastructure set or a particular hypervisor, you preclude any use in the future of things that don't have that hypervisor involved.

Gardner: Another thing that people like about the idea of virtualizing applications is that you get a single image of the application. You can patch it, manage it, and upgrade it, and that's done once; it doesn't have to be delivered out to a myriad of machines, with configuration issues and so forth. Is that the case in this hybrid environment as well? Can you have this single image for the amount of capacity you need locally, and then draw that extra capacity at peak times from an external cloud?

Marshall: I think you've got to be careful here, because I don't believe that one approach is going to work in every case. I'll give you an example. I was meeting with a different financial services firm who said, "Look, for our biggest application, we've got -- I think it was 1,500 or 2,000 -- instances of that application running." And he said, "I'm not going to flood the network with 1,500 new machines when I have to make changes to that. We are going to upgrade those VMs in place."

We're going to have each one of them access some sort of lifecycle management capability. That's another benefit we provide, and we provide it in two ways. One, we've got a very elegant system for delivering maintenance and updates to a running system. And two, since you've only got 10 percent of the operating system there, you're patching one-tenth as often, because the operating system is typically the catalyst for most of the patching associated with security issues and other things.

I think there are going to be two things happening here. People are going to maintain these releases of applications as VMs, which you may want to think of as a repository of available application VMs that are in a known good state, and that are up-to-date and things like that.

In some cases, whenever new demand needs to come online, that known good state will be deployed, and it won't need patching after deployment. But at the same time, there will be deployed units already running that they will want to patch, and they need to be able to do that without having to dump the data, back up the data, kill the image, bring a new image up, and then reload the data.

In many cases, you're going to want to see these folks actually be able to patch in place as well. The beauty of it is, you don't have to choose. They can be both. It doesn't have to be one or the other.
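The two maintenance paths Marshall describes -- deploying fresh capacity from a known-good image versus patching a running instance in place -- can be sketched side by side. The repository and version mechanics below are invented for illustration, not rPath's actual lifecycle system.

```python
class ImageRepository:
    """Repository of application-VM releases, each in a known good state."""

    def __init__(self):
        self.versions = []          # ordered list of released versions

    def release(self, version):
        self.versions.append(version)

    def latest(self):
        return self.versions[-1]

class RunningInstance:
    def __init__(self, version):
        self.version = version

    def patch_in_place(self, repo):
        """Apply updates to a live system -- no dump/kill/reload cycle."""
        self.version = repo.latest()

repo = ImageRepository()
repo.release("1.0")
repo.release("1.1")

# Path one: new demand deploys the known good state directly,
# so it needs no patching after deployment.
fresh = RunningInstance(repo.latest())

# Path two: an instance deployed earlier is brought up to date in place.
old = RunningInstance("1.0")
old.patch_in_place(repo)

print(fresh.version, old.version)   # both end up at 1.1
```

As Marshall says, the beauty is that you don't have to choose: both paths converge on the same known good state.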

Gardner: So that brings us back to the notion of good management, policies, governance, and automation, because of this lifecycle. It's not simply a matter of putting that application up, and getting some productivity from utilization, but it's considering this entire sunrise-to-sunset approach as well.

Marshall: Right, and that also involves having the ability to do some high-quality, on-demand scaling -- to be able to call an API to add a new system, and to do that elegantly, without someone having to log into the system and thrash around configuring it to make it aware of the environment it's supposed to be supporting.

There are quite a few considerations here. When you're defining applications as VMs, and defining them independent of where they run, you can't use any crutches associated with your internal infrastructure if you want to be able to elastically scale up and scale back.

There are some interesting new problems that come up here that are also new opportunities to do things better. You need to architect in a way that is, A, optimized for virtualization -- in other words, if you're going to make it easy to get extra machines, you'd better make those machines easy to manage, and manageable on the hypervisor they're running on. And, B, you need a way to add capacity elegantly, without folks logging in and doing a lot of manual work to scale these things up.
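That elastic-scaling requirement -- grow or shrink the pool through API calls alone, with each new machine configuring itself on boot -- might be sketched like this. The scaling policy, the capacity numbers, and the `launch`/`terminate` callables are all invented for illustration.

```python
def desired_capacity(current_load, capacity_per_instance):
    """How many instances are needed to serve current_load?"""
    # Ceiling division without floats; always keep at least one instance.
    return max(1, -(-current_load // capacity_per_instance))

def scale(instances, current_load, capacity_per_instance, launch, terminate):
    """Grow or shrink the pool to match demand, via API calls only."""
    target = desired_capacity(current_load, capacity_per_instance)
    while len(instances) < target:
        instances.append(launch())      # instance self-configures on boot
    while len(instances) > target:
        terminate(instances.pop())
    return instances

pool = []
pool = scale(pool, current_load=450, capacity_per_instance=100,
             launch=lambda: "vm", terminate=lambda vm: None)
print(len(pool))    # 450 units of load at 100 per instance -> 5 instances
```

Nothing in the loop requires a human to log in; if an instance needed hand configuration after `launch()`, that would be exactly the kind of crutch that blocks elastic scale.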

Gardner: And then, in order to adopt a path to cloud benefits, you just start thinking about the steps across virtualization, thinking a bit more holistically about the virtualized environment and applications as being one and the same. The level of experimentation gives you the benefits, and ultimately you'll be building a real fabric and a governed best methods approach to cloud computing.

Marshall: The real opportunity here is to separate the application-virtualization approach from the actual virtualization technology to avoid the lock-in, the lack of choice, and the lack of the elasticity that cloud computing promises. If you do it right, and if you think about application virtualization as an approach that frees your application from the infrastructure, there is a ton of benefit in terms of dynamic business capability that is going to be available to your organization.

Gardner: Well, great. I just want to make sure that we covered that entire stepping process into adoption and use. Did we leave anything out?

Marshall: What we didn't talk about was what should be possible at the end of the day.

Gardner: What's that gold ring out there that you want to be chasing after?

Marshall: Nirvana would look like something we call a "hyper cloud" concept, where you are actually sourcing demand by the day or hour, based upon service-level, performance, and security experience. Some sort of intelligent system analyzes the state of your applications and the demand for them, and autonomically acquires capacity and puts that capacity in place for your applications across multiple different providers.

Again, it's based upon the set of experiences you've cataloged: What's the security profile these providers offer? What's the performance profile they provide? And what's the price profile they provide?

Ultimately, you should have a handful of providers out there that you are sourcing your applications against and sourcing them day-by-day, based upon the needs of your organization and the evolving capabilities of these providers. And, that's going to be a while.
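Stripped to its essentials, the hyper cloud idea reduces to a weighted scoring problem over the profiles you've cataloged: each day, source capacity from whichever provider scores best. The providers, scores, and weights below are entirely invented for illustration.

```python
def score(profile, weights):
    """Weighted score over cataloged security/performance/price profiles."""
    return sum(profile[k] * weights[k] for k in weights)

# Hypothetical catalog built up from experience with each provider
# (all values on a 0-to-1 scale, higher is better; 'price' = value for money).
providers = {
    "cloud-a": {"security": 0.9, "performance": 0.7, "price": 0.6},
    "cloud-b": {"security": 0.7, "performance": 0.9, "price": 0.9},
}

# How much this application cares about each dimension.
weights = {"security": 0.5, "performance": 0.3, "price": 0.2}

best = max(providers, key=lambda p: score(providers[p], weights))
print(best)    # the provider to source today's capacity from
```

The interesting part isn't the arithmetic; it's that the choice can be revisited day by day as the catalog evolves, which only works if the application was never chained to any one provider in the first place.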

In the near term, people will choose one or two cloud providers and develop a rapport and a comfort level with them. If they do this right, over time they will be able to get the best price and the best performance, because they will never be in a situation where they can't bring it back and put it somewhere else. That's what we call the hyper cloud approach. It's a ways off, and it's going to take some time, but I think it's possible.

Gardner: The nice thing about it is that your business outcomes are your start and your finish point. In many cases today, your business outcomes are, in some ways, hostage to whatever the platform and IT requirements are, and that's become a problem.

Marshall: Right. It can be.

Gardner: Well, terrific. We've been talking about cloud computing and proper on-ramps to approach and use clouds, and also how enterprises can best prepare to bring their applications into a virtual development and deployment environment.

We've been joined by Billy Marshall, a founder and chief strategy officer at rPath. I certainly appreciate your time, Billy.

Marshall: Dana, it's been a pleasure, thanks for the conversation.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect podcast. Thanks, and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: rPath.

Transcript of BriefingsDirect podcast on virtualized applications development and deployment strategies as on-ramp to cloud computing. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.