Transcript of a BriefingsDirect podcast on the need to right-size and fine-tune applications for maximum benefits of cloud computing.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Download the transcript. Sponsor: Hewlett-Packard.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.
Today, we present a sponsored podcast discussion on the economic benefits of cloud computing -- of how to use cloud-computing models and methods to control IT cost by better supporting application workloads.
Traditional capacity planning is not enough in cloud-computing environments. Elasticity planning is what’s needed. It’s a natural evolution of capacity planning, but it’s in the cloud.
We'll look at how to best right-size applications, while matching service delivery resources and demands intelligently, repeatedly, and dynamically. The movement to a pay-per-use model also goes a long way toward matching resources to demand, and reduces wasteful application practices.
We'll also examine how quality control for these applications during development reduces the total cost of supporting applications, while allowing for tuning and appropriate management of applications in the operational cloud scenario.
To unpack how Cloud Assure services can take the mystique out of cloud computing economics and to lay the foundation for cost control through proper cloud methods, we're joined by Neil Ashizawa, manager of HP's Software-as-a-Service (SaaS) Products and Cloud Solutions. Welcome to BriefingsDirect, Neil.
Neil Ashizawa: Thanks very much, Dana.
Gardner: As we've been looking at cloud computing over the past several years, there is a long transition taking place of moving from traditional IT and architectural methods to this notion of cloud -- be it a private cloud, a third-party location, or some combination of the above.
Traditional capacity planning therefore needs to be refactored and reexamined. Tell me, if you could, Neil, why capacity planning, as people currently understand it, isn’t going to work in a cloud environment?
Ashizawa: Old-fashioned capacity planning would focus on the peak usage of the application, and it had to, because when you were deploying applications in house, you had to take into consideration that peak usage case. At the end of the day, you had to be provisioned correctly with respect to compute power. Oftentimes, with long procurement cycles, you'd have to plan for that.
In the cloud, because you have this idea of elasticity, where you can scale up your compute resources when you need them, and scale them back down, obviously that adds another dimension to old-school capacity planning.
The new way to look at it within the cloud is elasticity planning. You have to factor in not only your peak usage case, but your moderate usage case and your low usage case as well. At the end of the day, if you're going to get the biggest benefit of the cloud, you need to understand how you're going to be provisioned during the various demands of your application.
Gardner: So, this isn’t just a matter of spinning up an application and making sure that it can reach a peak load of some sort. We have a new kind of problem, which is how to be efficient across any number of different load requirements?
Ashizawa: That’s exactly right. If you were to take, for instance, the old-school capacity-planning ideology to the cloud, what you would do is provision for your peak use case. You would scale up your elasticity in the cloud and just keep it there. If you do it that way, then you're negating one of the big benefits of the cloud. That's this idea of elasticity and paying for only what you need at that moment.
If I'm at a slow period of my application's usage, then I don’t want to be over-provisioned for my peak usage. One of the main factors why people consider sourcing to the cloud is this elastic capability to spin up compute resources when usage is high and scale them back down when usage is low. You don’t want to negate that benefit of the cloud by keeping your resource footprint at its highest level.
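The arithmetic behind that point can be put in a short sketch. This is a back-of-the-envelope Python example; the demand curve, instance counts, and hourly rate are illustrative assumptions, not figures from the discussion.

```python
# Compare old-school peak provisioning against elastic provisioning
# over one day's demand curve. All numbers are hypothetical.

PRICE_PER_INSTANCE_HOUR = 0.50  # assumed hourly rate per instance

# Assumed hourly demand: instances actually needed in each of 24 hours.
demand = [2, 2, 2, 2, 3, 4, 6, 9, 12, 12, 10, 9,
          8, 8, 9, 10, 12, 12, 10, 7, 5, 4, 3, 2]

def peak_provisioned_cost(demand, price):
    """Old-school capacity planning: provision for peak, all day long."""
    return max(demand) * len(demand) * price

def elastic_cost(demand, price):
    """Elasticity planning: pay only for what each hour actually needs."""
    return sum(demand) * price

peak = peak_provisioned_cost(demand, PRICE_PER_INSTANCE_HOUR)
elastic = elastic_cost(demand, PRICE_PER_INSTANCE_HOUR)
print(f"peak-provisioned: ${peak:.2f}, elastic: ${elastic:.2f}")
```

With this assumed curve, keeping the footprint at its peak level all day costs $144.00, while scaling with demand costs $81.50 -- the gap is exactly the benefit that pinning resources at the peak negates.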
Gardner: I suppose also the holy grail of this cloud-computing vision that we've all been working on lately is the idea of being able to spin up those required instances of an application, not necessarily in your private cloud, but in any number of third-party clouds, when the requirements dictate that.
Ashizawa: That’s correct.
Gardner: Now, we call that hybrid computing. Is what you are working on now something that’s ready for hybrid or are you mostly focused on private-cloud implementation at this point?
Ashizawa: What we're bringing to the market works in all three cases. Whether you're running a private internal cloud, doing a hybrid model between private and public, or sourcing completely to a public cloud, it will work in all three situations.
Gardner: HP announced, back in the spring of 2009, a Cloud Assure package that focused on things like security, availability, and performance. I suppose now, because of the economy and the need for people to reduce cost, look at the big picture about their architectures, workloads, and resources, and think about energy and carbon footprints, we've now taken this a step further.
Perhaps you could explain the December 2009 announcement that HP has for the next generation or next movement in this Cloud Assure solution set.
Making the road smoother
Ashizawa: The idea behind Cloud Assure, in general, is that we want to assist enterprises in their migration to the cloud and we want to make the road smoother for them.
Just as you said, when we first launched Cloud Assure earlier this year, we focused on the top three inhibitors, which were security of applications in the cloud, performance of applications in the cloud, and availability of applications in the cloud. We wanted to provide assurance to enterprises that their applications will be secure, they will perform, and they will be available when they are running in the cloud.
The new enhancement that we're announcing now is assurance for cost control in the cloud. Oftentimes enterprises do make that step to the cloud, and a big reason is that they want to reap the benefits of the cost promise of the cloud, which is to lower cost. The thing here, though, is that you might fall into a situation where you negate that benefit.
If you deploy an application in the cloud and you find that it’s underperforming, the natural reaction is to spin up more compute resources. It’s a very good reaction, because one of the benefits of the cloud is this ability to spin up or spin down resources very fast. So no more procurement cycles, just do it and in minutes you have more compute resources.
The situation, though, that you may find yourself in is that you may have spun up more resources to try to improve performance, but it might not improve performance. I'll give you a couple of examples.
If your application is experiencing performance problems because of inefficient Java methods, for example, or slow SQL statements, then more compute resources aren't going to make your application run faster. But, because the cloud allows you to do so very easily, your natural instinct may be to spin up more compute resources to make your application run faster.
When you do that, you find yourself in a situation where your application is no longer right-sized in the cloud, because you have over-provisioned your compute resources. You're paying for more compute resources and you're not getting any return on your investment. When you start paying for more resources without return on your investment, you start to disrupt the whole cost benefit of the cloud.
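A toy model makes the trap concrete. In this hypothetical sketch, the CPU-bound part of a request spreads across instances, but a slow SQL statement against the shared database does not; the timings are illustrative assumptions, not measurements.

```python
# Hypothetical model of why spinning up instances can't fix a
# code-level bottleneck such as a slow SQL statement.

def response_time(cpu_bound_s, slow_sql_s, instances):
    """CPU-bound work divides across instances; the serial slow SQL
    statement against the shared database stays the same."""
    return cpu_bound_s / instances + slow_sql_s

# An app whose latency is dominated by an assumed 2.0 s SQL statement:
for n in (1, 2, 4, 8):
    t = response_time(0.4, 2.0, n)
    print(f"{n} instance(s) -> {t:.2f} s per request")
```

Eight times the spend shaves the request from roughly 2.40 s to 2.05 s: the bill scales, the performance barely moves, because the bottleneck is in the code, not the compute.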
Gardner: I think we need to have more insight into the nature of the application, rather than simply throwing additional instances of the application at the problem. Is that it at a very simple level?
Ashizawa: That’s it at a very simple level. Just to make it even simpler, applications need to be tuned so that they are right-sized. Once they are tuned and right-sized, then, when you spin up resources, you know you're getting return on your investment, and it’s the right thing to do.
Gardner: Can we do this tuning with existing applications -- you mentioned Java apps, for example -- or is this something for greenfield applications that we are creating newly for these cloud scenarios?
Java and .NET
Ashizawa: Our enhancement to Cloud Assure, which is Cloud Assure for cost control, focuses more on the Java and the .NET type applications.
Gardner: And those would be existing applications or newer ones?
Ashizawa: Either. Whether you have existing applications that you are migrating to the cloud, or new applications that you are deploying in the cloud, Cloud Assure for cost control will work in both instances.
Gardner: Is this new offering software, services, or both? Maybe you could describe exactly what it is that you are coming to market with.
Ashizawa: The Cloud Assure for cost control solution comprises both HP Software and HP Services provided by HP SaaS. The software itself consists of three products that make up the overall solution.
The first one is our industry-leading Performance Center software, which allows you to drive load in an elastic manner. You can scale up the load to very high demands and scale back load to very low demand, and this is where you get your elasticity planning framework.
The second piece, from a software perspective, is HP SiteScope, which allows you to monitor the resource consumption of your application in the cloud. Therefore, you understand when compute resources are spiking or when you have more capacity to drive even more load.
The third software portion is HP Diagnostics, which allows you to measure the performance of your code. You can measure how your methods are performing, how your SQL statements are performing, and if you have memory leakage.
When you have this visibility into end-user measurement at various load levels with Performance Center, resource consumption with SiteScope, and code-level performance with HP Diagnostics, and you integrate them all into one console, you allow yourself to do true elasticity planning. You can tune your application and right-size it. Once you've right-sized it, you know that when you scale up your resources you're getting return on your investment.
All of this is backed by services that HP SaaS provides. We can perform load testing. We can set up the monitoring. We can do the code level performance diagnostics, integrate that all into one console, and help customers right-size the applications in the cloud.
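The kind of decision that integrated view supports can be sketched in a few lines. This is a hypothetical illustration of correlating the three measurements, not the actual HP product interfaces; the field names, SLA, and thresholds are all assumptions.

```python
# For each tested load level, line up end-user response time (load
# testing), resource consumption (monitoring), and code-level time
# (diagnostics), then suggest whether to scale out or tune the code.
# Thresholds and field names are illustrative assumptions.

def recommend(sample, sla_s=2.0, cpu_ceiling=0.85, code_share=0.5):
    if sample["resp_s"] <= sla_s:
        return "right-sized"
    # Slow, and most of the time is spent in methods/SQL: more
    # instances won't help, so tune the code first.
    if sample["code_s"] / sample["resp_s"] >= code_share:
        return "tune code"
    # Slow because the machines are saturated: scaling will pay off.
    if sample["cpu"] >= cpu_ceiling:
        return "scale out"
    return "investigate"

samples = [
    {"load": 500,  "resp_s": 1.2, "cpu": 0.45, "code_s": 0.3},
    {"load": 2000, "resp_s": 3.8, "cpu": 0.50, "code_s": 3.1},
    {"load": 5000, "resp_s": 4.5, "cpu": 0.95, "code_s": 0.9},
]
for s in samples:
    print(s["load"], "virtual users ->", recommend(s))
```

The point of putting all three measurements in one console is exactly this: the same slow response time leads to opposite actions depending on whether the diagnostics or the resource monitor explains it.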
Gardner: That sounds interesting, and, of course, harkens back to the days of distributed computing. We're just adding another level of complexity, that is to say, a sourcing continuum of some sort that needs to be managed as well. It seems to me that you need to start thinking about managing that complexity fairly early in this movement to cloud.
Ashizawa: Definitely. If you're thinking about sourcing to the cloud and adopting it, from a very strategic standpoint, it would do you good to do your elasticity planning before you go into production or you go live.
Tuning the application
The nice thing about Cloud Assure for cost control is that, if you run into performance issues after you have gone live, you can still use the service. You could come in and we could help you right-size your application and help you tune it. Then, you can start getting the global scale you wish at the right cost.
Gardner: One of the other interesting aspects of cloud is that it affects both design time and runtime. Where does something like the Cloud Assure for cost control kick in? Is it something that developers should be doing? Is it something you would do before you go into production, or if you are moving from traditional production into cloud production, or maybe all the above?
Ashizawa: All of the above. HP definitely recommends our best practice, which is to do all your elasticity planning before you go into production, whether it’s a net new application that you are rolling out in the cloud or a legacy application that you are transferring to the cloud.
Given the elastic nature of the cloud, we recommend that you get out ahead of it, do your proper elasticity planning, tune your system, and right-size it. Then, you'll get the most optimized cost and predictable cost, so that you can budget for it.
Gardner: It also strikes me, Neil, that we're looking at producing a very interesting and efficient feedback loop here. When we go into cloud instances, where we are firing up dynamic instances of support and workloads for application, we can use something like Cloud Assure to identify any shortcomings in the application.
We can take that back and use that as we do a refresh in that application, as we do more code work, or even go into a new version or some sort. Are we creating a virtual feedback loop by going into something like Cloud Assure?
Ashizawa: I can definitely see that being the case. I'm sure that there are many situations where we might be able to find something inefficient within the code-level layer or within the database SQL statement layer. We can point out problems that may not have surfaced in an on-premise type deployment, where you go to the cloud, do your elasticity planning, and right-size. We can uncover some problems that may not have been addressed earlier, and then you can create this feedback loop.
One of the side benefits, obviously, to right-sizing applications and controlling cost is to mitigate risk. Once you have planned elasticity correctly and once you have right-sized correctly, you can deploy with a lot more confidence that your application will scale to handle global-class demand and support your business.
Gardner: Very interesting. Because this is focused on economics and cost control, do we have any examples of where this has been put into practice, where we can examine the types of returns? If you do this properly, if you have elasticity controls, if you are doing planning, and you get across this life cycle, and perhaps even some feedback loops, what sort of efficiencies are we talking about? What sort of cost reductions are possible?
Ashizawa: We've been working with one of our SaaS customers, who is doing more of a private-cloud type implementation. What makes this what I consider a private cloud is that they are testing various resource footprints, depending on the load level.
They're benchmarking their application at various resource footprints. For moderate levels, they have a certain footprint in mind, and then for their peak usage, during the holiday season, they have an expanded footprint in mind. The idea here is that they want to make sure they are provisioned correctly, so that they are optimizing their cost correctly, even in their private cloud.
Moderate and peak usage
We have used our elastic testing framework, driven by Performance Center, to do both moderate levels and peak usage. When I say peak usage, I mean thousands and thousands of virtual users. What we allow them to do is that true elasticity planning.
They've been able to accomplish a couple of things. One, they understand what benchmarks and resource footprints they should be using in their private cloud. They know that they are provisioned perfectly at various load levels. They know that, because of that, they're getting all of the cost benefits of their private cloud. At the end of the day, they're mitigating their business risk by ensuring that their application is going to scale to global-class demand to support their holiday season.
Gardner: And, they're going to be able to scale, if they use cloud computing, without necessarily having to roll out more servers with a forklift. They could find the fabric either internally or with partners, which, of course, has a great deal of interest from the bean counter side of things.
Ashizawa: Exactly. Now, we're starting to relay this message and target customers that have deployed applications in the public cloud, because we feel that the public cloud is where you may fall into that trap of spinning up more resources when performance problems occur, where you might not get the return on your investment.
So as more enterprises migrate to the cloud and start sourcing there, we feel that this elasticity planning with Cloud Assure for cost control is the right way to go.
Gardner: Also, if we're billing people either internally or through these third-parties on a per-use basis, we probably want to encourage them to have a robust application, because to spin up more instances of that application is going to cost us directly. So, there is also a built-in incentive in the pay-per-use model toward these more tuned, optimized, and planned-for cloud types of application.
Ashizawa: You said it better than I could have ever said it. You used the term pay-per-use, and it’s all about the utility-based pricing that the cloud offers. That’s exactly why this is so important, because whenever it’s utility based or pay-per-use, then that introduces this whole notion of variable cost. It’s obviously going to be variable, because what you are using is going to differ between different workloads.
So, you want to get a grasp of the variable-cost nature of the cloud, and you want to make this variable cost very predictable. Once it’s predictable, then there will be no surprises. You can budget for it and you could also ensure that you are getting the right performance at the right price.
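Making that variable cost predictable is, in the end, a weighting exercise. The sketch below is a hypothetical budgeting example: the footprints, hourly rate, and usage mix are assumptions for illustration, not figures from the discussion.

```python
# Make variable, pay-per-use cost predictable: weight the hourly cost
# of each provisioned footprint by the fraction of the month you
# expect to spend at that load level. All numbers are hypothetical.

HOURS_PER_MONTH = 720

# (fraction of month at this level, instances, $/instance-hour)
usage_mix = [
    (0.60, 2,  0.50),   # low usage
    (0.30, 6,  0.50),   # moderate usage
    (0.10, 12, 0.50),   # peak usage
]

def expected_monthly_cost(mix, hours=HOURS_PER_MONTH):
    return sum(frac * hours * instances * rate
               for frac, instances, rate in mix)

budget = expected_monthly_cost(usage_mix)
print(f"predicted monthly spend: ${budget:.2f}")
```

Once the application is right-sized at each load level, a mix like this turns the cloud's variable cost into a number you can budget for, with no surprises.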
Gardner: Neil, is this something that’s going to be generally available in some future time, or is this available right now at the end of 2009?
Ashizawa: It is available right now.
Gardner: If people were interested in pursuing this concept of elasticity planning, of pursuing Cloud Assure for cost benefits, is this something that you can steer them to, even if they are not quite ready to jump into the cloud?
Ashizawa: Yes. If you would like more information on Cloud Assure for cost control, there is a URL that you can go to. Not only can you get more information on the overall solution, but you can speak to someone who can answer any questions you may have.
Gardner: Let's look to the future a bit before we close up. We've looked at cloud assurance issues around security, performance, and availability. Now, we're looking at cost control and elasticity planning, getting the best bang for the buck, not just by converting an old app, sort of repaving an old cow path, if you will, but thinking about this differently, in the cloud context, architecturally different.
What comes next? Is there another shoe to fall in terms of how people can expect to have HP guide them into this cloud vision?
Ashizawa: It’s a great question. Our whole idea here at HP and HP Software-as-a-Service is that we're trying to pave the way to the cloud and make it a smoother ride for enterprises that are trying to go to the cloud.
So, we're always tackling the main inhibitors and the main obstacles that make it more difficult to adopt the cloud. And, yes, where once we were tackling security, performance, and availability, we definitely saw that this idea for cost control was needed. We'll continue to go out there and do research, speak to customers, understand what their other challenges are, and build solutions to address all of those obstacles and challenges.
Gardner: Great. We've been talking about moving from traditional capacity planning towards elasticity planning, and a series of announcements from HP around quality and cost controls for cloud assurance and moving to cloud models.
To better understand these benefits, we've been talking with Neil Ashizawa, manager of HP's SaaS Products and Cloud Solutions. Thanks so much, Neil.
Ashizawa: Thank you very much.
Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.
Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.