Showing posts with label Cloud Assure. Show all posts

Thursday, June 10, 2010

Shoemaker on How HP CSA Aids Total Visibility into Services Management Lifecycle for Cloud Computing

Transcript of a BriefingsDirect podcast on overcoming higher levels of complexity in cloud computing through improved management and automation.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, we present a sponsored podcast discussion on gaining total visibility into the IT services management lifecycle.

As cloud computing in its many forms gains traction, higher levels of management complexity are inevitable for large enterprises, managed service providers (MSPs), and small-to-medium sized businesses (SMBs). Gaining and keeping control becomes even more critical for all these organizations, as applications are virtualized and as services and data sourcing options proliferate, both inside and outside of enterprise boundaries.

More than just retaining visibility, however, IT departments and business leaders need the means to fine-tune and govern services use, business processes, and the participants accessing them across the entire services lifecycle. The problem is how to move beyond traditional manual management methods, while being inclusive of legacy systems to automate, standardize, and control the way services are used.

We're here with an executive from HP to examine an expanding set of Cloud Service Automation (CSA) products, services, and methods to help enterprises exploit cloud and services values, while reducing risks and working toward total management of all systems and services.

Please join me now in welcoming Mark Shoemaker, Executive Program Manager, BTO Software for Cloud at HP. Welcome to BriefingsDirect, Mark.

Mark Shoemaker: Hi, Dana. How are you today? I'm really excited about being able to join you.

Gardner: Mark, tell me how we got here. How did complexity become something now spanning servers, virtualization, cloud, and sourcing options? It seems like we’ve been on a long journey and we haven’t necessarily kept up.

Shoemaker: It’s simple. Up until a few years ago, everything in the data center and infrastructure had a physical home, for the most part. Then, virtualization came along. We still have all the physical elements, but now we also have a virtual and cloud layer that requires the same level of diligence in management and monitoring, yet it moves around.

Where we're used to having things connected to physical switches, servers, and storage, those things are actually virtualized and moved into the cloud or virtualization layer, which makes the services more critical to manage and monitor.

Gardner: How are clouds different? Do you need to manage them in an entirely different way, or is there a way to do both -- manage both the cloud and your legacy systems?

All the physical things

Shoemaker: Enterprises have to do both. Cloud doesn’t get rid of all the physical things that still sit in data centers and are plugged in and run. It actually runs on top of that. It actually adds a layer, and companies want to be able to manage the public and private side of that, as well as the physical and virtual. It just improves productivity and gets better utilization out of the whole infrastructure footprint.

Gardner: And what is it about moving toward automation, perhaps using standards increasingly, that becomes more critical than ever?

Shoemaker: Well, it’s funny. A lot of IT people will tell you we’ve always been talking about standards. It’s always been about standards, but they've not always had the choice.

A lot of times, the business definition of what it took to be successful and what business applications they needed to run that, dictated a lot of the infrastructure that sits in our data centers today. With cloud computing -- and the automation and virtualization that goes along with that -- standardization is key.

You can’t automate a repetitive task if it’s changing all the time. The good thing about cloud and virtualization is that they're absolutely driving standards, and IT is going to benefit from that. The challenge is that now it's more fluid, and we’ve got to do a better job than ever of managing, monitoring, and keeping up.

Gardner: What is it about the human management, the sort of manual approach, that doesn’t scale in this regard?

Shoemaker: IT has been under the gun for a few years now. I don’t know many IT shops that have added people and resources to keep up with the amount of technology they have deployed over the last few years. Now, we're making that more complex.

They aren't going to get more heads. There has to be a system to manage it. Plus, even the best people are at risk of making a mistake: in the middle of the night, when you're tired and you've been up a long time trying to get something done, it's easy to mistype on a keyboard, download the wrong file, or miss a message that you need to see.

Any time we can take the mundane and the routine up to let our high-value assets really focus on the business critical functions, that’s going to be a good thing. The businesses are going to be more productive, the people are going to be happier, and the services are going to run better.

Gardner: I suppose, too, that in the past organizations have had the opportunity to control what goes on inside their own walls, but as you start acquiring services, you don’t really have control over what’s going on behind those services. So, we need management that elevates to a higher abstraction.

Shoemaker: That’s a great point and that’s one of the things we’ve looked at as well. Certainly, there is no silver bullet for either one of these areas. We're looking at a more holistic and integrated approach in the way we manage. A lot of the things we're bringing to bear -- CSA, for example -- are built on years of expertise around managing infrastructures, because it’s the same task and functions.

Ensuring the service level

Then, we’ve expanded those as well to take into account the public cloud need of being a consumer of the service, but still being concerned with the service levels, and being able to point those same tools back into a public cloud to see what’s going on and make sure you're getting what you're paying for and what the business expects.

Gardner: You have a pretty good understanding of the problem set. What about the solution from a high level? How do you start managing to gain the full visibility and also be able to control to turn those dials and govern throughout this ecosystem?

Shoemaker: You’ve hit on my two favorite words. When we talk about management, it starts with visibility and control. You have to be able to see everything. Whether it’s physical or virtual or in a cloud, you have to be able to see it and, at some point, you have to be able to control its behavior to really benefit.

Once you marry that with standards and automation, you start reaping the benefits of what cloud and virtualization promise us. To get to the new levels of management, we’ve got to do a better job.

Gardner: We’ve looked at the scale of the problem. Let's look at the scale of the solution. This isn’t something that you can buy out of a box. Tell me what HP brings in terms of its breadth and scope that has a direct relationship to the scope and breadth of the solution itself.

Shoemaker: Again, there is no silver bullet here. There is no one application. It’s going to take you all the way from the planning phase, to development, to testing and load testing, to infrastructure as a service (IaaS). You start at the hardware and build up the management pieces and the platform that provide the underlying application that you develop on, and then you run and assure that service for whoever your consumer is.

Nobody does that. There’s not one product and there’s not going to be one product for any period of time. We'd love to get there and certainly we're going to do everything we can to make it easier.

The great thing about what HP brings to the table is that, in every one of those areas I mentioned, there is an industry-leading solution that we're integrating to give you control across the entire breadth of management you need to be successful in today’s new infrastructure, which is cloud and virtualization on top of physical.

Gardner: Back on May 11, HP had a fairly large set of news releases, including the delivery of some new products, as well as some vision around the CSA products and services. Perhaps you could give us a little bit of an idea of the philosophy behind CSA and how that fits into this larger set of announcements.

Listened to customers

Shoemaker: CSA is the product of several years of actually delivering cloud. Some of the largest cloud installations out there run on HP software right now. We listened to what our customers told us, took a hard look at the reference architecture we created over those years, which encompassed all the different elements you could bring to bear in a cloud, and started looking at how to bring that to market in a form that lets customers gain benefit from it more quickly.

We want to be able to come in, understand the need, plug in the solution, and get the customer up and running and managing the cloud or virtualization inside that cloud as quickly as possible, so they can focus on the business value of the application.

The great thing is that we’ve got the experience. We’ve got the expertise. We’ve got the portfolio. And, we’ve got the ability to manage all kinds of clouds, whether, as I said, it’s IaaS or platform as a service (PaaS) that your software is developed on, or even a hybrid solution, where you use a private cloud along with a public cloud that bursts out, if you don’t want to outlay capital to buy new hardware.

We have the ability, at this point, to tap into Amazon’s cloud and actually let you extend your data center to provide additional capacity and then pull it back in on a per-use basis, connected with the rest of your infrastructure that we manage today.

The other cloud that we are talking about is a combination of physical and virtual. Think about a solution that maybe didn’t fit well in a virtual or a cloud environment -- databases, for example, high IO databases. We would be able to bridge the physical and the virtual, because we manage, maintain, and build with the same tool sets on the physical and virtual side.

Gardner: I mentioned earlier that these are the same problems that large enterprises, managed service providers, and even SMBs looking toward outsourcing services are all facing. Is there low-hanging fruit here, a place to start across these different types of organizations, or maybe one specific to each? Where do you start applying management in this total sense?

Shoemaker: Again, it goes back to visibility and control. A lot of customers we talk to today are already engaged in a virtualization play, bringing virtualization into their data centers and putting it on top of the physical. They have a very large physical presence as well. Most of them are using a disparate set of tools to try to manage all those different silos of data.

The first thing is to gain that visibility and control by bringing in one solution that can help you manage all of your servers, network, and storage as one unit, whether physical or virtual. Then, move all of your day-to-day tasks into that system via automation to take the burden off of your IT teams.

Gardner: If both the service provider and the enterprise take this approach, through standards or standard methodologies and reference implementations, does that give us a whole greater than the sum of the parts when it comes to management?

Shoemaker: Yeah, I think so. Certainly, from a scale and utilization perspective, we definitely have more synergies if we're acting as one: the ability to move things around, to make sure all the standards are being upheld and that things are being built to those standards, and the assurance of being able to see compliance issues before they become problems.

Gardner: Okay, so should enterprises be asking their managed service providers (MSPs) about the management they are using?

Shoemaker: Absolutely. If you are looking at an MSP, that MSP should be able to give you the same visibility and control that you have internally.

Gardner: From the May 11 news, give us a little recap about what you came to the market with in CSA. Is this product and services or just products? How does the mix fit?

Best in class

Shoemaker: We announced CSA on May 11, and we're really excited about what it brings to our customers. What we are able to do is bring our best-in-class, industry-leading products together and build a solution that allows you to control, build, and manage a cloud.

We’ve taken the core elements. If you think about a cloud and all the different pieces, there is that engine in the middle, resource management, system management, and provisioning. All those things that make up the central pieces are what we're starting with in CSA.

Then, depending on what the customer needs, we bolt on everything around that. We can even use the customers’ investments in their own third-party applications, if necessary and if desired.

Gardner: Let’s look at some examples. I'm interested in understanding this concept of total management, the visibility to control across physical, virtual, and various cloud permutations. Give me an idea of how this physical to virtual scenario works and how different types of applications, maybe transactional and web services based ones, can benefit.

Shoemaker: As I mentioned before, one of the examples we use is a database, a high-IO database with a lot of reads and writes. That may not be best suited for a cloud or virtual environment, whereas the web-service front end and the middle layer may be fine.

Because we use the same management suite to manage the physical and the virtual, we are able to mesh those two systems into a singular system that’s managed as, and looks like, one system, but actually sits in both the physical and the virtual realms. The customer doesn’t have to bring all of the applications back into a physical element, losing the efficiencies that cloud offers for the pieces that don’t need it, just to satisfy the database need.

Gardner: Is there a second use case or environment in which this total management benefit also fits in?

Shoemaker: Let’s say it's an MSP customer in this case, or a customer that’s turning up new physical cloud elements. A VMware ESX server still has to be built on a physical server. With our solution, we are able to build that ESX server based on a pre-defined set of criteria, image it onto the physical hardware, and bring it into the environment, all with the same suite of tools. So, it goes back to singular visibility and that singular control point to manage your cloud and your physical.

Gardner: And is that important perhaps for regulatory or compliance issues?

Shoemaker: Absolutely. Physical and virtual are subject to the same regulatory compliance requirements. Virtual and cloud probably have a little more difficult time, just based on the shared environment that naturally occurs there. A lot of emphasis is being put on the security elements in cloud today. So, the compliance piece of what we offer actually reduces that risk for our customers.

Gardner: What about movement among deployment choices? As organizations experiment with cloud, perhaps they start moving development, and ultimately workloads, out to a third-party cloud. How do you manage that transition? I guess this is the hybrid cloud management problem.

As cloud takes off

Shoemaker: We talked a little bit earlier about some of the work we’ve done around some of the Cloud Assure products, where we can help expand cloud infrastructure into a public environment. We see that becoming more prevalent as cloud takes off.

Right now, a lot of people experiment with development and test, much like they did in the initial virtualization start-up period. We see that relationship becoming more of a broker relationship, where you may pick where your application runs in the public cloud: build it in-house in the private cloud and move it out into the public realm.

Think about this: A lot of countries have different regulatory controls, laws, and regulations around where data can be stored. If you're doing business in some European countries, they want you to have the actual service running inside the country, so the data stays in there.

In the past, they'd have to find an MSP in that country, build all the infrastructure, and manage everything that goes along with that, because, for the country of record, the data has to be there. Now, we have the ability to create that image in the cloud, push it to a cloud provider in that country, and have that application run entirely on premises inside the borders of the country, while still reporting back to the larger piece. This gets us around a hurdle that’s been a challenge with physical infrastructure.

Gardner: Let’s take a look to the future. As companies will be approaching cloud from a variety of perspectives, there are different vertical industries involved, and different geographies. It's kind of a mess, a stew of different approaches. What do you think is going to happen in the future? I think cost and competitive issues are going to drive companies to try to do this. They're going to hit this speed bump about management. Where do you see HP’s offerings going in order to help them address that?

Shoemaker: In a lot of cases, HP’s offerings are already there, at least in many aspects of the functionality. Certainly, we're working hard to make sure we integrate the solutions, so they act together more cohesively and provide more value to our customers from day one.

As the landscape changes, we're looking at how to change our applications as well. We’ve got a very large footprint in the software-as-a-service (SaaS) arena right now, where we actually provide a lot of our applications for management, monitoring, development, and test as SaaS. This will become more prevalent as public cloud takes off.

Also, we're looking at what’s going to be important next. What are going to be the technologies and the services that our customers are going to need to be successful in this new paradigm?

Gardner: Are there ways of getting started? Are there resources, places online that folks can go to for gearing up for that future?

Shoemaker: There's a robust cloud community out there today, but HP also has a robust practice around helping our customers plan for those exact things. Our Services group provides workshops, learning engagements, and even planning and execution help for a lot of our largest customers today that are planning and positioning for tomorrow. So, we have that expertise and we're actually actively supporting our customers today.

Gardner: We’ve been talking about gaining total visibility into services management lifecycle. We're looking at this through the movement from virtualized to services and sourcing options. We’ve been talking with an HP executive about Cloud Service Automation products and services and how, in the future, total governance is going to become more the norm and more a necessity, as organizations try to avail themselves of more cloud and IT shared services opportunities.

I want to thank Mark Shoemaker, Executive Program Manager, BTO Software for Cloud at HP. Thanks for joining, Mark.

Shoemaker: Thanks so much, Dana. I appreciate you having us on.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: HP.

Transcript of a BriefingsDirect podcast on overcoming higher levels of complexity in cloud computing through improved management and automation. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


Tuesday, April 13, 2010

Fog Clears on Proper Precautions for Putting More Enterprise Data Safely in Clouds

Transcript of a sponsored BriefingsDirect podcast on how enterprises should approach and guard against data loss when placing sensitive data in cloud computing environments.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today we present a sponsored podcast discussion on managing risks and rewards in the proper placement of enterprise data in cloud computing environments.

Headlines tell us that Internet-based threats are becoming increasingly malicious, damaging, and sophisticated. These reports come just as more companies are adopting cloud practices and placing mission-critical data into cloud hosts, both public and private. Cloud skeptics frequently point to security risks as a reason for cautiously using cloud services. It’s the security around sensitive data that seems to concern many folks inside of enterprises.

There are also regulations and compliance issues that can vary from location to location, country to country, and industry to industry. Yet cloud advocates point to the benefits of systemic security as an outcome of cloud architectures and methods. Defenses and strategies based on cloud computing security solutions should therefore be a priority, and should prompt even more enterprise data to be stored, shared, and analyzed in a cloud using strong governance and policy-driven controls.

So, where’s the reality amid the mixed perceptions and vision around cloud-based data? More importantly, what should those evaluating cloud services know about data and security solutions that will help to make their applications and data less vulnerable in general?

We've assembled a panel of HP experts to delve into the dos and don’ts of cloud computing and corporate data. Please join me in welcoming Christian Verstraete, Chief Technology Officer for Manufacturing and Distribution Industries Worldwide at HP. Welcome back, Christian.

Christian Verstraete: Thank you.

Gardner: We’re also here with Archie Reed, HP's Chief Technologist for Cloud Security and the author of several publications, including The Definitive Guide to Identity Management; he's also working on a new book, The Concise Guide to Cloud Computing. Welcome back to the show, Archie.

Archie Reed: Hey, Dana. Thanks.

Gardner: It strikes me that companies around the world are already doing a lot of their data and applications activities in what we could loosely call "cloud computing," cloud computing being a very broad subject and the definition being rather flexible.

Let me take this first to you, Archie. Aren’t companies already doing a lot of cloud computing? Don’t they already have a great deal of transactions and data that’s being transferred across the Web, across the Internet, and being hosted on a variety of either internal or external servers?

Difference with cloud

Reed: I would certainly agree with that. In fact, if you look at the history we're dealing with here, companies have been doing those sorts of things for some time with outsourcing models, sharing with partners, or community-type environments. The big difference with this thing we call cloud computing is that the vendors advancing the space have not developed comprehensive service level agreements (SLAs), terms of service, and those sorts of things, or are riding on very thin security guarantees.

Therefore, when we start to think about all the attributes of cloud computing -- elasticity, speed of provisioning, and those sorts of things -- the way in which a lot of companies that are offering cloud services get those capabilities, at least today, are by minimizing or doing away with security and protection mechanisms, as well as some of the other guarantees of service levels. That’s not to dismiss their capabilities, their up-time, or anything like that, but the guarantees are not there.

So that arguably is a big difference that I see here. The point that I generally make around the concerns is that companies should not just declare cloud, cloud services, or cloud computing secure or insecure.

It’s all about context and risk analysis. By that, I mean that you need to have a clear understanding of what you’re getting for what price and the risks associated with that and then create a vision about what you want and need from the cloud services. Then, you can put in the security implications of what it is that you’re looking at.

Gardner: Christian, it seems as if we have more organizations that are saying, "We can provide cloud services," even though those services have been things that have been done for many years by other types of companies. But we also have enterprises seeking to do more types of applications and data-driven activities via these cloud providers.

So, we’re expanding the universe, if you will, of both types of people involved with providing cloud services and types of data and applications that we would use in a cloud model. How risky is it, from your perspective, for organizations to start having more providers and more applications and data involved?

Verstraete: People need to look at the cloud with their eyes wide open. I'm sorry for the stupid wordplay, but the cloud is very foggy, in the sense that there are a lot of unknowns, when you start and when you subscribe to a cloud service. Archie talked about the very limited SLAs, the very limited pieces of information that you receive on the one hand.

On the other hand, when you go for service, there is often a whole supply chain of companies that are actually going to join forces to deliver you that service, and there's no visibility of what actually happens in there.

Considering the risk

I’m not saying that people shouldn't go to the cloud. I actually believe that the cloud is something that is very useful for companies to do things that they have not done in the past -- and I’ll give a couple of examples in a minute. But they should really assess what type of data they actually want to put in the cloud, how risky it would be if that data became public in some way, shape, or form, and what the implications would be.

As companies are required to work more closely with the rest of their ecosystem, cloud services is an easy way to do that. It’s a concept that is reasonably well-known under the label of community cloud. It’s one of those that is actually starting to pop up.

A lot of companies are interested in doing that sort of thing and are interested in putting data in the cloud to achieve that and address some of the new needs that they have due to the fact that they become leaner in their operations, they become more global, and they're required to work much more closely with their suppliers, their distribution partners, and everybody else.

It’s really understanding, on one hand, what you get into and assessing what makes sense and what doesn’t make sense, what’s really critical for you and what is less critical.

Gardner: Archie, it sounds as if we’re in a game of catch-up, where the enticements of the benefits of cloud computing have gotten ahead of the due diligence and managing of the complexity that goes along with it. If you subscribe to that, then perhaps you could help us in understanding how we can start to close that gap.

To me, one recent example was at the RSA Conference in San Francisco, where the Cloud Security Alliance (CSA) came out with a statement that said, "Here’s what we have to do, and here are the steps that need to be taken." I know that HP was active in that. Tell me whether you think we have a gap and how the CSA thinks we can close it.

Reed: We’re definitely in a situation where a number of folks are rushing toward the cloud on the promise of cost savings and things like that. In fact, in some cases, as people realize they have more risk than they thought they did, they’re actually stepping back a little bit and reevaluating things.

A prime example of this was just last week, a week after the RSA Conference, the General Services Administration (GSA) here in the U.S. actually withdrew a blanket purchase order (BPO) for cloud computing services that they had put out only 11 months before.

They gave two reasons for that. The first reason was that technology had advanced so much in that 11 months that their original purchase order was not as applicable as it was at that time. But the second reason, perhaps more applicable to this conversation, was that they had not correctly addressed security concerns in that particular BPO.

Take a step back

In that case, it shows we can rush toward this stuff on promises, but once we really start to get into the cloud, we see what a mess it can be and we take a step back. As far as the CSA, HP was there at the founding. We did sponsor research that was announced at RSA around the top threats to cloud computing.

We spoke about what we called the seven deadly sins of cloud. Just fortuitously, we came up with seven at the time. I will point out that this analysis was also focused more on the technical than on specific business risk. But one of the threats was data loss or leakage. In that, you have examples such as insufficient authentication and authorization, but also lack of encryption or inconsistent use of encryption, operational failures, and data center reliability. All these things point to how to protect the data.

One of the key things we put forward as part of the CSA was to try and draw out key areas that people need to focus on as they consider the cloud and try and deliver on the promises of what cloud brings to the market.

Gardner: Correct me if I am wrong, but one of the points that the CSA made was the notion that, by considering cloud computing environments, methodologies, and scenarios, you can actually improve your general control and management of data by moving in this direction. Do you subscribe to that?

Reed: Although cloud introduces new capabilities and new options for getting services, commonly referred to as infrastructure or platform or software, the posture of a company does not need to necessarily change significantly -- and I'll say this very carefully -- from what it should be. A lot of companies do not have a good security posture.

When we talk to folks about how to manage their approach to cloud or security in general, we have a very simple philosophy. We put out a high-level strategy called HP Secure Advantage, and it has three tenets. The first is to protect the data. We go a lot into data classification, data protection mechanisms, the privacy management, and those sorts of things.

The second tenet is to defend the resources, which is generally about infrastructure security. In some cases, you have to worry about it less when you go into the cloud, per se, because you're not responsible for all the infrastructure, but you do have to understand what infrastructure is in play to feed your risk analysis.

The third tenet, validating compliance, covers the traditional governance, risk, and compliance management aspects. You need to understand what regulations, guidance, and policies you have from external resources, government, and industry, as well as your own internal approaches -- and then be able to prove that you did the right thing.

So this seems to make sense, whether you're talking to a CEO, a CIO, or a developer. And it also makes sense, whether you're talking about internal resources or going to the cloud. Does that make sense?

Gardner: Sure, it does. So getting it right means that you have more options in terms of what you can do in IT?

Reed: Absolutely.

Gardner: That seems like a pretty obvious direction to go in. Now, Christian, we talked a little bit about the technology standards methods for approaching security and data protection, but there is more to that cloud computing environment. What I'm referring to is compliance, regulation, and local laws. And this strikes me that there is a gap -- maybe even a chasm -- between where cloud computing allows people to go, above where the current laws and regulations are.

Perhaps you could help us better understand this gap and what organizations need to consider when they are thinking about moving data to the cloud vis-a-vis regulation.

A couple of caveats

Verstraete: Yes, it's actually a very good point. If you really look at the vision of the cloud, it's, "Don't care about where the infrastructure is. We'll handle all of that. Just get the things across and we'll take care of everything."

That sounds absolutely wonderful. Unfortunately, there are a couple of caveats, and I'll take a very simple example. When we started looking at the GS1 Product Recall service, we suddenly realized that some countries require information related to food that is produced in that country to remain within the country's boundaries.

That goes against this vision of clouds, in which location becomes irrelevant. There are a lot of examples, particularly around privacy aspects and private information, that make it difficult to implement that complete vision of dematerialization, if I can put it that way, of the whole power that sits behind the cloud.

Why? Because the EU, for example, has very stringent rules around personal data and only allows countries that have similar rules to host their data. Frankly, there are only a couple of countries in the world, besides the 27 countries of the EU, where that's applicable today.

This means that if I take an example, where I use a global cloud with some data centers in the US and some data centers in Europe, and I want to put some private data in there, I may have some issues. How does that data proliferate across the multiple data centers that the service actually uses? What is the guarantee that all of the data centers that will host my data, its replicas, and its backups are within the geographical boundaries acceptable under European legislation?
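The residency concern Verstraete raises can be framed as a simple check: given the set of regions a service might replicate data to, flag any that fall outside the permitted jurisdictions. This is an illustrative sketch only; the adequacy list below is hypothetical and deliberately simplified, not legal guidance or any HP tool.

```python
# Hypothetical, simplified list of jurisdictions deemed acceptable for
# EU personal data (illustration only -- not an actual adequacy list).
EU_ADEQUATE = {"DE", "FR", "NL", "IE", "CH", "CA"}

def residency_violations(replica_regions, allowed=EU_ADEQUATE):
    """Return the regions that would breach the residency constraint."""
    return sorted(set(replica_regions) - allowed)

# A service replicating to the US and Singapore would breach the rule.
print(residency_violations(["DE", "US", "IE", "SG"]))  # ['SG', 'US']
```

The hard part in practice, as the discussion notes, is that providers rarely expose the list of regions your data actually touches, so the input to such a check is often unknowable.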

I'm just taking that as an example, because there is other legislation in the US that is state-based and has the same type of approach and the same type of issues. So, on the one hand, we're still faced with very locally oriented legislative bodies, and on the other, we have a globally oriented vision for cloud. In one way, form, or shape, we'll have to address the dichotomy between the two for the cloud to really be able to take off from a legal perspective.

Reed: Dana, if I may, the bottom line is that data can be classed as global, whereas legislation is generally local. That's the basis of the problem here. One of the ways in which I would recommend folks consider this -- when you start talking about data loss, data protection, and that sort of stuff -- is having a data-classification approach that allows you to determine, or at least deploy, certain logic, laws, and thinking about how you're going to use the data and in what way.

If you go to the military, the government, the public sector, education, and even energy, they all have very structured approaches to the data that they use. That includes understanding how it might be used by third parties. You also see some recent developments.

Back in 2008, I think it was, the UK came up with a data-handling review in response to public-sector data breaches. As a result, they released a security policy framework that contains guidance and policies on security and risk management for government departments. One of the key things there is how to handle data, where it can go, and how it can be used.

Trying to streamline

What we find is that, despite this conflict, there are a lot of approaches being put into play. The goal of anyone going into this space, as well as what we're trying to promote with the CSA, is to streamline that stuff and, if possible, influence the right people, so that we avoid creating conflicting approaches and conflicting classification models.

Ultimately, when we get to the end of this, hopefully the CSA, or a related body that is either more applicable or willing, will create something that works on a global scale, or at least as widely as possible.

Gardner: So, for those companies interested in exploring cloud, it's by no means a cakewalk. They need to do their due diligence in terms of technology and procedures, governance and policies, as well as regulatory compliance and, I suppose you could call them, localization types of issues.

Is there a hierarchy that appears to either of you about where to start -- in terms of the safer types of data and the safer or easier types of applications -- that allows you to move toward some of these principles, which are probably things you should be doing already, while enjoying some of the rewards and mitigating the risks?

Reed: There are two approaches there. One of the things we didn't say at the outset was there are a number of different versions of cloud. There are private clouds and public clouds. Whether you buy into private cloud as a model, in general, the idea there is you can have more protections around that, more controls, and more understanding of where things are physically.

That's one approach to understanding, or at least achieving, some level of protection around the data. If you control the assets, you're able to control where they're located. If you go into the public cloud, then those data-classification things become important.

If you look at some of the government standards, like classified, restricted, or confidential, once you start to understand how to apply the data models and the classifications, then you can decide where things need to go and what protections need to be in place.

Gardner: Is there a progression, a logical progression, that appears to you about how to approach this, given that there are still disparities in the field?

Reed: Sure. You start off with the simplest classification of data. If it's unprotected, if it's publicly available, then you can put it out there with some reasonable confidence that, even if it is compromised, it's not a great issue.

Verstraete: Going to the cloud is actually a very good moment for companies to really sit down and think about what is absolutely critical for the enterprise, and what things, if they leak out or get known, are not too bad. It's not great in any case, but it's not too bad. And that data classification that Archie was just talking about is a very interesting exercise that enterprises should do, if they really want to go to the cloud, and particularly to public clouds.

I've seen too many companies jumping in without that step and being burned in one way, form, or shape. It's a matter of sitting down and thinking that through: "What are my key assets? What are the things that I never want to let go, that are absolutely critical? On the other hand, what are the things that, quite frankly, I don't care too much about?" It's building that understanding that is actually critical.
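The classification exercise Reed and Verstraete describe ultimately produces a policy table: which labels may go to which deployment models. The labels and targets below are a hypothetical sketch echoing the government-style labels mentioned earlier, not a standard or an HP product.

```python
# Hypothetical policy: map a data classification to the deployment
# models it may use. Labels and rules are illustrative only.
POLICY = {
    "public":       {"public_cloud", "community_cloud", "private_cloud"},
    "confidential": {"community_cloud", "private_cloud"},
    "restricted":   {"private_cloud"},
    "classified":   {"private_cloud"},  # and only with extra controls
}

def may_deploy(classification, target):
    """True if data with this classification may live on this target."""
    return target in POLICY.get(classification, set())

print(may_deploy("public", "public_cloud"))        # True
print(may_deploy("confidential", "public_cloud"))  # False
```

Even a toy table like this forces the "sit down and think it through" step: you cannot fill it in without first deciding which assets you never want to let go.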

Gardner: Perhaps there is an instance that will illustrate what we're talking about. I hear an awful lot about platform as a service (PaaS), which is loosely defined as doing application development activities in a cloud environment. I talk to developers who are delighted to use cloud-based resources for things like testing and to explore and share builds and requirements in the early stages.

At the same time, they're very reluctant to put source code in someone else's cloud. Source code strikes me as just a form of data. Where is the line between safe good cloud practices and application development, and when would it become appropriate to start putting source code in there as well?

Combination of elements

Verstraete: There are a number of answers to your question, and they relate to a combination of elements. The first is gaining as much understanding as you can, which is not easy, of the protection mechanisms that exist in the cloud service.

Today, because of the buzz around the term "cloud," most cloud providers get away with providing very little information and setting up SLAs that frankly don't mean a lot. It's quite interesting to read a number of the SLAs from the major infrastructure-as-a-service (IaaS) or PaaS providers.

Fundamentally, they take no responsibility, or very little responsibility, and they don't tell you what they do to secure the environment in which they ask you to operate. The reason they give is, "Well, if I tell you, hackers will know, and that's going to make it easier for them to hack the environment and defeat our security."

There is a point there, but that makes it difficult for people for whom, as in your example, source code really is relevant and important, because you have source code that's not too sensitive and source code that's very critical. To put that source code in the cloud, without knowing what's actually being done to protect it, is worse than being able to make a very clear risk assessment. Then, you know the level of risk you're taking. Today, in many situations, you don't.

Gardner: Alright, Archie.

Reed: There are a couple of points that need to be made. First off, when we think about things like source code or data like that, there is this point where data is stored and sits at rest. Until you start to use it, it has no impact -- if it's encrypted, for example.

So, if you're storing source code up there, it's encrypted, and you hold the keys -- which is one of the key tenets we would advocate for anyone thinking about encrypting stuff in the cloud -- then maybe there is a level of satisfaction and compliance that you have with that type of model.

Putting the source code into the cloud, wherever that happens to be, may or may not actually be such a risk as you're alluding to, if you have the right controls around it.

The second thing is that we're seeing a very nascent set of controls, guarantees, SLAs, and those sorts of things. In my opinion, and in a lot of people's opinion, this is very early in the development of this cloud-type environment, looking at all the attributes ascribed to cloud: the unlimited expansion, the elasticity, and the rapid provisioning. Certainly, we can get wrapped around the axle about what is really required in cloud, but it all ultimately comes down to that risk analysis.

If you have the right security in the system, if you have the right capabilities and guarantees, then you have a much higher level of confidence about putting data, such as source code or some sets of data like that, into the cloud.
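The encrypt-before-upload, hold-your-own-keys pattern Reed describes can be sketched in a few lines. The XOR keystream below is a toy, NOT real cryptography -- in practice you would use a vetted cipher such as AES-GCM from a maintained crypto library. It only illustrates the shape of the pattern: the provider stores ciphertext, and the key never leaves your side.

```python
import hashlib
import os

def keystream(key, nonce):
    """Toy hash-counter keystream. Illustration only, NOT secure."""
    counter = 0
    while True:
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        yield from block
        counter += 1

def toy_encrypt(key, nonce, data):
    """XOR data with the keystream; the same call decrypts."""
    return bytes(b ^ k for b, k in zip(data, keystream(key, nonce)))

key, nonce = os.urandom(32), os.urandom(16)  # generated and kept client-side
blob = toy_encrypt(key, nonce, b"proprietary source code")
# The cloud provider stores only `blob`; without `key` it is opaque.
assert toy_encrypt(key, nonce, blob) == b"proprietary source code"
```

The design point the transcript makes is in the last two lines: whether the at-rest blob is "in the cloud" matters much less once only you can turn it back into source code.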

Gardner: To Christian's point, the public cloud providers are basically saying buyer beware, or, in this case, cloud practitioner beware. The onus to do good privacy, security, compliance, and best practices falls back on the consumer, rather than the provider.

Community clouds

Reed: That's often the case. But, also consider that there are things like community clouds out there. I'll give the example of the US Department of Defense back in 2008. HP worked with the Defense Information Systems Agency (DISA) to deploy a cloud-computing infrastructure. And, we created RACE, the Rapid Access Computing Environment, to set things up really quickly.

Within that, they share those resources with a community of users in a secure manner, and they store all sorts of things in it. And, not to point fingers or anything, but the comment is, "Our cloud is better than Google's."

So, there are secure clouds out there. The visceral reaction that the cloud is insecure isn't necessarily correct. It's insecure for certain instances, and we've got to be specific about those instances.

In the case of DISA, they have a highly secured cloud, and that's where we expect things to go: evolving into a set of cloud offerings that are stratified by the level of security they provide and the level of cost, right down to SLAs and guarantees. We're already seeing that in these examples.

Gardner: So, for that cloud practitioner, as an organization, if they take those steps towards good cloud computing practices and technologies, it’s probably going to benefit them across the board in their IT infrastructure, applications, and data activities. But does it put them at a competitive advantage?

If you do this right, if you take the responsibility yourself to figure out the risks and rewards and implement the right approach, what does that get you? Christian, what's your response to that?

Verstraete: It gives you the capability to use the elements that the cloud really brings with it, which means to have an environment in which you can execute a number of tasks in a pay-per-use type environment.

But, to come back to the point that Archie was making, one of the things that we often have a tendency to forget -- and I'm as guilty as anybody else in that space -- is that cloud means a tremendous amount of different things. What's important for customers who want to move and want to put data in the cloud is to identify what all of those different types of clouds provide as security and protection capabilities.

The more you move away from the traditional public cloud -- and when I say the traditional public cloud, I'm thinking about Amazon, Google, Microsoft, that type of thing -- toward community clouds and private clouds, the more it's under your own control to ensure that you have the appropriate security layers, security levels, and compliance levels that you feel you need for the information you're going to use, store, and share in those different environments.

Gardner: Okay, Archie, we're about out of time, so the last question is to you, and it's going to be the same question. If you do this well, if you do it right, if you take the responsibility, perhaps partner with others in a community cloud, what do you get? What's the payoff? Why would that be a competitive advantage, a cost advantage, or an energy advantage?

Beating the competition

Reed: We’ve been through a lot of those advantages. I’ve mentioned several times the elasticity, the speed of provisioning, the capacity. While we’ve alluded to, and actually discussed, specific examples of security concerns and data issues, the fact is, if you get this right, you have the opportunity to accelerate your business, because you can basically break ahead of the competition.

Now, if you’re in a community cloud, standards may help you, or approaches that everyone agrees on may help the overall industry. But, you also get faster access to all that stuff, and capacity that you can share with the rest of the community. If you're thinking about cloud in general, in isolation -- and by that I mean that you, as an individual organization, are going out and looking for those cloud resources -- then you're going to get the ability to expand well beyond what your internal IT department could provide on its own.

There are lots of things we could close on, of course, but I think that the IT department of today, as far as cloud goes, not only has the opportunity to deliver and better manage what it's doing in terms of providing services for the organization, but also has a responsibility to do this right, understand the security implications, and represent those appropriately to the company, so that it can deliver that accelerated capability.

Gardner: Very good. We’ve been discussing how to manage risks and rewards and proper placement of enterprise data in cloud-computing environments. I want to thank our two panelists today, Christian Verstraete, Chief Technology Officer for Manufacturing and Distribution Industries Worldwide at HP. Thank you, Christian.

Verstraete: You’re welcome.

Gardner: And also, Archie Reed, HP's Chief Technologist for Cloud Security and the author of several publications, including The Definitive Guide to Identity Management. He's working on a new book, The Concise Guide to Cloud Computing. Thank you, Archie.

Reed: Hey, Dana. Thanks for taking the time to talk to us today.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for joining us, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a sponsored BriefingsDirect podcast on how enterprises should approach and guard against data loss when placing sensitive data in cloud computing environments. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


Monday, December 21, 2009

HP's Cloud Assure for Cost Control Takes Elastic Capacity Planning to Next Level

Transcript of a BriefingsDirect podcast on the need to right-size and fine-tune applications for maximum benefits of cloud computing.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Download the transcript. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the economic benefits of cloud computing -- of how to use cloud-computing models and methods to control IT cost by better supporting application workloads.

Traditional capacity planning is not enough in cloud-computing environments. Elasticity planning is what’s needed. It’s a natural evolution of capacity planning, but it’s in the cloud.

We'll look at how to best right-size applications, while matching service-delivery resources and demand intelligently, repeatedly, and dynamically. The movement to a pay-per-use model also goes a long way toward matching resources to demand, and it reduces wasteful application practices.

We'll also examine how quality control for these applications in development reduces the total cost of supporting applications, while allowing for tuning and an appropriate way of managing applications in the operational cloud scenario.

To unpack how Cloud Assure services can take the mystique out of cloud computing economics and to lay the foundation for cost control through proper cloud methods, we're joined by Neil Ashizawa, manager of HP's Software-as-a-Service (SaaS) Products and Cloud Solutions. Welcome to BriefingsDirect, Neil.

Neil Ashizawa: Thanks very much, Dana.

Gardner: As we've been looking at cloud computing over the past several years, there is a long transition taking place of moving from traditional IT and architectural method to this notion of cloud -- be it private cloud, at a third-party location, or through some combination of the above.

Traditional capacity planning therefore needs to be refactored and reexamined. Tell me, if you could, Neil, why capacity planning, as people currently understand it, isn’t going to work in a cloud environment?

Ashizawa: Old-fashioned capacity planning would focus on the peak usage of the application, and it had to, because when you were deploying applications in-house, you had to take that peak-usage case into consideration. At the end of the day, you had to be provisioned correctly with respect to compute power. Oftentimes, with long procurement cycles, you'd have to plan for that.

In the cloud, because you have this idea of elasticity, where you can scale up your compute resources when you need them, and scale them back down, obviously that adds another dimension to old-school capacity planning.

Elasticity planning

The new way to look at it within the cloud is elasticity planning. You have to factor in not only your peak-usage case, but your moderate-usage and low-usage cases as well. At the end of the day, if you're going to get the biggest benefit of cloud, you need to understand how you're going to be provisioned during the various demand levels of your application.

Gardner: So, this isn’t just a matter of spinning up an application and making sure that it could reach a peak load of some sort. We have a new kind of a problem, which is how to be efficient across any number of different load requirements?

Ashizawa: That’s exactly right. If you were to take, for instance, the old-school capacity-planning ideology to the cloud, what you would do is provision for your peak use case. You would scale up your elasticity in the cloud and just keep it there. If you do it that way, then you're negating one of the big benefits of the cloud. That's this idea of elasticity and paying for only what you need at that moment.

If I'm at a slow period of my application's usage, then I don't want to be provisioned for my peak usage. One of the main reasons people consider sourcing to the cloud is this elastic capability to spin up compute resources when usage is high and scale them back down when usage is low. You don't want to negate that benefit of the cloud by keeping your resource footprint at its highest level.
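A back-of-envelope calculation makes this point concrete. The hourly rate and the demand curve below are invented for illustration; the comparison is simply between statically provisioning for peak all day and paying only for what each hour actually needs.

```python
# Hypothetical numbers: 24 hours of demand for an application that
# peaks at 20 instances for 4 hours and idles at 2 the rest of the time.
HOURLY_RATE = 0.50      # $ per instance-hour (made up)
PEAK_INSTANCES = 20

# Instances actually needed in each of the 24 hours.
demand = [2] * 8 + [20] * 4 + [8] * 8 + [2] * 4

static_cost = PEAK_INSTANCES * 24 * HOURLY_RATE   # provisioned for peak
elastic_cost = sum(demand) * HOURLY_RATE          # scale with demand

print(f"static ${static_cost:.2f} vs elastic ${elastic_cost:.2f}")
# static $240.00 vs elastic $84.00
```

The gap between the two numbers is exactly the benefit that "old-school capacity planning in the cloud" (keeping peak capacity spun up) gives away.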

Gardner: I suppose also the holy grail of this cloud-computing vision that we've all been working on lately is the idea of being able to spin up those required instances of an application, not necessarily in your private cloud, but in any number of third-party clouds, when the requirements dictate that.

Ashizawa: That’s correct.

Gardner: Now, we call that hybrid computing. Is what you are working on now something that’s ready for hybrid or are you mostly focused on private-cloud implementation at this point?

Ashizawa: What we're bringing to the market works in all three cases. Whether you're a private internal cloud, doing a hybrid model between private and public, or sourcing completely to a public cloud, it will work in all three situations.

Gardner: HP announced, back in the spring of 2009, a Cloud Assure package that focused on things like security, availability, and performance. I suppose now, because of the economy and the need for people to reduce cost, look at the big picture about their architectures, workloads, and resources, and think about energy and carbon footprints, we've now taken this a step further.

Perhaps you could explain the December 2009 announcement that HP has for the next generation or next movement in this Cloud Assure solution set.

Making the road smoother

Ashizawa: The idea behind Cloud Assure, in general, is that we want to assist enterprises in their migration to the cloud and we want to make the road smoother for them.

Just as you said, when we first launched Cloud Assure earlier this year, we focused on the top three inhibitors, which were security of applications in the cloud, performance of applications in the cloud, and availability of applications in the cloud. We wanted to provide assurance to enterprises that their applications will be secure, they will perform, and they will be available when they are running in the cloud.

The new enhancement that we're announcing now is assurance for cost control in the cloud. Oftentimes enterprises do make that step to the cloud, and a big reason is that they want to reap the benefits of the cost promise of the cloud, which is to lower cost. The thing here, though, is that you might fall into a situation where you negate that benefit.

If you deploy an application in the cloud and you find that it’s underperforming, the natural reaction is to spin up more compute resources. It’s a very good reaction, because one of the benefits of the cloud is this ability to spin up or spin down resources very fast. So no more procurement cycles, just do it and in minutes you have more compute resources.

The situation, though, that you may find yourself in is that you may have spun up more resources to try to improve performance, but it might not improve performance. I'll give you a couple of examples.

If your application is experiencing performance problems because of inefficient Java methods, for example, or slow SQL statements, then more compute resources aren't going to make your application run faster. But, because the cloud allows you to do so very easily, your natural instinct may be to spin up more compute resources to make your application run faster.

When you do that, you find yourself in a situation where your application is no longer right-sized in the cloud, because you have over-provisioned your compute resources. You're paying for more compute resources without getting any return on your investment. When you start paying for more resources without return on your investment, you start to disrupt the whole cost benefit of the cloud.

Gardner: I think we need to have more insight into the nature of the application, rather than simply throwing additional instances of the application at the problem. Is that it at a very simple level?

Ashizawa: That’s it at a very simple level. Just to make it even simpler, applications need to be tuned so that they are right-sized. Once they are tuned and right-sized, then, when you spin up resources, you know you're getting return on your investment, and it’s the right thing to do.

Gardner: Can we do this tuning with existing applications -- you mentioned Java apps, for example -- or is this something for greenfield applications that we are creating newly for these cloud scenarios?

Java and .NET

Ashizawa: Our enhancement to Cloud Assure, Cloud Assure for cost control, focuses more on Java and .NET types of applications.

Gardner: And those would be existing applications or newer ones?

Ashizawa: Either. Whether you have existing applications that you are migrating to the cloud, or new applications that you are deploying in the cloud, Cloud Assure for cost control will work in both instances.

Gardner: Is this new set software, services, both? Maybe you could describe exactly what it is that you are coming to market with.

Ashizawa: The Cloud Assure for cost control solution comprises both HP Software and HP Services provided by HP SaaS. The software itself is three products that make up the overall solution.

The first is our industry-leading Performance Center software, which allows you to drive load in an elastic manner. You can scale load up to very high demand and back down to very low demand, and this gives you your elasticity-planning framework.

The second piece, from a software perspective, is HP SiteScope, which allows you to monitor the resource consumption of your application in the cloud. That way, you understand when compute resources are spiking or when you have capacity to drive even more load.

The third software portion is HP Diagnostics, which allows you to measure the performance of your code. You can measure how your methods are performing, how your SQL statements are performing, and if you have memory leakage.

When you have this visibility -- end-user measurement at various load levels with Performance Center, resource consumption with SiteScope, and code-level performance with HP Diagnostics -- and you integrate them all into one console, you can do true elasticity planning. You can tune your application and right-size it. Once you've right-sized it, you know that when you scale up your resources you're getting return on your investment.

All of this is backed by services that HP SaaS provides. We can perform load testing. We can set up the monitoring. We can do the code level performance diagnostics, integrate that all into one console, and help customers right-size the applications in the cloud.
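The right-sizing logic Ashizawa describes, combining load, resource, and code-level signals before deciding to scale, can be caricatured in a few lines. The thresholds and the function below are hypothetical, not part of any HP product; they just show why a code-level bottleneck (slow SQL, inefficient methods) makes scale-out wasteful.

```python
# Illustrative decision sketch only: decide whether adding instances
# will actually help, given three signals an elasticity-planning
# exercise would collect. All thresholds are made-up examples.

def scaling_advice(cpu_util, response_ms, slow_sql_ms):
    # If slow SQL dominates the response time, more instances just
    # raise cost without improving performance: tune the code first.
    if slow_sql_ms > 0.5 * response_ms:
        return "tune code first"
    # High CPU plus slow responses: the app is genuinely capacity-bound.
    if cpu_util > 0.80 and response_ms > 500:
        return "scale out"
    return "right-sized"

print(scaling_advice(cpu_util=0.30, response_ms=900, slow_sql_ms=700))  # tune code first
print(scaling_advice(cpu_util=0.90, response_ms=800, slow_sql_ms=50))   # scale out
```

The first case is exactly the trap described earlier: low CPU utilization but poor response times, where the "natural instinct" to spin up more resources only disrupts the cost benefit of the cloud.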

Gardner: That sounds interesting, and, of course, harkens back to the days of distributed computing. We're just adding another level of complexity, that is to say, a sourcing continuum of some sort that needs to be managed as well. It seems to me that you need to start thinking about managing that complexity fairly early in this movement to cloud.

Ashizawa: Definitely. If you're thinking about sourcing to the cloud and adopting it, from a very strategic standpoint, it would do you good to do your elasticity planning before you go into production or you go live.

Tuning the application

The nice thing about Cloud Assure for cost control is that, if you run into performance issues after you have gone live, you can still use the service. You could come in and we could help you right-size your application and help you tune it. Then, you can start getting the global scale you wish at the right cost.

Gardner: One of the other interesting aspects of cloud is that it affects both design time and runtime. Where does something like the Cloud Assure for cost control kick in? Is it something that developers should be doing? Is it something you would do before you go into production, or if you are moving from traditional production into cloud production, or maybe all the above?

Ashizawa: All of the above. HP definitely recommends our best practice, which is to do all your elasticity planning before you go into production, whether it’s a net new application that you are rolling out in the cloud or a legacy application that you are transferring to the cloud.

Given the elastic nature of the cloud, we recommend that you get out ahead of it, do your proper elasticity planning, tune your system, and right-size it. Then, you'll get the most optimized cost and predictable cost, so that you can budget for it.

Gardner: It also strikes me, Neil, that we're looking at producing a very interesting and efficient feedback loop here. When we go into cloud instances, where we are firing up dynamic instances of support and workloads for application, we can use something like Cloud Assure to identify any shortcomings in the application.

We can take that back and use that as we do a refresh in that application, as we do more code work, or even go into a new version or some sort. Are we creating a virtual feedback loop by going into something like Cloud Assure?

Ashizawa: I can definitely see that being the case. I'm sure there are many situations where we might find something inefficient at the code level or in the database SQL statement layer. We can point out problems that may never have surfaced in an on-premise deployment. When you go to the cloud, do your elasticity planning, and right-size, we can uncover problems that weren't addressed earlier, and then you can create this feedback loop.

One of the side benefits obviously to right-sizing applications and controlling cost is to mitigate risk. Once you have elasticity planned correctly and once you have right-sized correctly, you can deploy with a lot more confidence that your application will scale to handle global class and support your business.

Gardner: Very interesting. Because this is focused on economics and cost control, do we have any examples of where this has been put into practice, where we can examine the types of returns? If you do this properly, if you have elasticity controls, if you are doing planning, and you get across this life cycle, and perhaps even some feedback loops, what sort of efficiencies are we talking about? What sort of cost reductions are possible?

Ashizawa: We've been working with one of our SaaS customers, who is doing more of a private-cloud type implementation. What makes this what I consider a private cloud is that they are testing various resource footprints, depending on the load level.

They're benchmarking their application at various resource footprints. For moderate levels, they have a certain footprint in mind, and then for their peak usage, during the holiday season, they have an expanded footprint in mind. The idea here is that, they want to make sure they are provisioned correctly, so that they are optimizing their cost correctly, even in their private cloud.

Moderate and peak usage

We have used our elastic testing framework, driven by Performance Center, to test both moderate levels and peak usage. When I say peak usage, I mean thousands and thousands of virtual users. That lets them do true elasticity planning.

They've been able to accomplish a couple of things. One, they understand which benchmarks and resource footprints they should be using in their private cloud. They know that they are provisioned correctly at various load levels and that, because of that, they're getting all of the cost benefits of their private cloud. At the end of the day, they're mitigating their business risk by ensuring that their application will scale to global class to support their holiday season.

Gardner: And, they're going to be able to scale, if they use cloud computing, without necessarily having to roll out more servers with a forklift. They could find the fabric either internally or with partners, which, of course, has a great deal of interest from the bean counter side of things.

Ashizawa: Exactly. Now, we're starting to relay this message and target customers that have deployed applications in the public cloud, because we feel that the public cloud is where you may fall into that trap of spinning up more resources when performance problems occur, where you might not get the return on your investment.

So as more enterprises migrate to the cloud and start sourcing there, we feel that this elasticity planning with Cloud Assure for cost control is the right way to go.

Gardner: Also, if we're billing people either internally or through these third-parties on a per-use basis, we probably want to encourage them to have a robust application, because to spin up more instances of that application is going to cost us directly. So, there is also a built-in incentive in the pay-per-use model toward these more tuned, optimized, and planned-for cloud types of application.

Ashizawa: You said it better than I could have ever said it. You used the term pay-per-use, and it’s all about the utility-based pricing that the cloud offers. That’s exactly why this is so important, because whenever it’s utility based or pay-per-use, then that introduces this whole notion of variable cost. It’s obviously going to be variable, because what you are using is going to differ between different workloads.

So, you want to get a grasp of the variable-cost nature of the cloud, and you want to make this variable cost very predictable. Once it’s predictable, then there will be no surprises. You can budget for it and you could also ensure that you are getting the right performance at the right price.
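Making that variable cost predictable amounts to a simple budgeting calculation once elasticity planning has fixed a footprint for each load level. The sketch below is purely illustrative; the hourly rate and the hours-per-month figures are assumed, hypothetical values, not real cloud pricing.

```python
# Hypothetical sketch: turning the variable cost of pay-per-use cloud
# pricing into a predictable monthly budget. The rate and usage hours are
# made-up illustrative figures, not any provider's actual pricing.

HOURLY_RATE_PER_INSTANCE = 0.12  # assumed utility price, $/instance-hour

# Planned footprint (instances) and expected hours per month at each load
# level, as determined during elasticity planning.
usage_plan = [
    ("off-peak", 2, 400),
    ("moderate", 4, 280),
    ("peak",     8,  40),
]

def monthly_cost(plan, rate):
    """Sum instance-hours across load levels and price them at the rate."""
    return sum(instances * hours * rate for _, instances, hours in plan)

budget = monthly_cost(usage_plan, HOURLY_RATE_PER_INSTANCE)
print(f"${budget:.2f}")  # $268.80
```

Because the footprints were right-sized first, the instance-hour totals in the plan are trustworthy, and the variable pay-per-use bill becomes a number you can put in a budget with no surprises.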

Gardner: Neil, is this something that’s going to be generally available in some future time, or is this available right now at the end of 2009?

Ashizawa: It is available right now.

Gardner: If people were interested in pursuing this concept of elasticity planning, of pursuing Cloud Assure for cost benefits, is this something that you can steer them to, even if they are not quite ready to jump into the cloud?

Ashizawa: Yes. If you would like more information for Cloud Assure for cost control, there is a URL that you can go to. Not only can you get more information on the overall solution, but you can speak to someone who can help you answer any questions you may have.

Gardner: Let's look to the future a bit before we close up. We've looked at cloud assurance issues around security, performance, and availability. Now, we're looking at cost control and elasticity planning, getting the best bang for the buck, not just by converting an old app, sort of repaving an old cow path, if you will, but thinking about this differently, in the cloud context, architecturally different.

What comes next? Is there another shoe to fall in terms of how people can expect to have HP guide them into this cloud vision?

Ashizawa: It’s a great question. Our whole idea here at HP and HP Software-as-a-Service is that we're trying to pave the way to the cloud and make it a smoother ride for enterprises that are trying to go to the cloud.

So, we're always tackling the main inhibitors and the main obstacles that make it more difficult to adopt the cloud. And, yes, where once we were tackling security, performance, and availability, we definitely saw that this idea for cost control was needed. We'll continue to go out there and do research, speak to customers, understand what their other challenges are, and build solutions to address all of those obstacles and challenges.

Gardner: Great. We've been talking about moving from traditional capacity planning towards elasticity planning, and a series of announcements from HP around quality and cost controls for cloud assurance and moving to cloud models.

To better understand these benefits, we've been talking with Neil Ashizawa, manager of HP's SaaS Products and Cloud Solutions. Thanks so much, Neil.

Ashizawa: Thank you very much.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Download the transcript. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on the need to right-size and fine-tune applications for maximum benefits of cloud computing. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

You may also be interested in: