Tuesday, August 31, 2010

Explore the Myths and Means of Scaling Out Virtualization Via Automation Across Data Centers

Transcript of a podcast discussion on how automation and best practices allow for far greater degrees of virtualization and efficiency across enterprise data centers.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the improved and increased use of virtualization in data centers. We'll delve into how automation, policy-driven processes, and best practices are offering a slew of opportunities for optimizing virtualization. Server, storage, and network virtualization are all rapidly moving from isolated points of progress to more holistic levels of adoption.

The goals are data center transformation, performance and workload agility, and cost and energy efficiency. But the trap of unchecked virtualization complexity can have a stifling effect on the advantageous spread of virtualization. Indeed, many enterprises may think they have already exhausted their virtualization paybacks, when in fact, they have only scratched the surface of the potential long-term benefits.

In some cases, levels of virtualization are stalling at 30 percent adoption, yet other data centers are leveraging automation and best practices and moving to 70 percent and even 80 percent adoption rates. By taking such a strategic outlook on virtualization, we'll see how automation sets up companies to better exploit cloud computing and IT transformation benefits at the pace of their choosing, not based on artificial limits imposed by dated or manual management practices.

Here now to discuss how automation can help you achieve strategic levels of virtualization adoption is our first guest, Erik Frieberg, Vice President of Solutions Marketing at HP Software. Welcome to BriefingsDirect, Erik.

Erik Frieberg: Great. Good to be here.

Gardner: And, we're here with Erik Vogel, Practice Principal and America's Lead for Cloud Resources at HP. Welcome, Erik Vogel.

Erik Vogel: Well, thank you.

Gardner: Let's start the discussion with you, Erik Frieberg. Tell me, why is there a misconception about acceptable adoption levels of virtualization out there?

Frieberg: When I talk to people about automation, they consistently talk about what I call "element automation." Provisioning a server, a database, or a network device is a good first step, and we see growing market adoption of automating these physical elements. What we're also seeing is the idea of moving beyond individual element automation to full process automation.

IT is in the process of serving the business, and the business is asking for whole application-service provisioning. So it's not just these individual elements; it's tying them all together -- along with middleware, databases, and objects -- and doing whole-stack provisioning.

When you look at the adoption, you have to look at where people are going, as far as the individual elements, versus the ultimate goal of automating the provisioning and rolling out a complete business service or application.

Gardner: Is there something in general that folks don't appreciate around this level of acceptable use of virtualization, or is there a need for education?

Perceptible timing

Frieberg: It comes down to what I call the difference in perceptible timing. Often, when businesses are asking for new applications or services, the response is three, four, or five weeks to roll something out. This is because you're automating individual pieces but it's still left to IT to glue all the individual element automation together to deliver that business service.

As companies expand their use of automation to automate the full services, they're able to reduce that time from months down to days or weeks. This is what some people are starting to call cloud provisioning or self-service business application provisioning. This is really the ultimate goal -- provisioning these full applications and services versus what is often IT’s goal -- automating the building blocks of a full business service.

Gardner: I see. So we're really moving from a tactical approach to a strategic approach?

Frieberg: Exactly.

Gardner: What about HP? Is there something about the way that you have either used virtualization yourselves or have worked with a variety of customers that leads you to believe that there is a lot more uptake? Are we really only in the first inning or two of virtualization from HP's perspective?

Frieberg: We're maybe in the second inning, but we're certainly early in the life cycle. We're seeing companies moving beyond traditional automation and their first goal, which is often freeing up labor from common tasks.

Companies will look at things like how they baseline what they have and how they patch and provision new services today, then move on to what is called deployment automation -- the ability to move applications from the development environment into the production environment.

You're starting to see the movement beyond those initial goals of eliminating people to ensuring compliance. They're asking how do I establish and enforce compliance policies across my organization, and beyond that, really capturing or using best practices within the organization.

So we're maturing and moving to further "innings" by automating more of the process and also getting further benefits around compliance and best-practice use through our automation efforts.

Gardner: When you can move in that direction, at that level, you start to really move into what we call data center transformation, rather than spot server improvements or rack-by-rack improvements.

Frieberg: Exactly. This is where you're starting to see what some people call the "lights out" data center. It has the same amount or even less physical infrastructure, using less power, but you see the absence of people. These large data centers have very few people working in them, yet at the same time they're delivering applications and services at a far higher rate than IT traditionally provided.

Gardner: Erik Vogel, are there other misconceptions that you’ve perceived in the marketplace in terms of where virtualization adoption can go?

Biggest misconception

Vogel: Probably the biggest misconception that I see with clients is the assumption that they're fully virtualized, when they're probably only 30 or 40 percent virtualized. They've gone out and done the virtualization of IT, for example, and they haven't even started to look at Tier 1 applications.

The misconception is that we can't virtualize Tier 1 apps. In reality, we see clients doing it every day. The broadest misconception is what virtualization can do and how far it can get you. Thirty percent is the low-end threshold today. We're seeing clients who are 75-80 percent virtualized in Tier 1 applications.

Gardner: Erik Frieberg, back to you. Perhaps there is a laundry list of misconceptions that we can go through and then discount them. If we're going to go from that 30 percent into that strategic level, what are some specific things that are holding people back?

Frieberg: When I talk to customers about their use of virtualization, you're right. They virtualize the easy stuff.

The three misconceptions I see a lot are, one, that automation and virtualization are just about reducing head count. The second is that automation doesn't have much impact on compliance. The third is that, because automation is still at the element level, they just don't understand how they would do this for Tier 1 workloads.

Gardner: Let's now get into what we mean by automation. How do you go about automating in such a way that you don't fall into these traps and you can enjoy the things that you've been describing in terms of better compliance, better process, and repeatability?

Frieberg: What we're seeing in companies is that they're realizing that their business applications and services are becoming too complex for humans to manage quickly and reliably.

The demands of provisioning, managing, and moving in this new agile development environment and this environment of hybrid IT, where you're consuming more business services, are really moving beyond what a lot of people can manage. The idea is that they're looking at automation to make their lives easier, to operate IT in a compliant way, and also to deliver on the overall business goal of a more agile IT.

Companies are almost going through three phases of maturity when they do this. The first aspect is that a lot of automation revolves around "run book automation" (RBA) -- automating the physical book that holds all the scripts and processes that IT is supposed to follow.

But, what you find is that their processes are not very standardized. They might have five different ways of configuring a device, resetting a server, or checking why an application isn't working.

So, as we look at maturity, you've got to standardize on a set of ways. You have to do things consistently. When you standardize methods, you find you're able to reach the second level of maturity, which is to consolidate. We don't need to provision a PC 16 different ways; we can do it one way with three variations. When you do that, you can move up to automating that process. Then, you use that individual process automation or element automation in the larger process and tie it all together.

That’s how we see companies or organizations moving up this maturity curve within automation.
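
To make that standardize-consolidate-automate progression concrete, here is a minimal, hypothetical Python sketch of element automation steps composed into one full-service provisioning workflow. None of the function names or the service definition correspond to a real HP product or API; they stand in for whatever standardized element automation an organization already has in place.

# Hypothetical sketch: composing "element automation" steps into one
# full-service provisioning workflow. The functions are placeholders,
# not a real vendor API.

def provision_server(spec):
    # One standardized way to stand up a VM (instead of 16 ad hoc ways).
    print(f"provisioning server: {spec['size']} with image {spec['image']}")
    return {"host": "vm-0001", **spec}

def provision_network(server, vlan):
    print(f"attaching {server['host']} to VLAN {vlan}")

def provision_storage(server, gb):
    print(f"allocating {gb} GB for {server['host']}")

def deploy_middleware(server, stack):
    print(f"installing {stack} on {server['host']}")

def deploy_application(server, app):
    print(f"deploying {app} to {server['host']}")

def provision_business_service(service_def):
    # Full-service automation: the whole stack, not just one element.
    servers = []
    for tier in service_def["tiers"]:
        server = provision_server(tier["server"])
        provision_network(server, tier["vlan"])
        provision_storage(server, tier["storage_gb"])
        deploy_middleware(server, tier["middleware"])
        servers.append(server)
    deploy_application(servers[-1], service_def["app"])
    return servers

if __name__ == "__main__":
    order_entry = {
        "app": "order-entry-1.4",
        "tiers": [
            {"server": {"size": "small", "image": "web"}, "vlan": 101,
             "storage_gb": 50, "middleware": "app-server"},
            {"server": {"size": "large", "image": "db"}, "vlan": 102,
             "storage_gb": 500, "middleware": "database"},
        ],
    }
    provision_business_service(order_entry)

The point of the sketch is simply that once each element step has one standardized form, the full business service becomes a single automatable workflow rather than a hand-off among teams.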

Gardner: I was intrigued by that RBA example you gave. There are occasions where folks think they're automated, but are not. Is there a litmus test for whether automation is headed where you need it to go, rather than just covering where you've been?

The easy aspects

Frieberg: Automation is similar to the statistics you gave in virtualization, where people are exploring automation and they're automating the easy aspects, but they're hitting roadblocks in understanding how they can drive automation further in their organization.

Something I have used as a litmus test is that run book. How thick is it now and how thick was it a month ago or a year ago, when you started automation? How have you consolidated it through your automation processes?

We see companies not standardizing, consolidating, or making the tough choices that would enable them to push automation further. A lot of it is just a firmly held belief about what can be automated in IT versus what can't. It's very analogous to how they approach virtualization -- I can do these types of workloads, but not these others. A lot of these beliefs are rooted in old facts, not in what the technology or new software solutions can do today.

Gardner: So, perhaps an indication that they are actually doing automation is that the run book is getting smaller?

Frieberg: Exactly. The other thing I look at, as companies start to roll out applications, is not just the automation, but the consistency. You read different figures within the industry. Fifty percent of the time, when you make a change to your environment, you cause an unforeseen downstream effect. You change something, but something else breaks further down.

When you automate processes, we tend to see that drop dramatically. Some estimates have put the unforeseen impact as low as five percent. So, you can also measure your unforeseen downstream effects and ask, "Should I automate these processes that are tedious, time-consuming, and non-compliant when people do them, and can I automate them to eliminate the downstream effects I'm trying to prevent in my organization?"

Gardner: Erik Vogel, when these folks recognize that they need to be more aggressive with automation in order to do virtualization better, enjoy their cost performance improvements, and ultimately get towards their data center transformation, what is it that they need to be thinking of? Are performance and efficiency the goals? How do we move toward this higher level of virtualization?

Vogel: One of the challenges that our clients face is how to build the business case for moving from 30 percent to 60 or 70 percent virtualized. This is an ongoing debate within a number of clients today, because they look at that initial upfront cost and see that the investment is probably higher than what they were anticipating. I think in a lot of cases that is holding our clients back from really achieving these higher levels of virtualization.

In order to really make that jump, the business case has to be made beyond just reduction in headcount or less work effort. We see clients having to look at things like improving availability, being able to do migrations, streamlined backup capabilities, and improved fault-tolerance. When you start looking across the broader picture of the benefits, it becomes easier to make a business case to start moving to a higher percentage of virtualization.

One of the impediments, unfortunately, is economic. The way we're creating these business cases today doesn't show the true value and benefit of enhanced virtualization and automation. We need to rethink the way we put these business cases together to incorporate a lot of the bigger benefits that we're seeing with clients who have moved to a higher percentage of virtualization.

Gardner: In order to attain that business benefit, to make the investment a clear winner and demonstrate the return, what is it that needs to happen? Is this a best-of-breed equation, where we need to pull together the right parts? Is it the people equation about operations, or all of the above? And how does HP approach that stew of different elements?

All of the above

Vogel: It's really all of the above. One of the things we saw early on with virtualization is that just moving to a virtual environment does not necessarily reduce a lot of the maintenance and management that we have, because we haven’t really done anything to reduce the number of OS instances that have to be managed.

If we're just looking at virtualizing and just moving from physical to virtual devices, we may be reducing our asset footprint and gaining the benefits of just managing fewer physical assets. From a logical standpoint, we still have the same number of servers and the same number of OS instances. So, we still have the same amount of complexity in managing the environment.

The benefits are relatively constrained, if we look at it from just a physical footprint reduction. In some cases, it might be significant if a client is running out of data-center space, power, or cooling capacity within the data center. Then, virtualization makes a lot of sense because of the reduction in asset footprint.

But, when we start looking at coupling virtualization with improved process and improved governance -- reducing the number of OS instances, rationalizing applications, and addressing those kinds of broader process issues -- then we start to see the big benefits come into play.

Now, we're not talking just about reducing the asset footprint. We're also talking about reducing the number of OS instances. Hence, the management complexity of that environment will decrease. In reality, the big benefits are on the logical side and not so much on the physical side.

Gardner: It sounds like we're moving beyond that tactical benefit of virtualization, but thinking more about an operational fabric through which to support a variety of workloads -- and that's quite a leap.

Vogel: Absolutely. In fact, when we start talking about moving to a cloud-type environment, specifically within public cloud and private cloud, we're looking at having to do that process work and governance work. It becomes more than just talking about the hardware or the virtualization, but rather a broader question of how IT operates and procures services. We have to start changing the way we are thinking when we're going to stand up a number of virtual images.

When we start moving to a cloud environment, we talk about how we share a resource pool. Virtualization is obviously key and an underlying technology to enable that sharing of a virtual resource pool.

But it becomes very important to start talking about how we govern that: how we control who has access, how we provision, what gets provisioned and when, how we de-provision when we're done with a particular environment, and how we enable that environment to scale up and scale down based on the demands of the workloads being run on it.
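
As a rough illustration of those governance questions -- who may provision, what gets provisioned, and when an environment is de-provisioned -- here is a hypothetical Python sketch of a policy gate wrapped around self-service provisioning. The policy values and function names are illustrative assumptions, not any particular product's interface.

# Hypothetical sketch of a governance layer around self-service
# provisioning: role checks, quotas, and lease-based de-provisioning.

from datetime import datetime, timedelta

POLICY = {
    "allowed_roles": {"developer", "release-manager"},
    "max_vcpus_per_request": 16,
    "default_lease_days": 30,
}

def approve_request(user_role, vcpus):
    # Governance gate: reject requests that fall outside policy.
    if user_role not in POLICY["allowed_roles"]:
        return False, "role not permitted to provision"
    if vcpus > POLICY["max_vcpus_per_request"]:
        return False, "request exceeds vCPU quota; needs manual approval"
    return True, "approved"

def provision(user_role, vcpus):
    ok, reason = approve_request(user_role, vcpus)
    if not ok:
        print(f"denied: {reason}")
        return None
    lease_expires = datetime.utcnow() + timedelta(days=POLICY["default_lease_days"])
    env = {"vcpus": vcpus, "expires": lease_expires}
    print(f"provisioned {vcpus} vCPUs, lease expires {lease_expires:%Y-%m-%d}")
    return env

def reap_expired(environments):
    # Scheduled de-provisioning keeps virtual server sprawl in check.
    now = datetime.utcnow()
    return [e for e in environments if e["expires"] > now]

if __name__ == "__main__":
    envs = [e for e in (provision("developer", 8), provision("intern", 4)) if e]
    envs = reap_expired(envs)

The design point is that the governance check and the lease-based reaping sit in front of and behind the automation, so easy provisioning does not turn into unmanaged sprawl.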

So, it's a much bigger problem and a more complicated problem as we start going to higher levels of virtualization and automation and create environments that start to look like a private cloud infrastructure.

Gardner: And yet, it's at that higher level of adoption that the really big paybacks kick in. Are there some misconceptions or some education issues that are perhaps holding companies back from moving toward that larger adoption, which will get them, in fact, those larger economic productivity and transformative benefits?

Lifecycle view

Vogel: The biggest challenge where education needs to occur is that we need to be looking at IT through a lifecycle view. A lot of times we get tied up just looking at an initial investment or what the upfront cost would be to deploy one of these environments. We're not truly looking at the cost to provide that service over a three-, four- or five-year period, because if we start to look carefully at what that lifecycle cost is, we can see that these shared environments, these virtualized environments with automation, are a fraction of the cost of a dedicated environment.

Now, there will need to be an upfront investment. That, I think, is causing a lot of concern for our clients, because they look at it only in the short term. If we take a lifecycle approach and educate clients to look at the cost to provide that service, that's when we start to see that it's easy to make a business case for moving to one of these environments.

It's a change in the way a lot of our clients think about developing business cases. It's a new model and a new way of looking at it, but it's something that's occurring across the industry today, and will continue to occur.

Gardner: I'm curious about the relationship that you're finding as adoption levels increase from that 30 percent to 60 or 70 percent. Are the benefits coming in on a linear basis, as a fairly constant improvement? Or is there some sort of hockey-stick effect, whereby there is an accelerating level of business benefit as adoption increases?

Vogel: It really depends on the client situation, the type of applications, and their specific environment. Generally, we're still seeing increasing returns in almost a linear fashion, as we move into 60-70 percent virtualized.

As we move beyond that, it becomes client-specific. There are a lot of variables and a lot of factors in play, such as the type of applications that are running on it and the type of workloads and demands being placed on that environment. Some clients can still see benefits when they're 80-85 percent virtualized. Other clients will hit that economic threshold in the 60-65 percent virtualized range.

We do know that we're continuing to see benefits beyond that 30 percent, beyond the easy stuff, as they move into Tier 1 applications. Right now, we're looking at that 60-70 percent as the rule of thumb, where we're still seeing good returns for the investment. As applications continue to modernize and are better able to use virtual technologies, we'll see that threshold continue to increase into the 80-85 percent range.

Gardner: How about the type of payoff that might come as companies move into different computing models? If you have your sights set on cloud computing -- private cloud or hybrid cloud -- at some point, will you get a benefit or dividends from whatever strategic virtualization, governance and policy, and automation practices you put in place now?

Vogel: I don’t think anybody will question that there are continued significant benefits, as we start looking at different cloud computing models. If we look at what public cloud providers today are charging for infrastructure, versus what it costs a client today to stand up an equivalent server in their environment, the economics are very, very compelling to move to a cloud-type of model.

Now, with that said, we've also seen instances where costs have actually increased as a result of cloud implementation, and that's generally because the governance that was required was not in place. If you move to a virtual environment that's highly automated and you make it very easy for a user to provision in a cloud-type model and you don’t have correct governance in place, we have actually seen virtual server sprawl occur.

Everything pops up

All of a sudden, everybody starts provisioning environments because it's so easy, and everything in this cloud environment begins to pop up, which results in increased software licensing costs. Plus, we still need to manage those environments.

Without the proper governance in place, we can actually see cost increase, but when we have the right governance and processes in place for this cloud environment, we've seen very compelling economics, and it's probably the most compelling change in IT from an economic perspective within the last 10 years.

Gardner: So, there is a relationship between governance and automation. You really wouldn’t advise having them separate or even consecutive? They really need to go hand in hand?

Vogel: Absolutely. We've found in many, many client instances, where they've just gone out, procured hardware, and dropped it on the floor, that they did not realize the benefits they had expected from that cloud-type hardware. In order to function as a cloud, it needs to be managed as a cloud environment. That, as a prerequisite, requires strong governance, strong process, security controls, etc. So, you have to look at them together, if you're really going to operationalize a cloud environment, and by that I mean really be able to achieve those business benefits.

Gardner: Erik Frieberg, tying this back to data-center transformation, is there a relationship now that's needed between the architectural level and the virtualization level, and have they so far been distinct?

I guess I'm asking you the typical cultural question. Are the people who are in charge of virtualization and the people who are in charge of data center transformation the same people talking the same language? What do they need to do to make this more seamless?

Frieberg: I'll echo something Erik said. We hear clients talk about how it's not about virtualizing the server, but about virtualizing the service. Virtualizing a single server and putting it into production by cloning it is relatively straightforward. But, when you talk about an entire service and all the elements that make up that service, you're now talking about a whole host of people.

You get server people involved around provisioning. You’ve got network people. You’ve got storage people. Now, you're just talking about the infrastructure level. If you want to put app servers or database servers on top of this, you have those constituents involved, DBAs and other people. If you start to put production-level applications on there, you get application specialists.

You're now talking about almost a dozen people involved in what it takes to put a service in production, and if you're virtualizing that service, you have admins and others involved. So, you really have this environment of all these people who now have to work together.

A lot of automation is about automating specific tasks. But, if you want to automate and virtualize this entire service, you've got to get those 12 people together to look at the standard way to roll out that environment, and how to do it in today's governed, compliant infrastructure.

The coordination required, to use a term used earlier, isn’t just linear. It sometimes becomes exponential. So, there are challenges, but the rewards are also exponential. This is why it takes weeks to put these into production. It isn’t the individual pieces. You're getting all these people working together and coordinated. This is extremely difficult and this is what companies find challenging.

Gardner: Erik Vogel, it sounds as if this allows for a maturity benefit, or a sense of maturity around these virtualization benefits. This isn’t a one-off. This is a repeatable, almost a core, competency. Is that how you are seeing this develop now? A company should recognize that you need to do virtualization strategically, but you need to bake it in. It's something that's not going to go away?

Capability standpoint

Vogel: That's absolutely correct. I always tend to shy away from saying maturity. Instead, I like to look at it from a capability standpoint. When we look at just maturity, we see organizations that are very mature today, but yet not capable of really embracing and leveraging virtualization as a strategic tool for IT.

So, we've developed a capability matrix across six broad domains to look at how a client needs to start to operationalize virtualization as opposed to just virtualizing a physical server.

We definitely understand and recognize that it has to be part of the IT strategy. It is not just a tactical decision to move a server from physical machine to a virtual machine, but rather it becomes part of an IT organization’s DNA that everything is going to move to this new environment.

We're really going to start looking at everything as a service, as opposed to as a server, as a network component, as a storage device, how those things come together, and how we virtualize the service itself as opposed to all of those unique components. It really becomes baked into an IT organization’s DNA, and we need to look very closely at their capability -- how capable an organization is from a cultural standpoint, a governance standpoint, and a process standpoint to really operationalize that concept.

Gardner: Erik Frieberg, moving toward this category of being a capability rather than a one-off, how do you get started? Are there some resources, some tried and true examples of how other companies have done this?

Frieberg: At HP Software, we have a number of assets to help companies get started. Most companies start around the area of automation. They move up in the same six-level model -- "What are the basic capabilities I need to standardize, consolidate, and automate my infrastructure?"

As you move further up, you start to move into this idea of private-cloud architectures. Last May, we introduced the Cloud Service Automation architecture, which enables companies to come in and ask, "What is my path from where I am today to where I want to get tomorrow? How can I map that to HP's reference architecture, and what do I need to put in place?"

The key goal here is that we work with clients who realize that you don’t want a two-year payback. You want to show payback in three or four months. Get that payback and then address the next challenge and the next challenge and the next challenge. It's not a big bang approach. It's this idea of continuous payback and improvement within your organization to move to the end goal of this private cloud or hybrid IT infrastructure.

Gardner: Erik Vogel, how about future trends? Are there any developments coming down the pike that you can look in your crystal ball and say, "Here are even more reasons why that capability, maturity, and strategic view of virtualization, looking toward some of the automation benefits, will pay dividends?"

The big trend

Vogel: I think the big trend -- and I'm sure everybody agrees -- is the move to cloud and cloud infrastructures. We're seeing the virtualization providers coming out with new versions of their software that enable very flexible cloud infrastructures.

This includes the ability to create hybrid cloud infrastructures -- partially a private cloud that sits within your own site, with the ability to burst seamlessly to a public cloud when excess capacity is needed, and the ability to seamlessly transfer workloads in and out of the private cloud to a public cloud provider as needed.
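
As a rough sketch of that bursting behavior, the hypothetical Python below keeps new workloads on the private pool until a utilization threshold is crossed, then overflows them to a public provider. The capacity figures, threshold, and placement logic are illustrative assumptions only, not how any particular hybrid cloud product decides placement.

# Hypothetical sketch of a private-to-public "burst" placement decision.

PRIVATE_CAPACITY_VCPUS = 400
BURST_THRESHOLD = 0.80  # burst once the private pool is 80% committed

def place_workload(requested_vcpus, committed_vcpus):
    # Return where a new workload should run and the new commitment level.
    projected = committed_vcpus + requested_vcpus
    if projected / PRIVATE_CAPACITY_VCPUS <= BURST_THRESHOLD:
        return "private-cloud", projected
    # Over the threshold: send the workload out, leave private commitment as-is.
    return "public-cloud", committed_vcpus

if __name__ == "__main__":
    committed = 300
    for job, vcpus in [("batch-reports", 16), ("quarter-close", 48), ("test-env", 8)]:
        target, committed = place_workload(vcpus, committed)
        print(f"{job}: {vcpus} vCPUs -> {target} (private committed: {committed})")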

We're seeing a shift toward IT becoming more of a service broker, where services are sourced and not just provided internally, as was traditionally done. Now, they're sourced from a public cloud provider or a public-service provider, or provided internally on a private cloud or on a dedicated piece of hardware. IT now has more choices than ever in how it goes about procuring that service.

A major shift that we're seeing in IT is being facilitated by this notion of cloud. IT now has a lot of options in how they procure and source services, and they are now becoming that broker for these services. That’s probably the biggest trend and a lot of it is being driven by this transformation to more cloud-type architectures.

Gardner: Okay, last word to you Erik Frieberg. What trends do you expect will be more of an enticement or setup for automation and virtualization capabilities?

Frieberg: I'd just echo what Erik said and then add one more aspect. Most people, when they look at their virtualization infrastructure, aren't going with a single provider. They're looking at having different virtualization stacks, whether provided by hardware or software vendors, as well as incorporating other infrastructures.

The ability to be flexible and move different types of workloads to different virtualized infrastructures is key, because having that choice makes you more agile in the way you can do things. It will absolutely lower your costs and provide the infrastructure that leads to the higher quality of service that IT is trying to deliver to end users.

Gardner: It also opens up the marketplace for services. If you can do virtualization and automation, then you can pick and choose providers. Therefore, you get the most bang for your buck and create a competitive environment. So that’s probably good news for everybody.

Frieberg: Exactly.

Gardner: We've been discussing how automation, governance, and capabilities around virtualization can take the sting out of moving toward a strategic level of virtualization adoption. I want to thank our guests. We've had a really interesting discussion with Erik Frieberg, Vice President of Solutions Marketing at HP Software. Thank you, Erik.

Frieberg: Thank you very much.

Gardner: And also Erik Vogel, Practice Principal and America's Lead for Cloud Resources at HP. Thanks to you also, Erik.

Vogel: Thank you.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a podcast discussion on how automation and best practices allow for far greater degrees of virtualization and efficiency across enterprise data centers. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Tuesday, August 17, 2010

Modern Data Centers Require Efficiency-Oriented Changes in Networking

Transcript of a sponsored podcast discussion on modernizing data centers to make them adaptable and flexible.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Special Offer: Gain insight into best practices for transforming your data center by downloading three whitepapers from HP at www.hp.com/go/dctpodcastwhitepapers.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we present a sponsored podcast discussion on the increasingly essential role of networking in data center transformation (DCT).

As data center planners seek to improve performance and future-proof their investments, the networking leg of the infrastructure stool can no longer stand apart. The old rules of networking need to change, and that's because specialized, labor-intensive, and homogeneous networking systems need to be brought into the total modern data center picture -- to jointly cut complexity while spurring adaptability and flexibility.

Advances such as widespread virtualization, increased modularity, converged infrastructure, and cloud computing are all forcing a rethinking of data center design. The architecture needs to accommodate the total usage patterns and requirements of both today and tomorrow -- with an emphasis on openness, security, flexibility, and sustainability.

Networking, therefore, needs to be better architected within -- and not bolted onto -- this changeable DCT equation. Here to discuss how networking is changing, and how organizations can better architect networking into their data center's future, we're joined by two executives from HP. Please join me in welcoming Helen Tang, Worldwide Data Center Transformation Solutions Lead. Welcome to the show, Helen.

Helen Tang: Thanks, Dana. Glad to be here.

Gardner: We are also here with Jay Mellman, Senior Director of Product Marketing in the HP Networking Unit. Good to have you with us, Jay.

Jay Mellman: Thanks, Dana. Glad to be here too.

Gardner: Helen, tell us a little bit about the environment now for DCT. Why are organizations focused on this? Why is it such an important element in how they can plan their future, and begin to take more and better control over their IT efforts?

Tang: Great question, Dana. As we all know, in 2010 most IT organizations are wrestling with the three Cs -- reducing cost, reducing complexity, and tackling the problem of hitting the wall on capacity from a space and energy perspective.

The reason it's happening is that IT is stuck between two different forces. One is the decades of aging architecture, infrastructure, and facilities they have inherited. The other is that the business is demanding ever-faster services and better improvements in IT's ability to meet requirements.

The confluence of these has driven IT to the only solution: embracing change and starting a strategic program of transformation -- a series of integrated data center projects and technology initiatives that can take them from this aging, inherited architecture to an architecture suited for tomorrow's growth.

Gardner: How does networking fit into that? What's the traditional role of the networking, versus where it should be?

Generic term

Tang: Let me set the context a little bit, before we dive into the role of networking within DCT. DCT is a fairly generic term. We don’t have a unified definition.

I'm a little biased, of course, coming from HP, but I do think HP has the most comprehensive answer for what people need to think about in terms of DCT. That includes four things: consolidation, whether it's infrastructure, facilities, or applications; virtualization and automation; continuity and sustainability, which address energy efficiency as well as business continuity and disaster recovery; and last, but not least, converged infrastructure.

So networking actually plays in all these areas, because it is the connective tissue that enables IT to deliver services to the business. It's very critical. In the past this market has been largely dominated by perhaps one vendor. That’s led to a challenge for customers, as they address the cost and complexity of this piece.

Gardner: Okay, Jay, how has HP adapted to this in recognizing the need for change? What's been its response?

Mellman: The response we've had is to recognize that it's one thing to say we want lower cost and more responsiveness when we do a DCT, but it's another to really understand how and why to get there. There has been a dramatic change in what's demanded of networking in a data center context.

The sheer number of connections has exploded, as we went from single large servers to multiple racks of servers and then to multiple virtual machines for services, all of which need connectivity. We have management integration that has been very difficult to deal with. We have different management constructs between servers, storage, and networking. Finally, there are things like cost and, more important, time to service.

HP has been recognizing that customers are increasingly not being judged on the quality of an individual silo. They're being judged on their ability to deliver service, do that at a healthy cost point, and do that as the business needs it. That means that we've had to take an approach that is much more flexible. It's under our banner of FlexFabric.

Today's architecture, as Helen said, is very rigid in the networking space. It's very complex, with lots of specialized people and specialized knowledge. It's very costly and, most importantly, it really doesn't adapt to change. The kind of change we see, as customers move virtual machines around, is exactly the kind of thing we need in networking and don't have.

Gardner: I'd like to get into some examples a little later, but maybe it would be good to just understand what some of the paybacks are if you do this properly, if you architect and design your data center well. If the networking, as a connective tissue, plays its role properly, what sort of paybacks are typical?

Tang: The payoff of transformation for the IT organization is great, along three axes. One is increasing agility and time to market, improving service levels to the point that we can now deliver any application in perhaps a couple of weeks, as opposed to months and months. The second is mitigating risk. We're looking at reducing manual errors through some of the automation capabilities.

Tremendous cost reduction

Last but not least, of course, is reduction in cost. We've seen just tremendous cost reduction across the board. At HP, when we did our own DCT, we were able to save over a billion dollars a year. For some of our other customers, France Telecom for example, it was €22 million in savings over three years -- and it just goes on and on, both from an energy cost reduction, as well as the overall IT operational cost reductions.

Mellman: Let me jump in and give some specifics on the relationship to networking. When we look at agility and the ability to improve time-to-service, we often see an order of magnitude, or even two orders of magnitude, of improvement -- turning a rollout process that might take months into hours or days.

With that kind of flexibility, you avoid the silos -- not necessarily just in technology, but in the departments, as requests flow from the server and storage teams to the networking team. So, there are huge improvements there. If we look at automation and risk, I also include security here.

It's very critical, as part of these, that security be embedded in what we're doing, and the network is a great agent for that. In terms of the kinds of automation, we can offer single panes of glass to understand the service delivery and very quickly be able to look at not only what's going on in a silo, but look at actual flows that are happening, so that we can actually reduce the risk associated with delivering the services.

Finally, in terms of cost, we're seeing -- at the networking level specifically -- reductions on the order of 30 percent to as high as 65 percent by moving to these new types of architectures and new types of approaches, specifically at the server edge, where we deal with virtualization.

There are opportunities where we go from more than 210 different networking components required to solve a certain problem down to two modules. You can see that's consolidation, convergence, cost reduction, and simplicity all coming together.

Gardner: Jay, give us a little historical context for this. HP has obviously been involved with networking for some time, and now the rules have changed. What's the pattern and what’s the expertise that’s been developing over the years to come together now?

Mellman: It's a real meaty question, Dana, and I appreciate it. We've been in the business for 25 to 30 years, and we have successfully become the number two vendor in the industry, selling primarily at the edge. Within the last couple of years, we've recognized that, beyond the business opportunity, customers were telling us that there were so many changes happening in their environments, both at the edge of the network and in the data center, that they felt they needed a new approach.

Look at the changes that have happened in the data center just in the last couple of years -- the rise of virtualization and being able to actually take advantage of that effectively, the pressures on time to market in alignment with the business, and the increasing risk from security and the increasing need for compliance.

Tie all these together, and HP felt this is the right time to come to market. The other thing is that these are problems that are being raised in the networking space, but they have direct linkage to how you would best solve the problem.

Bringing talent together

Instead of solving it with just networking technology, we can do a better job, because we can bring the right engineering talent together and solve it in an appropriate way. That balances the networking needs with what we can do with servers, storage, software, security, and power and cooling, because oftentimes the solution may be 90 percent networking, but it involves other pieces as well.

We saw a real requirement from customers to come in and help them, as Helen said, create more flexibility, drive risk down, improve time to service and take cost out of the system, so that we are not spending so much on maintenance and operation, and we can put that to more innovation and driving the business forward.

There are a couple of key rules here. The first is to drive simplicity. The job of a network admin needs to be made as simple, and needs as much automation and orchestration, as the jobs of sysadmins or SAN admins today.

The second is that we want to align networking more fully with the rest of the infrastructure, so that we can help customers deliver the service they need when they need it, to users in the way that they need it. That alignment is just a new model in the networking space.

Finally, we want to drive open systems, first of all because customers really appreciate that. They want standards and they want to have the ability to negotiate appropriately, and have the vendors compete on features, not on lock-in.

Open standards also allow customers to pick and choose different pieces of the architecture that work for them at different points in time. That allows them, even if they are going to work completely with HP, the flexibility and the feeling that we are not locking them in. What happens when we focus on open systems is that we increase innovation and we drive cost out of the system.

What we see are pressures in the data center, because of virtualization, business pressures, and rigidity, giving us an opportunity to come in with a value proposition that really mirrors what we’ve done for 25 years, which is to think about agility, to think about alignment with the rest of IT, and to think about openness and really bringing that to the networking arena for the first time.

Tang: The traditional silos between servers and storage and networking are finally coming down. Technology has come to an inflection point. We're able to deliver a single integrated system, where everything can be managed as a whole that delivers incredible simplicity and automation as well as significant reduction in the cost of ownership.

Gardner: We’ve talked quite a bit about the technical issues that can lead to rigidity and complexity, being ill-equipped to support heterogeneity, and so forth. But there is a non-technical side of this problem as well: different teams, different cultures, lack of collaboration, and lack of a common language in many cases. What does moving toward the DCT level do for that?

Mellman: What a great challenge organizations have. Increasingly we do see our customers pulling these separate teams together more and more. It’s just been forced to happen, because the applications are more complex. Where we used to have single large applications, now data is pulled from a lot of different places.

We’ve got all sorts of different connectivity going on and going to different storage banks and server banks. But, what you have to do, if you're going to do this effectively, is not assume that you're going to get everyone understanding each other. You want to build systems that allow people to be in their universe and be in their roles, but collaborate better.

Different technologies

It’s sort of the old, "You can’t assume away the problem." I can’t just build product and assume that the networking team is going to suddenly go work with the server team. As you said, they’ve got different languages and frankly, it’s different technologies. What you want is the ability to have management tools and capabilities that allow these teams to better work together.

For example, we have a product called Virtual Connect, which has a management component called Virtual Connect Enterprise Manager. It allows the networking team and the server teams to work off the same pool of data. Once the networking team allocates connectivity, the server team can work within that pool, without having to go back to the networking team for the latest new IP address and new configurations.

HP is really focused on how we bring the power of that orchestration, and the power of what we know about management, to allow these teams to work together without requiring them, in a sense, to speak the same language, when that’s often the most difficult thing that they have to do.

Tang: I want to add to that. To some extent, Dana, it really is a cultural shift, and that kind of cultural change is going to meet resistance along the way. It needs to be effected all the way from the top down. This needs to be something that the CIO actively promotes. HP actually has a service offering that really helps with this -- it's called the Data Center Transformation Experience Workshop.

We bring all the stakeholders into the same room -- the server team, networking team, storage, and facilities, and even the financial side of IT -- so that we achieve alignment across all of them. They talk about the standard issues everybody needs to tackle when thinking about doing any kind of transformation, across not just the technology areas, but also processes, management tools, governance, etc. Coming out of it, you achieve alignment and come up with an actionable roadmap specifically customized for that organization.

Gardner: Okay. Moving to the solution level and looking at the future, a lot of organizations really can't predict exactly what situation they'll be in, in five or seven years, and data centers are often designed to last 20 years or more. So where does the ability to forecast come in, and how do you also bring in the legacy? I'm wondering what the design elements are that allow the past, present, and future to all be accommodated, particularly in the networking part of the equation.

Mellman: A point HP always comes back to is that we can't afford to have customers rip and replace or throw away their existing infrastructure for the promise of some new future.

There are quite a few vendors out there who are saying that the future is all about cloud and the future is all about virtualization. That ignores the fact that the lion's share of what's in a data center still needs to be kept. You want an architecture that supports that level of heterogeneity and may support different kinds of architectural precepts, depending on the type of business, the types of applications, and the type of pressures on that particular piece.

What HP has done, and we do this in combination with HP Labs and with our services organization, is try to get a handle on what is that future going to look like without prescribing that it has to be a particular way. We want to understand where these points of heterogeneity will be and what will be able to be delivered by a private cloud, public cloud, or by more traditional methods and bring those together, and then net it down to architectural things that makes sense.

We realize that there will be a high degree of virtualization happening at the server edge, but there will also be a large number of physical servers, especially for some big apps that may not be virtualized for a long time -- Oracle, SAP, some of the Microsoft applications. Even when they are, they're going to be virtualized with potentially different virtualization technologies.

Physical and virtual

Even with a product like Virtual Connect, we want to make sure that we're supporting both physical and virtual server capabilities. With our Converged Network Adapters, we want to support all potential networking connectivity -- whether it's Fibre Channel, iSCSI, Fibre Channel over Ethernet, or server and data technology -- so that we don't have to lock customers into a particular point of view.

We recognize that most data centers are going to be fairly heterogeneous for quite a long time. So, the building blocks that we have, built on openness and built to be managed and secure, are designed to be flexible in terms of how a customer wants to architect. At the core and aggregation levels in the data center -- what we do to actually power connectivity and make sure that people get it -- we're already looking at technologies like our Intelligent Resilient Framework, which allows clustering.

We're looking within a tier, across tiers, and across geographies, and at how customers can use that to create a standard three-tier architecture, a new collapsed two-tier architecture, or distributed, active, geographically dispersed data centers that are managed from a single IP address.

We don't want to have to tell a customer they have to adopt something in a particular way, when the state of their equipment, what their IT assets are, and what they may have to accomplish in five years may be something that hasn't even been thought of yet.

Gardner: Another thing that I often hear -- when compliance, regulations, and security are brought into the equation -- is the need to do all of that comprehensively. Trying to take a spot approach to security and compliance runs into trouble. So, is there a payback here beyond the technical, when you have a comprehensive data center architecture in mind that includes the networking elements along with these others, so that you can manage comprehensively and, therefore, catch those security practice quirks and provide for regulatory and compliance adherence?

Mellman: Let's focus on the security and compliance issue. There are really two issues here that are critical. The first is what I might call traditional network security and protection. How do I use the components to actually protect the infrastructure, but more importantly the applications and the data?

We have the leading intrusion prevention system on the market today, powered by our TippingPoint assets, which helps do this in a very easy fashion. Instead of having policies that you have to continually update on different firewalls, on different parts of the network, on servers, and all across the infrastructure, you impose a policy centrally and it's automatically managed within the TippingPoint infrastructure.

The idea is that it's automatically updated with research from leading researchers around the world, so that the infrastructure never has to experience the malware. It's never taken down. It's zero-day protection. You do it in an easy way. And easy is critical, because security is the place where, when it starts to get hard, people stop doing it. They stop updating their virus definitions. They stop updating their policies. They figure nothing has happened.

Step one is to protect the infrastructure, and we can do that both at the infrastructure level and increasingly at the application level. The compliance and risk management then gets to how well we're actually operating our environments. So whether it's payment card industry (PCI) or the Health Insurance Portability and Accountability Act (HIPAA), those requirements mean that you have to have a more orchestrated and, in some cases, automated approach across the whole infrastructure.

If an organization has a separate networking structure, think of the overhead involved in trying to help that come to the table with the rest of the IT organization in meeting those standards. It's going to be much, much easier, when we take a converged approach, when the network is actually a first citizen in the IT organization. If nothing else, it will be more efficient. But, in many cases we actually see it being more effective as well in terms of being able to proactively meet compliance requirements and the needs of particular verticals in doing that.

Reduction in risk

We see it really in two fashions: we can improve not only the security and protection of the network, the applications, and the assets, but also the alignment and reduction in risk that play into how well we can help an organization comply.

Gardner: I'm fond of hearing how this works in practice. Are there any customer stories perhaps, either named ones or use cases that you can share, which demonstrate what happens when you properly architect, converge, and bring networking into the modern data center approach?

Mellman: We have quite a few. The best one that gets people’s attention is our own HP IT. They looked at a variety of different approaches to architecting their data centers and selected HP networking, so much so that they said, "Even if you don’t buy the company, in this case 3Com, we're going to use the product."

They've now been able to re-architect one of their six data centers that run all of HP -- that is, all 300,000 employees, plus partners, plus the supply chain -- around HP networking gear that includes core, aggregation, edge, and security products.

They're getting half the power utilization and twice the performance out of the HP devices compared with their previous vendor's gear, which is quite a good outcome in and of itself.

But the other thing it demonstrates, which is really critical, is that customers usually think they have to bite everything off at once. HP IT re-architected one of these data centers, and it's now completely interoperable with the previous infrastructure that was built on a competitor's technology. So, we're seeing that those kinds of benefits are easily gainable in those environments.

Gardner: I should point out, just for our listeners' sake, that you did actually go out and buy 3Com, and it's now fully part of HP, isn't that right?

Mellman: That's exactly right. It fits into the product lines that we are offering as HP Networking -- the A-Series, which is our most advanced, and the E-Series, which is our essential line and comes with a lifetime warranty. Our V-Series is targeted at the small and medium business (SMB) market, and the S-Series is powered by TippingPoint and is really focused on enterprise network security.

Another example is what we're seeing at the server edge with a product like Virtual Connect, where we virtualize the connections at the back of the servers. Instead of having to cable everything multiple times and send people in to remove wires, it takes a wire-once approach and virtualizes the connections and the bandwidth, so that you can operate four separate virtual pipes on a single 10 Gb link and deliver a huge amount of value.
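Here is a minimal sketch of that wire-once idea. The pipe names, bandwidth splits, and classes are invented for illustration, not an actual Virtual Connect configuration; the point is that one physical uplink is shared, and re-allocating bandwidth is a software change rather than a cabling change.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of wire-once networking: one physical 10 Gb uplink
// carved into several virtual pipes whose shares are adjusted in software.
public class WireOnceSketch {

    static final double PHYSICAL_LINK_GBPS = 10.0;

    static void printAllocation(Map<String, Double> pipes) {
        double allocated = pipes.values().stream()
                .mapToDouble(Double::doubleValue).sum();
        if (allocated > PHYSICAL_LINK_GBPS) {
            throw new IllegalStateException("Oversubscribed physical link");
        }
        pipes.forEach((name, gbps) ->
                System.out.printf("%-16s %.1f Gb/s of the shared 10 Gb uplink%n", name, gbps));
        System.out.println();
    }

    public static void main(String[] args) {
        Map<String, Double> virtualPipes = new LinkedHashMap<>();
        virtualPipes.put("production-web", 4.0);
        virtualPipes.put("live-migration", 3.0);
        virtualPipes.put("backup", 2.0);
        virtualPipes.put("management", 1.0);
        printAllocation(virtualPipes);

        // Re-allocating is a configuration change, not a trip to the rack.
        virtualPipes.put("backup", 1.0);
        virtualPipes.put("production-web", 5.0);
        printAllocation(virtualPipes);
    }
}
```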

What we're seeing when customers move to this approach in a data center is up to a 95 percent reduction in the network gear required, upwards of a 70 to 80 percent reduction in cabling, time-to-market improvements from weeks down to hours or days, and dramatic cost reductions, because there is simply much less gear.

Real opportunities

These are real things, real opportunities, that customers could take advantage of that allow them to operate more effectively and have dramatically less complexity in their data center architectures.

Gardner: For those interested in pursuing more and getting more information about the convergence and the commonality of the design for networking, as well as other components within the modern data center, where can they go? What are some resources and how do they get started?

Tang: At a high level, a good place to go is www.hp.com/go/dct. That's got all kinds of case studies, video testimonials, and other resources for you to see what other customers are doing. As I mentioned earlier, the Data Center Transformation Experience Workshop is also very valuable. It's only half a day and has no PowerPoint. Customers really love that, and it's something I highly recommend.

Mellman: What we ask in terms of networking is for customers to think about where their biggest pain points are. Certainly, exploring hp.com is a good start, but it's understanding where a customer is feeling the most pain. Is it at the server edge, because you're struggling with how many virtual servers you have to manage and what it is taking to deal with that? Is it at the network core, because you feel that your current gear is out of gas and maintaining a hierarchical approach to networking, when you’ve got to fan out across the data center, is just causing too much pain and too much cost? It may be security that's the biggest pain point.

Once you identify that, we simply advise engaging with HP’s services organization or our sales organization or one of HP’s partners and having the discussion of what's possible.

What we hear from our customers is about the level of simplicity we can bring, the level of automation and orchestration that's possible, and the fact that we're bringing breakthroughs that really demonstrate convergence. Whether it's Fibre Channel over Ethernet (FCoE) between networking and storage, Virtual Connect and the virtualization of connectivity for wire-once, on-demand links between servers and networking, or the deep integration between networking and HP's management software, they recognize that convergence at the technology level is paying off in real benefits.

It’s having the customer just step back and say, "Where is my biggest pain point?" The nice thing with open systems is that you can generally address one of those, try it out, and start on that path. Start with a small workable project and get a good migration path toward full transformation.

Gardner: I suppose that the pain most folks are dealing with will probably only increase should they stand still. We're starting to see recovery in a number of economies around the globe. Things like virtualization have certainly been growing quickly and are being implemented more deeply in organizations, and we're looking at a lot of interesting private clouds and other infrastructure approaches to improve efficiency and flexibility. So the pain will probably only get worse, and the rationale for looking at networking differently and rethinking it will only get stronger, I expect.

Mellman: That's going to be true for a number of technology areas, but what we really see are customers looking at convergence and the opportunity of cloud and getting confused about whether it is the means or the end.

Means to an end

We see a lot of vendors out there treating virtualization and cloud as the end in and of itself. In reality, HP feels very strongly that these are the means to the end. The end is how you deliver services, and how you align to do that in a cost-effective way, one that we believe will use convergence but that also gives you the flexibility to deliver those services and meet changing service requirements over time.

Customers that don't look at this in the appropriate way will find themselves falling further and further behind. A lot of this is accelerating, and what we believe at HP, and have said for years, is that we focus on the outcome. So, where virtualization is an appropriate tool, we will leverage it to the extent that it makes sense -- and we're doing that today.

But, we won’t sit there and say, "Here, we're going to give you a strict virtualization solution, because we may think that that’s only part of the way to the end." Our goal is to help customers focus on what they are trying to accomplish and then bring them the appropriate set of products, solutions, services, and even things from partners, to make that happen.

If the appropriate way for a customer to deliver that service is through a private cloud, or a public cloud that gets interwoven with the other offerings they deliver via on-premises resources, then we are here to help them do that. That's really where we think the rubber will meet the road, and where customers that take the approach HP is delivering will end up ahead.

Gardner: Great. We've been discussing how networking is changing, how organizations can better architect networking into their data centers for simplicity and automation, and how these issues affect the business as well as the technology side of the house. We've been joined by Helen Tang, Worldwide Datacenter Transformation Solutions Lead at HP. Thanks, Helen.

Tang: Thank you, Dana.

Gardner: And we’ve also been joined by Jay Mellman, Senior Director of Product Marketing in HP Networking. Thank you.

Mellman: Thank you, Dana, for the opportunity.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Special Offer: Gain insight into best practices for transforming your data center by downloading three new data center transformation whitepapers from HP at www.hp.com/go/dctpodcastwhitepapers.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a sponsored podcast discussion on modernizing data centers to make them adaptable and flexible. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Friday, August 06, 2010

Cloud Computing's Ultimate Value Depends on Open PaaS Models to Avoid Application and Data Lock-In

Transcript of a sponsored podcast discussion on open markets for cloud computing services and the need for applications that can move from one platform to another with relative ease.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: WSO2.

Get the free "Cloud Lock-In Prevention Checklist" here.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on openness, portability and avoiding unnecessary application lock-in in the use of cloud computing.

A remaining burning question about the value and utility of cloud computing is whether applications and data can move with relative ease from cloud to cloud -- that is, across so-called public- and private-cloud divides, and among and between various public cloud providers.

For enterprises to determine the true value of cloud models -- and to ascertain if their cost and productivity improvements will be sufficient to overcome their disruptive shift to cloud computing -- they really must know the actual degree of what I call "application fungibility."

Fungible means being able to move in and out of like systems or processes. A bushel of corn, for example, is fungible regardless of the market you buy it in. You can buy and sell a bushel of corn as a commodity across multiple markets and multiple buyers: they know what they're getting.

But what of modern IT applications? Wouldn’t cloud models be far more attractive, and hybrid cloud models much more attainable, if applications (or instances of applications) were largely fungible -- able to move from cloud to cloud -- and still function?

Application fungibility would, I believe, create a real marketplace for cloud services, something very much in the best interest of enterprises, small and medium businesses (SMBs), independent software vendors (ISVs), and developers.

Fungible applications could avoid the prospect of swapping on-premises platform lock-in for some sort of cloud-based service provider lock-in and, perhaps over time, prevent being held hostage to rising cloud prices.

Today, we'll examine how enterprises and developers should be considering the concept of application fungibility, both in terms of technical enablers and standards for cloud computing, and also consider how to craft the proper service-level agreements (SLAs) to promote fungibility of their applications.

Here to discuss how application fungibility can bring efficiency and ensure freedom of successful cloud computing, we're now joined by Paul Fremantle, Chief Technology Officer and Co-Founder at WSO2. Welcome back to BriefingsDirect, Paul.

Paul Fremantle: Hi, Dana. Nice to see you again.

Gardner: We're also here with Miko Matsumura, author of SOA Adoption for Dummies and an influential blogger and thought leader on cloud computing subjects. Welcome back, Miko.

Miko Matsumura: Great to be here.

Gardner: So, as for this open vision of cloud computing, I think many people have a vision that's perhaps a little grander than the current reality. Let's go to you first, Miko. What's the difference between the popular vision of cloud computing and what's really available now? Is there much fungibility available?

Low fungibility

Matsumura: Fungibility is very, very critical, and one thing I want to emphasize is that the fungibility level of current solutions is very low. There's a very logical history behind this.

One of the things we really need to understand about cloud computing is the word you used in introducing the topic, this concept of "disruptive": the notion that you can have this kind of elasticity and that the application can actually scale pretty radically within a cloud environment. This is one of the primary attractive aspects of entering into a cloud paradigm. So, that's a really neat idea.

The economics of upscaling and downscaling as a utility are very attractive. Obviously, there are a lot of reasons why people would start moving into the cloud, but the thing that we're talking about today with this fungibility factor is not so much why you would start using cloud, but really what the endgame is for successful applications.

The thing that's really intriguing is that, if your application in the cloud is unsuccessful and nobody uses it, it doesn’t really matter. You don’t need to move it. You don’t need to pay for it. In fact, the requirement that you don’t pay for an app that isn’t successful is a very good benefit to the business.

The area where we are specifically concerned is when the application is more successful than in your wildest dreams. In some ways, that creates an almost unprecedented leverage point for the supplier. If you're locked in with a very high-transaction, high-value application and, at that point, you have no flexibility or fungibility, you're pretty much stuck. The history of vendor pricing power could be replicated in the cloud, and it could potentially be even more significant.

In terms of a direct answer, fungibility, as it's being offered today, is very poor, and there are very few solutions in the market that offer a way for people to pragmatically move once things start to take off and are successful.

Gardner: Paul Fremantle, do you also share this perception that there isn't very much fungibility? Why would people allow themselves to get into a position where their applications are locked in, perhaps even more severely than they had been in an on-premises deployment?

Fremantle: That's a really interesting question, Dana. The reality of cloud is that people are jumping on it, and I can understand why. In the current situation, many infrastructure teams and infrastructure providers within large organizations unfortunately have got to the point where it takes many months to provide a piece of hardware for a new app.

Just roll back a few years. Imagine it took 12 months to build the app and 3 or 4 months to provide the hardware. That's fine. You have 8 months of development before you even need to go talk to the infrastructure guys and say, "I need some hardware for this."

Roll forward now, and people are building apps in a month, a week, or even a day, and they need to be hosted. The infrastructure team unfortunately hasn’t been able to keep up with those productivity gains.

Now, people are saying, "I just want to host it." So, they go to Amazon, Rackspace, ElasticHosts, Joyent, whoever their provider is, and they just jump on that and say, "Here is my credit card, and there is a host to deploy my app on."

No way out

The problem comes when, exactly as Miko said, that app is now going to grow. And in some cases, they're going to end up with very large bills to that provider and no obvious way out of that.

You could say that the answer to that is that we need cloud standards, and there have been a number of initiatives to come up with standard cloud management application programming interfaces (APIs) that would, in theory, solve this. Unfortunately, there are some challenges to that, one of which is that not every cloud has the same underlying infrastructure.

Take Amazon, for example. It has its own interesting storage models. It has a whole set of APIs that are particularly specific to Amazon. Now, there are a few people who are providing those same APIs -- people like Eucalyptus and Ubuntu -- but it doesn’t mean you can just take your app off of Amazon and put it onto Rackspace, unfortunately, without a significant amount of work.

As we go up the scale into what's now being termed as platform as a service (PaaS), where people are starting to build higher level abstractions on top of those virtual machines (VMs) and infrastructure, you can get even more locked in.

When people come up with a PaaS, it provides extra functionality, but now it means that instead of just relying on a virtualized hardware, you're now relying on a virtualized middleware, and it becomes absolutely vital that you consider lock-in and don’t just end up trapped on a particular platform.
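One common way to limit that exposure, sketched below with made-up adapter classes rather than real Amazon or Rackspace client code, is to keep application logic written against a neutral interface and isolate each provider's specific APIs behind an adapter.

```java
import java.util.List;

// Hypothetical sketch: the application depends only on a neutral interface,
// and each cloud's quirks live in an adapter. The adapter bodies are
// placeholders, not real EC2 or Rackspace calls.
public class PortabilitySketch {

    interface ComputeProvider {
        String provisionInstance(String image, String size);
        List<String> listInstances();
    }

    static class AmazonAdapter implements ComputeProvider {
        public String provisionInstance(String image, String size) {
            // Would translate to Amazon-specific calls (AMIs, instance types, storage models).
            return "i-amazon-0001";
        }
        public List<String> listInstances() { return List.of("i-amazon-0001"); }
    }

    static class RackspaceAdapter implements ComputeProvider {
        public String provisionInstance(String image, String size) {
            // Would translate to Rackspace-specific calls and image identifiers.
            return "rs-server-7";
        }
        public List<String> listInstances() { return List.of("rs-server-7"); }
    }

    // Application code never names a vendor, so swapping clouds becomes a
    // configuration change rather than a rewrite.
    static void deployApplication(ComputeProvider cloud) {
        String id = cloud.provisionInstance("standard-linux", "medium");
        System.out.println("Deployed app server " + id + "; running: " + cloud.listInstances());
    }

    public static void main(String[] args) {
        deployApplication(new AmazonAdapter());
        deployApplication(new RackspaceAdapter());
    }
}
```

The abstraction doesn't remove the porting work by itself, but it confines provider-specific assumptions to one place, which is roughly the property the standards efforts are trying to deliver.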

Gardner: Miko, we used to hear, going on 15 years ago, the notion of "write once, run anywhere," and that was very attractive at the time. But, I think what we're pointing out now is this ability to write once and deploy anywhere.

Maybe you could tell us how "write once, run anywhere" got going, because I know that at that time you were involved quite a bit with Java. Is there a sense of an offspring with cloud that we should look to in terms of this ability of fungibility?

Matsumura: That's a very good and exciting parallel. On the development side, one of the things that naturally evolved, as a result of the emergence of a common foe, is this principle of unification, openness, and alliance.

It's a funny thing. It goes way farther back than even the ancient Greeks banding together to attack the Hittites at Troy, or the moon landing, where the United States was unified against the Russians. Every major advance in technology seems to be associated with everybody getting together in order to fight a common foe.

So, it's a very funny thing to see, because "write once, run anywhere" was really just a response, in Java terms, to the emergence of a dominant Microsoft, and in some ways it's an interesting emergent phenomenon.

Emergent players

The things to look at in the cloud world are who the emergent dominant players are, and whether Amazon, Google, or one of these players will start to behave as an economic bully. Right now, since we're in the early days of cloud, I don't think people yet feel the potential for domination that would drive that kind of friendly, open, band-together behavior.

People who are thinking ahead to the endgame are pretty clear that that power will emerge and that any rational, publicly traded company will maximize its shareholder value by applying any available leverage. Because, if you have leverage against the customer, that produces very benevolent looking quarterly returns.

Gardner: Paul Fremantle, as I mentioned a little earlier in the setup, it's now the time when enterprises are starting to do their cost-benefit analysis, asking what makes sense to keep close to the vest, within their control and on-premises, and what might be more of a commodity function, application, or service for which they would look to a cloud model.

But, it seems to me that you can't really take that without considering what degree of fungibility is involved. So, from your perspective, what are the potential economics here?

Fremantle: The economics are really interesting, and there are two ways of looking at them. A lot of people are looking at economics to say, "What is the internal cost of hosting? Can I move my CAPEX to OPEX and pay-per-use?"

Unfortunately, the big issue that comes in there, in most people's mind, is security. Can they move things to a public cloud because of security? Do they need a private cloud? Those are the simplistic first steps people go through as they start looking at cloud.

Two other interesting angles need to be looked at. The first of those is about exactly what you just came up with, which is, what is my competitive advantage? Where do I particularly gain advantage over my competitors, and through which services?

That's a very important aspect to look at when you move to cloud, both software as a service (SaaS) and the lower-level functions, because you don't want to move something that you consider a core strength out to be a generic service.

So, if you think that your proprietary algorithms for customer relationship management (CRM) are absolutely vital to the success of your organization, the last thing you want to do is dump those and go to Salesforce.com. That's the first aspect.

The second aspect is, can you apply a portfolio model? Can you look at the aspects that are high value to you and the aspects that are business as usual, and ask, "Beyond the basic cost improvement of moving my customer relationship management to Salesforce, can I still apply my own special sauce, even when I'm using low-value cloud services?"

Simple example

This is just a really simple example. For CRM, WSO2 uses a cloud-based provider, Sugar On-Demand. We also have mashups, which we host in the cloud, that take that Sugar On-Demand system and mash it up to provide extra value to us.

So, we're going beyond the basic commodity service and starting to get extra value. To me, that's one of the really cool things to think about in cloud. It's not just about private cloud, public cloud, and hybrid, but about how I can mash up the internal secret sauce of my company, the stuff that gives me competitive advantage, with the low-cost commodity services on the web, and start to get more out of those.

If you then think about that in a fungible environment, deciding where to move those pieces of code, how to host them, and where to run them, you start to get a dynamic IT organization that can drive business value for the company.
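As a rough illustration of that mashup pattern, here is a small sketch. The CRM feed is stubbed with invented records rather than a real Sugar On-Demand call, and the scoring rule is a made-up stand-in for whatever internal logic a company would keep to itself.

```java
import java.util.List;

// Hypothetical sketch of "commodity service plus secret sauce": opportunity
// data comes from a hosted CRM (stubbed here), while the proprietary
// prioritization logic stays in-house.
public class CrmMashupSketch {

    record Opportunity(String account, double dealSizeUsd, int daysOpen) {}

    // Stand-in for a call to the hosted CRM's API.
    static List<Opportunity> fetchFromHostedCrm() {
        return List.of(
                new Opportunity("Acme Air", 250_000, 30),
                new Opportunity("Globex", 40_000, 95));
    }

    // The "secret sauce": internal scoring the company would not hand over.
    static double proprietaryScore(Opportunity o) {
        return o.dealSizeUsd() / (1 + o.daysOpen());
    }

    public static void main(String[] args) {
        fetchFromHostedCrm().stream()
                .sorted((a, b) -> Double.compare(proprietaryScore(b), proprietaryScore(a)))
                .forEach(o -> System.out.printf("%-10s score=%.0f%n",
                        o.account(), proprietaryScore(o)));
    }
}
```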

Gardner: Miko, I mentioned a little earlier the idea of a marketplace, where competition and freedom of movement and transparency would have a positive effect and allow the buyers of services to pick and choose freely. Therefore the onus goes to the provider to have the best service at the best price.

What I think Paul just described is an opportunity where processes are composed of services, some of which might be coming from a cloud of clouds, both on-premises and off. So, it seems as if an insidious movement toward inevitable cloud-services involvement with your business processes is under way, perhaps without organizations recognizing that they might not be in a true marketplace.

Do you think that this is going to happen whether people plan for it or not, and should they therefore recognize that they need a marketplace now, rather than waiting for these services to be actually put well into use within their organizations?

Matsumura: It's always wonderful to get the clear thinking rationality that comes from analyzing things, like you guys. From my perspective, to some extent, there already is a marketplace -- but the marketplace radically lacks transparency and efficiency. It's a highly inefficient market.

The thing that's great is, if you look at rational optimization of strategic competitive advantage, then what Paul says is exactly the perfect mental model. "My company that makes parts for airplanes is not an expert in keeping PC servers cool and having a raised floor, security, biometric identification, and all kinds of hosting things." So, maybe they outsource that, because that's not any advantage to them.

That's perfectly logical behavior. I want to take this now to a slightly different level, which is, organizations have emergent behavior that's completely irrational. It's comical and in some ways very unfortunate to observe.

To create a little metaphor, in the history of large-scale enterprise computing, there has long been this tension between the business units and the IT department, which is more centralized. In a way, the tension Paul alluded to is this idea that the business department is actually the frustrated party, because they have developed the applications in a very short time. The lagging party is actually the IT department.

It's a bit like what happened to the actor Mel Gibson. He left his wife and his kids and went off with a mistress. In the metaphor, the seduction of the cloud, and how easy it is, is really a wonderful attraction for a man (or enterprise) who is ostensibly married to his own IT department. And, that IT department maybe is not so sexy as the cloud service.

Eventual disappointment

So, there is this unfortunate emergent property that the enterprise goes after something that, in the long run, turns out to be very disappointing. But, by the time the disappointment sets in, the business executives who approved this entry point into the cloud are long gone. They've gotten promotions, because their projects worked and they got their business results faster than they would have if they had done it the right way and gone through IT.

So, it puts central IT into a very uncomfortable position, where they have to provide services that are equal to or better than professionals like Amazon. At the same time, they also have to make sure that, in the long-term interest of the company, these services have the fungibility, protection, reliability, and cost control demanded by procurement.

The question becomes how do you keep your organization from being totally taken advantage of in this kind of situation, and how do you avoid the Mel Gibson-esque disappointment of whatever happened after you left your wife and went with your new sexy girlfriend?

Gardner: Are you sure you don’t want to bring up Tiger Woods at this point?

Matsumura: Another, perhaps more universally understood metaphor.

Gardner: I was thinking of multiple clouds in that case.

Well, we've certainly defined the problem. Now, what can we do about it? Clearly, if you have some good lawyers and some history of dealing with software licenses, you're going to be very careful about how you craft your SLAs.

We would hope that your business units would do that as well as your IT department, or at least in consultation with the IT department or the legal department. Clearly, one buttress or defense against lock-in, and against a relationship with your cloud provider going bad over time, would be the agreement, the legal bond.

Also, we mentioned standards. There need to be standards that people can point to and say, "You must adhere to this or I won't do business with you." They seem to be slow in coming.

Paul Fremantle, what else is left? Is there a technology perspective, a middleware perspective, or something that might borrow from the Java approach that could reduce the risk, but allow companies to also pursue cloud computing in an efficient market?

Fremantle: That’s a very nice lead in, Dana. What we are trying to do at WSO2 is exactly to solve that problem through a technical approach, and there are also business approaches that apply to it as well.

The technical approach is that we have a PaaS, and what’s unique about it is that it's offering standard enterprise development models that are truly independent of the underlying cloud infrastructure.

Infrastructure independent

What I mean is that there is this layer, which we call WSO2 Stratos, that can take web applications, web application archive (WAR) files, enterprise service bus (ESB) flows, business process automation (BPA), and things like governance and identity management, and handle all of those in standard ways. It runs them in a multi-tenant, elastic, cloud-like fashion on top of infrastructures like Amazon, as well as private cloud installations based on Ubuntu and Eucalyptus and, coming very soon, VMware.

What we're trying to do is to say that there is a set of open standards, both de facto and de jure standards, for building enterprise applications, and those can be built in such a way that they can be run on this platform -- in public cloud, private cloud, virtual private cloud, hybrid, and so forth.

What we're trying to do there is exactly what we've been talking about. There is a set of ways of building code that don’t tie you into a particular stack very tightly. They don’t tie you into a particular cloud deployment model very tightly, with the result that you really can take this environment, take your code, and deploy it in multiple different cloud situations and really start to build this fungibility. That’s the technical aspect.
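As a small illustration of what a standard development model looks like in practice, here is a plain Servlet-API class of the kind that gets packaged into a WAR. It is a generic sketch, not WSO2 Stratos code; it needs the standard Servlet API on the classpath and a standards-compliant container (or a platform that hosts WARs) to run, and the class name and message are invented.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A WAR built around classes like this carries no cloud-provider imports,
// so the same artifact can be dropped onto a multi-tenant PaaS runtime or a
// private-cloud servlet container without modification.
public class PortableGreetingServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/plain");
        // Nothing here depends on Amazon, Eucalyptus, or any other infrastructure.
        resp.getWriter().println("Hello from a provider-neutral WAR deployment");
    }
}
```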

Before I hand it back to you, I also want to talk about the business aspect, because technology doesn't live on its own. One of the things that's very important in cloud is how you license software like this. As an open source company, we naturally think that open source has a huge benefit here, because it's not just about being told you can run it anywhere. You then need to be able to take that code and not be locked into it.

Our Stratos platform is completely open source under the Apache license, which means that you are free to deploy it on any platform, of any size, and you can choose whether or not to come to WSO2 for support.

Of course, we think we're the best people to support you, but we try and prove that every day by winning your business, not by tying you in through the lawyers and through legal and licensing approaches.

Gardner: Miko, we seem to have quite a few technologies, open source, licenses, and standards in place for doing enterprise software. Can we overlay what is a longstanding approach to openness and interoperability in the traditional enterprise on to the cloud and, in a sense, virtualize the cloud services in such a way that we can still use the tools and middleware, so we don’t get locked in?

Matsumura: What Paul is articulating is essentially a kind of promise. What's exciting about that promise is that trust has some very interesting properties: one of the things you need to do is look at both the will and the ability of the partner.

As a consumer of cloud, you need to be clear that the will of the partner is always essentially this concept of, "I am going to maximize my future revenue." It applies to all companies, and dare I say, WSO2 included.

WSO2, as a new and disruptive entrant, and as a company that has built this incredible technology, is using open source, from both a technological and a contractual perspective, to make that promise manifest while providing value-added services.

As Paul said, having to prove to you each day that they are the best support for your deployment is almost like a laying down of arms. If the other party is disarmed from the get-go, then their ability to stab you in the back when you're not looking is gone.

The thing that's fascinating about it is that, when a vendor says "believe me," you look at the fine print. The fine print in this case is the Apache license, which has incredible transparency.

Free to go

It becomes believable, as a function, being able to look all the way through the code, to be able to look all the way through the license, and to realize, all of a sudden, that you're free. If someone is not being satisfactory in how they're behaving in the relationship, you're free to go.

If, on the other hand, you look at APIs where something is opaque or isn't really given to you, then you realize that you are making a long-term commitment, akin to a marriage. That's when you start to wonder whether the other party is able to do you harm and whether that's their intention in the long run.

Fremantle: This is really interesting. Let me tell you a slightly opaque story and I'll try and bring it back around to this.

The school I went to was run by monks, and one of these monks was 80 years old and had been in the monastery for 60 years or something. A reporter asked him, "Don’t you miss the freedom? Don’t you hate being locked-in to this monastery?" And the monk said something really interesting, "I choose every morning to remain here," he said.

Now, what’s WSO2's lock-in? What Miko has been trying to politely say is that every vendor, whether it’s WSO2 or not, wants to lock in their customers and get that continued revenue stream.

Our lock-in is that we have no lock-in. Our lock-in is that we believe that it's such an enticing, attractive idea, that it's going to keep our customers there for many years to come. We think that’s what entices customers to stay with us, and that’s a really exciting idea.

It's even more exciting in the cloud era. It was interesting in open source, and it was interesting with Java, but what we're seeing with cloud is that the potential for lock-in has actually grown. The potential to get locked in to your provider has gotten significantly higher, because you may be building applications and putting everything, both software and hardware, in the hands of a single provider.

There are three layers of lock-in. You can get locked into the hardware. You can get locked into the virtualization. And, you can get locked into the platform. Our value proposition has become twice as valuable, because the lock-in potential has become twice as big.

Gardner: If you were to find a cloud provider that shared that long-term view, one that doesn't lock you in but whose value proposition keeps you there for the right reasons, then the other cloud providers would be at a distinct disadvantage. So, do we have an opportunity now for a marketplace with an open middleware approach? And which cloud providers should, or perhaps will, follow suit?

Fremantle: There is definitely an opportunity for an open market. I don’t want to go into naming names, but certainly you're bound to see in the cloud market a consolidation, because it is going to become price sensitive, and in price sensitive markets you typically see consolidation.

Two forms of consolidation

What I hope to see is two forms of consolidation. One is people buying up each other, which is the sort of old form. What would be really interesting, to circle back to what Miko said at the very beginning, is that it would be really nice to see consolidation in the form of cloud providers banding together to share the same models, the same platforms, the same interfaces, so that there really is fungibility across multiple providers, and that being the alternative to acquisition.

That would be very exciting, because we could see people banding together to provide a portable run time.

One of the really interesting things that you can get with fungibility is what we have in various markets, the idea of options, derivatives, and all of that. That would be cool. Imagine that you need to get your jobs done every Friday at lunchtime. And, Friday lunchtime is an expensive time to get your jobs done, because everybody needs compute time at Friday lunchtime.

In a truly fungible marketplace, you could buy options on having compute power at Friday lunchtime at a certain price. If you don't end up needing that capacity, you could sell the option to someone else who does need it, potentially at a higher price. Then you start to get, in the computing industry, the real flexibility that markets provide. That would be pretty cool.
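Purely as an illustration, with invented prices and quantities, here is the arithmetic behind that kind of compute option.

```java
// Illustrative numbers only: why an option on peak-time compute could have
// resale value in a truly fungible market.
public class ComputeOptionSketch {

    public static void main(String[] args) {
        double strikePerNodeHour = 0.50;   // price locked in by the option
        double premiumPerNodeHour = 0.05;  // what the option cost up front
        double fridaySpotPrice = 0.90;     // what peak capacity fetches on the day
        int nodeHoursReserved = 200;

        // If the buyer doesn't need the capacity, the option is worth roughly
        // the spread between spot and strike to someone who does.
        double resaleValue = (fridaySpotPrice - strikePerNodeHour) * nodeHoursReserved;
        double netGain = resaleValue - premiumPerNodeHour * nodeHoursReserved;

        System.out.printf("Resale value of the unused reservation: $%.2f%n", resaleValue);
        System.out.printf("Net gain after the premium paid:        $%.2f%n", netGain);
    }
}
```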

Gardner: So what about that, Miko? Suppose Paul's vision holds out, and at least a critical mass of cloud providers pull together, recognize they have a common destiny, and see that working together at a certain level will give them a better long-term future, one that involves cloud fungibility.

Then these options and derivatives models, where you can basically eke out the most efficient arrangement for both the supplier and the acquirer of the service, and incidentally probably reduce the carbon footprint across the board, would be a good outcome. Do you think that's pie in the sky? What needs to happen for that sort of vision to unfold?

Matsumura: What we're talking about is the inevitable market consequence of commoditization. So what Paul is speaking to is something a lot like the spot energy market, which already exists. You have direct conversion of one form of electricity, which is a sort of common denominator of power transport, into other forms of electricity. That’s a very interesting thing.

People are branding electricity as wind power, renewable power, green power, and that has certain different economic dynamics. I think we will see similar things emerge in the cloud.

The thing that's really critical, though, is when this is going to happen. There's a tired saying that those who do not understand history are doomed to repeat it. We could spend decades in the IT industry just repeating the past by reestablishing these kinds of dominant-vendor, lock-in models.

A lot of it depends on what I call the emergent intelligence of the consumer. The reason I call it emergent intelligence is that it isn’t individual behavior, but organizational behavior. People have this natural tendency to view a company as a human being, and they expect rational behavior from individuals.

Aggregate behavior

But, in the endgame, you start to look at the aggregate behaviors of these very large organizations, and the aggregate behaviors can be extremely foolish. Programs like this help educate the market and optimize the market in such ways that people can think about the future and can look out for their own organizations.

The thing that's really funny is that people have historically been very bad at understanding exponential growth, exponential curves, exponential costs, and the kind of leverage they provide to suppliers.
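As a purely illustrative sketch, with an assumed starting bill and growth rate, here is how a modest-looking monthly growth figure compounds into the kind of supplier leverage being described.

```java
// Illustrative numbers only: a $1,000 monthly bill growing 15% per month.
public class ExponentialBillSketch {

    public static void main(String[] args) {
        double monthlyBill = 1_000.0;   // assumed starting cloud bill, in dollars
        double monthlyGrowth = 0.15;    // assumed usage growth per month
        double cumulative = 0.0;

        for (int month = 1; month <= 24; month++) {
            cumulative += monthlyBill;
            if (month % 6 == 0) {
                System.out.printf("Month %2d: bill $%,10.0f   cumulative $%,12.0f%n",
                        month, monthlyBill, cumulative);
            }
            monthlyBill *= 1 + monthlyGrowth;
        }
        // By the time the bill hurts, rewriting for another provider costs far
        // more than designing for portability would have on day one.
    }
}
```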

People need to get smart on this fungibility topic. I appreciate, Dana, that you're helping out with this. If we're smart, we're going to move to an open and transparent model. That’s going to create a big positive impact for the whole cloud ecosystem, including the suppliers.

Gardner: Paul Fremantle, Miko seems to think that this bright possible future could happen fast, or it could happen 30 years from now. How do we make sure it happens fast?

Isn't there a built-in economic incentive? With a sufficient level of transparency, the most efficient model becomes the lowest-cost model, and in a commoditized environment that's where most consumers go.

Is there something about WSO2’s approach and this notion of a shared destiny that almost guarantees it to be the lowest-cost provider? That is to say, wouldn't the commercial lock-in provider naturally have to be more expensive in this cloud environment?

Fremantle: There is that. One of the most important things, though, is what Miko just said about education. It's up to the consumers of cloud to really understand the scenarios and the long-term future of this marketplace, and that’s what's going to drive people to make the right decisions. Those right decisions are going to lead to a fungible commodity marketplace that’s really valuable and enhances our world, rather than dis-enhances it or makes it less good.

The challenge here is to make sure that people are making the right, educated decisions. From my perspective, obviously, I'd like people to try out WSO2 Stratos. But at a higher level than that, I'd really like people, when they choose a cloud solution or build their cloud strategy, to make informed decisions and to specifically approach and attack the lock-in factor as one of their key decision points. To me, that is one of the key challenges. If people do that, then we're going to get a fair chance.

I don’t care if they find someone else or if they go with us. What I care most about is whether people are making the right decision on the right criteria. Putting lock-in into your criteria is a key measure of how quickly we're going to get into the right world, versus a situation where people end up where vendors and providers have too much leverage over customers.

Gardner: So, as you're doing that cost-benefit analysis and looking at these new cloud models, you need to go beyond just the notion of doing away with CAPEX and moving to OPEX. You need to think about what the long-term operating cost would be if there is a lock-in variable involved.

Fremantle: Not just the operating costs, but the flexibility, the freedom, and the ability to achieve your long-term objectives.

Gardner: Any last words on this notion of what to look for in terms of your cost-benefit analysis, Miko?

Worth exploring

Matsumura: Without being overly skewed in this discussion, WSO2 has provided a very interesting topic of conversation that's worth exploring. Smart organizations need to understand that it isn't any individual's decision to just run off and do the cloud thing; it really has to combine enterprise architecture and ... cautionary procurement in order to harness cloud and to keep the business units from running away in a way that is bad for the organization.

One of the culprits in this emergent behavior is the short lifespan of the CIO. CIOs tend to churn through jobs, and they tend to be a little shortsighted about cutting costs. So, they get into an unholy alliance with business units that just want to slash and burn expenses for all existing IT. All of these unholy alliances create these negative choices.

In the long run, having a vendor agreement and relationship that forces them to be continuously pleasing to you and having agreements that you can walk away from at a moment's notice -- both technologically and from a business perspective -- is a completely new way of looking at this cloud market. WSO2 has a unique offering in this regard. So it’s certainly worth a look. That's my perspective.

Gardner: I'm afraid we'll have to leave it there. I want to thank you both very much. We've been discussing how enterprises and developers should be considering the concept of application fungibility, both in terms of technical enablers and standards, in order to best understand what their real potential cost over time would be for enjoying the best of cloud, but also looking out for the inevitable risks in a commercial environment, and avoiding potential lock-ins.

I want to thank our guests. We've been joined by Paul Fremantle, Chief Technology Officer and co-founder at WSO2. Thank you very much, Paul.

Fremantle: Thank you very much, Dana. It's been a fascinating discussion.

Gardner: And we've also been joined by Miko Matsumura, author of SOA Adoption for Dummies and an influential blogger and thought leader on cloud-computing topics. Thanks so much, Miko.

Matsumura: Great to be here. Thanks again for a great talk.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect Podcast. Thanks and come back next time.

Get the free "Cloud Lock-In Prevention Checklist" here.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: WSO2.

Transcript of a sponsored podcast discussion on open markets for cloud computing services and the need for applications that can move from one platform to another with relative ease. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.
