Wednesday, August 22, 2012

VMware CTO Steve Herrod on How the Software-Defined Datacenter Benefits Enterprises

Transcript of a BriefingsDirect podcast on how pervasive software enablement helps battle IT datacenter complexity.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Get the latest announcements about VMware's cloud strategy and solutions by tuning into VMware NOW, the new online destination for breaking news, product announcements, videos, and demos at: http://vmware.com/go/now.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the intriguing concept of the software-defined datacenter. We'll look at how some of the most important attributes of datacenter capabilities and performance are now squarely under the domain of software enablement.

We'll see how those who are now building and managing datacenters are gaining heightened productivity, delivering far better performance, and enjoying greater ease in operations and management -- all thanks to innovations at the software-infrastructure level.

A top technology leader at VMware, Steve Herrod has championed this vision of the software-defined datacenter and how the next generation of foundational IT innovation is largely being implemented above the hardware. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

We're here with him now to further explore how advances in datacenter technologies and architecture are, to an unprecedented extent, being driven primarily through software. Please join me in welcoming to BriefingsDirect, Steve Herrod, Chief Technology Officer and Senior Vice President of Research & Development at VMware. Welcome, Steve.

Steve Herrod: Thanks, Dana. It’s a great topic. I'm really looking forward to sharing some thoughts on it.

Gardner: We appreciate your being here. We've heard a lot over the decades about improving IT capabilities and infrastructure management, but it seems that many times we peel back a layer of complexity and we get some benefits, and we find ourselves like the proverbial onion, back at yet another layer of complexity.

Complexity seems to be a recurring inhibitor. I wonder if this time we're actually at a point where something is significantly different. Are we really gaining ground against complexity at this point?

Herrod: It’s a great question, because complexity has long been associated with IT, and the question is why we'd do it differently this time. I see two things happening right now that give us a great shot at this.

One is purely on expectations. All of the opportunities we have as consumers to work with cloud computing models have opened up our imagination as to what we should expect out of IT and computing datacenters, where we can sign up for things immediately, get things when we want them, and pay for what we use. All those great concepts have set our expectations differently.

A good shot

Simultaneously, a lot of changes on the technology side give us a good shot at implementing it. When you combine technology that we'll talk about with the loosened-up imagination on what can be, we're in a great spot to deliver the software-defined datacenter.

Gardner: You mentioned cloud and this notion that it’s a liberating influence. Is this coming from the technologists or from the business side? Is there a commingling on that concept quite yet?

Herrod: It’s funny. I see it coming from the business side, which is the expectation of an individual business unit launching a product. They now have alternatives to their own IT department. They could go sign up for some sort of compute service or software-as-a-service (SaaS) application. They have choices and alternatives to circumvent IT. That's an option they didn't have in the past.

Fundamentally, it comes down to each of us as individuals and our expectations. People are listening to this podcast when they want to, quickly downloading it. This also applies to signing up for email, watching movies, and buying an app on an app store. It's just expected now that you can do things far more agilely, far more quickly than you could in the past, and that's really the big difference.

Gardner: Steve, folks are getting higher expectations based on what they encounter on their consumer side of technology consumption. We see what the datacenters are capable of from the likes of Google and Facebook. Is it possible for enterprises to also project that sort of productivity and performance onto what they're doing, and maybe now that we've gone through an iteration of these vast datacenters, to do it even better?

Herrod: I have a lot of friends at Facebook, Zynga, and Google, running the datacenters there, and what’s exciting for me is that they have built a fully software-defined datacenter. They're doing a lot of the things we are talking about here. But there are two unique things about their datacenters.

One is that they have hundreds or even thousands of PhDs who are running this infrastructure. Second, they're running it for a very specific type of application. To run on the Google datacenter, you write your applications a very specific way, which is great for them. But when you go into the business world, they don't have legions of people to run the infrastructure, and they also have a broad set of applications that they can’t possibly consider rewriting.

So in many ways, I see what we're doing is taking the lesson learned in those software-defined datacenters, but bringing it to the masses, and bringing it to companies to run all of their applications and without all of the people cost that they might need otherwise.

Gardner: Let’s step back for some context. How did we get here? It seems that hardware has been sort of the cutting edge of productivity, when we think of Moore’s Law and we look at the way that storage, networks, and server architecture have come together to give us the speeds and feeds that have led to a lot of what we take for granted now. Let’s go through that a little bit and think about why we're at a point where that might not be the case anymore.

Herrod: I like to look at how we got to where we are. I think that's the key to understanding where we're likely to go from here.

History of IT decisions

We started VMware out of a university, where we could take the time to study history and look at what had happened. I liked looking at existing datacenters. You can look through the datacenter and see the history of IT decisions of the past.

It's traditionally been the case that a particular new need led the IT department to go out and buy the right infrastructure for that new need, whether it’s batch processing, client/server applications, or big web farms. But these individually made decisions ended up creating the silos that we all know about that exist all over datacenters.

They now have the group that manages the mainframe, the UNIX administration group, and the client PC group, and none of them is using common people or common tools as much as they certainly would like to. How we got to where we are was a series of isolated decisions, each right at the time, made without recognizing the opportunity to optimize across a broader set of the datacenter.

The whole concept of software-defined datacenters is looking holistically at all of the different resources you have and making them equally accessible to a lot of different application types.

Gardner: Earlier, I used the metaphor of an onion. You peel back complexity and you get more. But when it comes to the architecture of datacenters, it seems that the right comparison might be a snowball, which is layered on another layer, or it has been rolling and gathering as it goes, but not rationalized, not looked at holistically.

Are there some sorts of imperatives now that are driving people to do that? We talked about the cloud vision, but maybe it’s security, maybe it’s the economics, maybe it’s the energy issues, or maybe it's all those things together.

Herrod: It’s a little of each. First of all, I like the onion analogy, because it makes you cry, and I think that’s also key. But it’s a combination of requirements coming in at the same time that's really causing people to look at it.

Going back to the original discussion, it starts with the fact that there are choices now. Every single day you hear about a new case where a business unit or an employee is able to circumvent IT to scratch the itch they have for some particular type of technology, whether it's using Dropbox instead of the file servers that the company has, buying their own device and bringing it in, or just signing up for Amazon EC2, instead of using their local datacenter. These are all examples of them being able to go around IT.

But what often happens subsequently is that, when a security problem happens, or when you realize that you are not in compliance, IT is left holding the bag. So we get an environment where user demand can be handled in other ways, but IT has to be able to compete with those alternatives.

We have to let IT be a service provider that is just as responsive, so that it can avoid people going around it. But IT still needs to be responsible to the business when it comes time to show that Sarbanes-Oxley (SOX) compliance is appropriate or to make sure that your customer records aren’t leaked out to everyone else on the Internet.

That unique balance between user choice and IT control is something we've all seen over the last several decades, and it’s showing up again on an even larger scale.

New competition


Gardner: As you pointed out, Steve, IT isn’t just competing against itself. That is to say, maybe a 5 percent or 10 percent improvement over how well it did last year will be viewed as very progressive. But they're competing now against other datacenter architects. Maybe it’s a SaaS provider, maybe it’s a cloud provider, maybe it’s a managed service provider (MSP) or a telco that's now offering additional services.

We're really up against this notion that if you don’t architect your datacenter with that holistic software-defined mentality, and someone else does that, you're in trouble.

Herrod: It’s a great point. There are rate cards now for what you can use something else for. You might pay 7 cents per hour for this, or "this much" per transaction. IT departments in general have not traditionally had a good way of, first, even knowing how much they cost, and second, optimizing to be competitive. So there's a new awareness of how much things cost and how long they take, and those metrics are driving the competition.
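
To make that concrete, here is a back-of-the-envelope sketch of the kind of math an IT department now has to be able to do. Every figure is hypothetical (the 7-cents rate echoes Herrod's example; the internal costs are invented for illustration):

```python
# All numbers are assumed for illustration; only the comparison matters.
cloud_rate = 0.07                  # $/instance-hour, from a public rate card
hours_per_year = 24 * 365

servers = 200                      # hypothetical internal fleet
annual_infra_cost = 1_200_000      # hardware, power, facilities (assumed)
annual_staff_cost = 600_000        # operations staff (assumed)

internal_rate = (annual_infra_cost + annual_staff_cost) / (servers * hours_per_year)
print(f"internal: ${internal_rate:.2f}/hour vs cloud rate card: ${cloud_rate:.2f}/hour")
```

Until IT can produce its own cost-per-hour number like this, it cannot even begin to optimize against the external rate cards.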

Gardner: Let’s revisit the context and the history here, looking at virtualization in particular. We've seen it extend beyond servers to data, storage, and also networking. Is this part of what you've got in your vision of software defined? Is it strictly virtualization, or does it encompass more? Help me understand how you've progressed in your thinking along these lines, particularly in regard to virtualization?

Herrod: We'll step back a little bit. VMware, over the last 13 years or so, has done a very good job of completely optimizing how servers are used in the datacenter. You can provision a new virtual machine (VM) in seconds. The cost has gone down by orders of magnitude. We've really done a good job on the compute and memory aspect of a datacenter.

But as you said, a couple of things have to happen from there. It's absolutely crucial to look at the breadth of things that are involved in the datacenter. We talk to customers now, and often they say, "Great, you've just lowered the cost and time taken to provision a new server. But when I put this in production, by the way, I care what LUN it ends up on, I have to look at what VLAN is there, and if it's in the right section of my firewall setup."

It might take seconds to provision a VM, but then it takes five days to get the rest of the solutions around it. So we see, first of all, the need to get the entire datacenter to be as flexible and fast moving as the pure server components are right now.

Again, if you look at the last couple of years, I would rate the industry -- ourselves and others -- as moving forward quite well on the storage side of things. There are still some things to do for sure, but storage, for the most part, has gotten a good head start on being fully virtualized and automated.

The big buzz around the industry right now has been the recognition that the network is the huge remaining barrier to doing what you want in your datacenter. Plenty of startups and all kinds of folks are working on software-defined networking. In fact, that's where the term for the software-defined datacenter comes from, because once networking, the big remaining inhibitor, falls into place, you'll be opened up to having a truly software-defined datacenter solution.

Now, we can break that down a little bit. It's important to talk about the technology piece of this. But when I say software-defined, I really look at three phases of how software comes in and morphs this existing hardware that you have.

The first step

The first step is to abstract away what people are trying to use from how it is being implemented. That's the core of what virtual even means, separating the logical from the physical. It gives you hardware independence. It enables basic mobility and all sorts of other good things.

The second phase is when you then pool all of these abstracted resources into what we call resource pools. Anyone who uses VMware software knows that we create these great clusters of computing horsepower and we allow vMotion and mobility within it.

But you need to think about that same notion of aggregation of resources at the storage and networking levels, so they become this great pool of horsepower that you can then dole out quite effectively. So after you've abstracted and pooled, the final phase is how you now automate the handling of this. This is where the real savings and speed come from.

Once you have pools of resources, when a new request comes in, you should be able to allocate storage, security, networking, and CPU very quickly. Likewise, when it goes away, you should be able to remove it and put it back into the pool.

That's a bit of a mouthful, but that's how I see the expansion. It first goes from just compute into storage, networking, security, and the other parts of the datacenter. Then simultaneously, you're abstracting each of these resources, pooling them, and then automating them.
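
As a way to picture those three phases, here is a minimal, hypothetical Python sketch -- not a VMware API, just a model -- of resources being abstracted into logical units, pooled by kind, and then allocated and released automatically:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """Phase 1: an abstracted resource -- logical capacity, independent of hardware."""
    kind: str      # e.g. "cpu", "storage", "network"
    capacity: int  # abstract units

@dataclass
class ResourcePool:
    """Phase 2: abstracted resources aggregated into one pool per kind."""
    free: dict = field(default_factory=dict)

    def add(self, res: Resource):
        self.free[res.kind] = self.free.get(res.kind, 0) + res.capacity

    def allocate(self, request: dict) -> dict:
        """Phase 3: automation -- grant a multi-resource request in one step."""
        if any(self.free.get(k, 0) < v for k, v in request.items()):
            raise RuntimeError("insufficient pooled capacity")
        for k, v in request.items():
            self.free[k] -= v
        return dict(request)

    def release(self, grant: dict):
        """When the workload goes away, its capacity returns to the pool."""
        for k, v in grant.items():
            self.free[k] += v

pool = ResourcePool()
for _ in range(4):                            # four hypothetical hosts
    pool.add(Resource("cpu", 16))
    pool.add(Resource("storage", 2000))
    pool.add(Resource("network", 10))

grant = pool.allocate({"cpu": 4, "storage": 500, "network": 1})
pool.release(grant)                           # and back it goes to the pool
```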

Gardner: What's really fascinating to me are the benefits you get by abstracting to a virtualization and software-defined level -- the ability to implement with greater ease -- but that comes with underlying benefits around operations and management.

It seems to me that you can start to dial up and down, demonstrate elasticity at a far greater level, almost at that datacenter level, looking at the service-level agreements (SLAs) and the key performance indicators (KPIs) that you need to adhere to and defining your datacenter success through a business metric, like an SLA.

Does it ring true with you that we're talking about some real management and operational efficiencies, as well as implementation efficiencies?

Herrod: It is, Dana, and we talk about it a few different ways. The transformation of datacenters, as we got started, was all about cost savings and capital expenses in financial terms. "Let's buy fewer servers." "Let's not build another datacenter."


But the second phase, and where most customers are today, is all about operational efficiency. Not only am I buying less hardware, but I can do things where I'm actually able to satisfy, as you said, the KPIs or the SLAs.

Doing even more


I can make sure that applications are up and running with the level of availability they expect, with less effort, with fewer people, and with easier tools. And when you go from capital expense savings to operational improvements, you impact the ability for IT to do even more.

To take that one level further, whenever I hear people talk about cloud computing -- and everyone talks about this with all sorts of different impressions in mind -- I think of cloud as simply being about more speed. You can do something more quickly. You can expand something more quickly. And that's what this third phase after capital and operational savings is about, that agility to move faster.

As business success ties so closely to how IT performs, the ability to move faster becomes your strategic weapon against someone else. Very core to all of this is how we can operate more efficiently while satisfying the specific needs of applications in this new datacenter.

Gardner: Another area that I hear benefits from this software-defined datacenter is the ability to better reduce and manage risk, particularly around security issues. You're no longer dealing with multiple parties, like the group overseeing UNIX, the group overseeing PCs, and the group doing the x86 architectures. Process cracks and security issues seem more likely to develop under those circumstances.

But when you have a more organized overview of management, operations, and architecture, you can instantiate best practices around security. Please address this issue of security as another fruit to be harvested from a software-defined datacenter.

Herrod: Security means a lot of different things, and it has been affected by a number of different aspects.

First of all, I agree that the more you can have a homogenous platform or a homogenous team working on something, the less variation in process you end up with, exactly as you said, Dana. That can allow you to be more efficient.

This is a replacement for the traditional world of ITIL, where they had to try to create some standard across very different back ends. That's a natural progression for getting rid of some of the human errors that come into problems.

A more foundational thing that I am excited about with the software-defined datacenter is how, rather than security being these physical concepts that are deployed across the datacenter today, you can really think of security logically as wrapping up your application. You can do some pretty interesting new things.

A quick segue on that -- the way most security works in datacenters today is through statically placed appliances, whether they're firewalls, intrusion detection, or something else. Then the onus is on you to fit your application into the right part of the datacenter to get the right level of protection, and to hope it doesn’t move out of that protection zone.

Follows the application

What we're able to deliver with the software-defined datacenter is a way that security is a trait associated with the application, and it essentially wraps and follows the application around. You've virtualized your firewall and you've built it into the fabric of how you're automating deployments. I see that as a way to change the game on how tight the security can be around an application, as well as making sure it's always around there when you deploy it.
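
A rough sketch of that idea follows, with all names invented for illustration (this is not any real VMware API). The security profile is data attached to the application object, so any move re-applies it at the destination:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityProfile:
    """Security expressed as a logical trait of the app, not a fixed appliance."""
    allowed_ports: tuple
    firewall_zone: str
    ids_enabled: bool

@dataclass
class Application:
    name: str
    profile: SecurityProfile
    host: str

def enforce(host: str, profile: SecurityProfile):
    # Stand-in for programming a virtual firewall on the target host.
    print(f"{host}: open {profile.allowed_ports}, zone={profile.firewall_zone}, "
          f"ids={'on' if profile.ids_enabled else 'off'}")

def migrate(app: Application, new_host: str):
    """Moving the app re-applies its security profile at the destination."""
    enforce(new_host, app.profile)   # the policy travels with the workload
    app.host = new_host

web = Application("web-frontend",
                  SecurityProfile((80, 443), "dmz", True), "host-a")
migrate(web, "host-b")               # protection follows automatically
```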

Gardner: For end users the proof is in how they actually consume, relate to, and interact with the applications. Is there something about the applications specifically that the software-defined datacenter brings, a higher level of user productivity benefits? What's really going to be noticeable for the application level to end users?

Herrod: That's a great question. I'm an infrastructure guy, as are probably many people listening here, and it’s easy to forget that infrastructure is simply a means to an end. It's the way that you run applications that ultimately matters. So you have to look at what an application is and what its ideal state looks like. The idea of the software-defined datacenter is to optimize that application experience.

That very quickly translates into how quickly I can get my application from the time I want it until it's running. It dictates how often this application is up, what kind of scale it can handle as more people come in, and how secure it is. Ultimately, it's about the application. I believe the software-defined datacenter is the way to optimize that application experience for all the users.

Gardner: Steve, how about not just repaving cow paths in terms of how we deploy existing types of applications. Is there something inherent in a software-defined datacenter benefit that will work to our advantage on innovative new types of applications?

They could be for high-performance computing, big data and analytics, or even mobile, where we have location services folded into the way applications are served up and there is a latency-sensitive portion to this. Are there new types of apps that will benefit from this software-defined architecture?

Herrod: This is one of the most profound parts, if we get it right. I've been talking about can we collapse the silos that were created. Can we get all of our existing apps onto this common platform? We're doing quite well on that. We are at a point where, depending on who you listen to, about 60 percent of all server applications are running virtual, which is pretty amazing. But that also means there is 40 percent that aren’t. So I spend a lot of time understanding why they might not be today.

Part of it is that just as businesses get more comfortable and get there, their business critical apps will get onto the system, and that's working well. But there are applications that are emerging, as you talked about, where if we're not careful, they'll create the next generation of silos that we'll be talking about 10 years from now.

I see this all the time. I'll visit a company that has a purely virtualized pool, but they have also created their grid for doing some sort of Monte Carlo simulations or high-performance computing. Or they have virtualized everything except for their unified communication environment, which has a special team and hardware allocated to it.

We spend quite a bit of time right now looking at the impediments to having those run on top of virtualization, which might be performance related or something else. Then we go beyond impediments to how we can make them even better when they run on top of the virtualized platform.

Great applications


Some of the really interesting things we're able to show now with our partners are things I would have never dreamed of as great candidates when we started the company. But we're able to satisfy very strict real-time requirements, which means we can run some great applications used in various sorts of stock trading, but also used in things like voice over IP (VoIP) or video conferencing.

Another big area that's liable to create the next round of silos, if we're not careful, is the big data and Hadoop world. Lots of customers are kicking the tires and creating special clusters and teams to work on that. But just recently, we've shown that the performance of Hadoop on top of vSphere, our virtualization platform, can be great.

We can even show that we can make it far easier to set up. We can make Hadoop more available, meaning it won’t crash as often. And we can even do things where we make it more elastic than it already is. It can suck up as many resources in the software-defined datacenter as it wants, when it needs them, but it can also give them all back when it's not using them.

It’s really exciting to look across all these apps. At this point, I don’t see a reason why we can't get almost any type of app that we're looking at today to fit into the software-defined datacenter model.

Gardner: That’s exciting, when we don’t have any of the stragglers or large portions of business functions that are cast off. It seems to me that we've reached the capability of mirroring the entire datacenter, whether it’s for purposes of business continuity or disaster recovery (DR), or backup and recovery. It gives us the choice of where to locate these resources, not at the individual server, virtual machine level, or application level, but really to move the whole darn datacenter, if that’s important, without a penalty.

For our last blue-sky direction with this conversation, are we at the point where we have fungibility, if you will, of datacenters, or are we getting to that point in the near future, where we can decide at a moment’s notice where we're going to actually put our datacenter, almost location independent?

Herrod: It’s a ways out, before we're just casually moving datacenters around, for sure. But I have seen some use cases today that are showing what's possible, and maybe I'll just give you a couple of examples.

DR has long been one of the real pains for IT to deal with. They have to replicate things across the country and keep two datacenters completely in sync -- literally the same hardware, the same firmware layer, and everything else that goes into it.

Very rapidly, this notion of DR has been a driving reason for people to virtualize their datacenter. We have seen many cases now, where you're able to failover your entire datacenter, effectively copying the whole datacenter over to another one, keeping the logical constructs in place, but hosting in a completely different area.

To get that right, your storage needs to be moved, your network identities need to be updated, and those are things that you can script and do in an automated way, once you've virtualized the whole datacenter.
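
In script form, the failover Herrod describes might look like the following sketch. The three steps are stand-ins (hypothetical functions, not Site Recovery Manager's actual interface), but the ordering -- storage first, then network identity, then power-on -- is the point:

```python
VMS = ["erp-db", "erp-app", "web-01"]   # hypothetical inventory at the failed site

def replicate(vm):     print(f"{vm}: storage replica confirmed current at site B")
def remap_network(vm): print(f"{vm}: IP/VLAN identity updated for site B")
def power_on(vm):      print(f"{vm}: powered on at the recovery site")

def fail_over(vms):
    """Scripted failover: each phase completes for all VMs before the next begins."""
    for step in (replicate, remap_network, power_on):
        for vm in vms:
            step(vm)

fail_over(VMS)
```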

Fun example


Another really fun example I see more and more now is with mergers and acquisitions. We've seen several cases where one company buys another, both had fully virtualized their datacenters, and they could put one company's datacenter on a giant storage drive and begin to bring it up on the other side, once they had copied it over.

So the entire datacenter isn't moved yet, but I think there are clear indications that once you separate where something runs and how it runs from what you are really after, it opens the door to a lot of different optimizations.

Gardner: We're coming up on the end of our time, but we also have the big annual VMworld show in San Francisco coming up toward the end of August. I know you can’t pre-announce anything, but perhaps you can give us some themes. We've talked about a lot of things here today, but are there any particular themes that we've hit on that you think are going to be more impactful or more important in terms of what we should expect at VMworld?

Herrod: It will be exciting as always. We have more than 20,000 people expected. What I'm doing here is talking about a vision and generalities of what's happening, but you can certainly imagine that what we will be showing there will be the realities -- the products that prove this, the partnerships that are in place that can help bring it forward, and even some use cases and some success stories.

So expect us to give more detail around this vision and to make it very real with announcements and demonstrations.

Gardner: Last question, if I'm a listener here today, I'm intrigued, and I want to start thinking about the datacenter at the software-defined level in order to generate some of the benefits that we have been discussing and some of the vision that we have been painting, what’s a good way to start? How do you begin this process? What are a few foundational directives or directions that you recommend?

Herrod: I think it can sound very, very disruptive to create a new software-defined datacenter, but one of the biggest things that I have been excited about in this technology versus others is that there are a set of steps that you go through, where you're able to get some value along the way, but they are also marching you toward where you ultimately end up.

So to customers who are doing this, presumably most of you have done some basic virtualization, but really you need to get to the point where you are leveraging the full automation and mobility that exists today.

Once you start doing that, you'll find that it obviously is showing you where things can head. But it also changes some of the processes you use at the company, some of the organizational structures that you have there, and you can start to pave the way for the overall datacenter to be virtualized, as you take some of these initial steps.

It’s actually very easy to get started. You gain benefits along the way. Your existing applications and hardware work. So that would be my real entreaty -- use what exists today and get your feet wet, as we deliver the next round heading forward.

Gardner: We've been talking about the intriguing concept of the software-defined datacenter, and we've been exploring how advances in datacenter technologies and architecture, driven through software innovation, can provide a number of technological and business benefits.

Please join me now in thanking our guest, Steve Herrod, Chief Technology Officer and Senior Vice President of Research & Development at VMware. Thanks so much, Steve.

Herrod: Great. I've enjoyed the time, Dana. Thanks.

Gardner: My pleasure. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks also to our audience for reading and listening to our discussion, and don't forget to come back next time for the next edition of BriefingsDirect.

Get the latest announcements about VMware's cloud strategy and solutions by tuning into VMware NOW, the new online destination for breaking news, product announcements, videos, and demos at: http://vmware.com/go/now.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how pervasive software enablement helps battle IT datacenter complexity.
Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Thursday, August 16, 2012

Columbia Sportswear Extends Deep Server Virtualization to Improved ERP Operations, Disaster Recovery Efficiencies

Transcript of a sponsored BriefingsDirect podcast on how Columbia Sportswear has harnessed virtualization to provide a host of benefits for its business units.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how outerwear and sportswear maker and distributor Columbia Sportswear has used virtualization techniques to improve its business operations.

We’ll see how Columbia Sportswear’s use of deep virtualization assisted in rationalizing its platforms and data center, as well as led to benefits in their enterprise resource planning (ERP) implementation. We’ll also see how it formed a foundation for improved disaster recovery (DR) best practices.

Stay with us now to learn more about how better systems make for better applications that deliver better business results. Here to share their virtualization journey is Michael Leeper, Senior Manager of IT Engineering at Columbia Sportswear in Portland, Oregon. Welcome, Michael. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Michael Leeper: Good morning, Dana.

Gardner: We’re also here with Suzan Frye, Manager of Systems Engineering at Columbia Sportswear. Welcome to BriefingsDirect, Suzan.

Suzan Frye: Good morning, Dana.

Gardner: Let’s start with you, Michael. Tell me a little bit about how you got into virtualization. What were some of the requirements that you needed to fulfill at the data center level? Then we’ll dig down into where that went and what it paid off.

Leeper: Pre-2009, we'd experimented with virtualization. It was one of those things that I had my teams working on, mostly so we could tell my boss that we were doing it, but there wasn’t a significant focus on it. It was a nice toy to play with in the corner, and it helped us in some small areas, but there were no big wins there.

In mid-2009, the board of directors at Columbia decided that we, as a company, needed a much stronger DR plan. That included the construction of a new data center for us to house our production environments offsite.

As we were working through the requirements of that project with my teams, it became pretty clear for us that virtualization was the way we were going to make that happen. For various reasons, we set off on this path of virtualization for our primary data center, as we were working through issues surrounding multiple data centers and DR processes.

Our technologies weren't based on the physical world anymore. We were finding more issues in physical than we were in virtual. So we started down this path to virtualize our entire production world. By that point, mid-2010 had come around, and we were ready to go. We had built our DR stack and virtualized our primary data centers, taking us to an 80 to 90 percent virtual machine (VM) rate.

Extremely successful


We were extremely successful in that process. We were able to move our primary data center over a couple of weekends with very little downtime to the end users, and that was all built on VMware technology.

About a week after we had finished that project, I got a call from our CIO, who said he had purchased a new ERP system, and Columbia was going to start down the path of a fully new ERP implementation.

I was being asked at that time what platform we should run it on, and we had a clean slate to look everywhere we could for what we felt was the safest and most stable platform to run the crown jewels of the company, which is ERP. For us, that was going to be the SAP stack.

So it wasn't a hard decision to virtualize ERP for us. We were 90 percent virtual anyway. That’s what we were good at, and that’s what our teams were staffed and skilled for. What we did was design the platform that we felt was going to meet our corporate standards and really meet our goals. For us, that was running ERP on VMware.

Gardner: It sounds as if you had a good rationale for moving into a highly virtualized environment, but that it made it easier for you to do other things. Am I reading too much into it, or would you really say that your migration for ERP was much easier as a result of being highly virtualized?

Leeper: There are a couple of things there. Specifically in the migration to virtualization, we knew we were going to have to go through the effort of moving operating systems from one site to another. We determined that we could do that once on the physical side relatively easily, with probably the same amount of effort as doing it once by converting physical to virtual.

The problem was that the next time we wanted to move services back from one facility to another in the physical world, we were going to have to do that work again. In the virtual space, we never had to do it again.

If we make the teams go through the effort of virtualizing a server and then moving it to another data center, all we need to do is the work once. For my engineers, any time we get them to do the mundane stuff once, it's better than doing it multiple times. So we got that effort taken care of in that early phase of the project to virtualize our environments.

For the ERP platform specifically, this was a net new implementation. We were converting from a JD Edwards environment running on IBM big iron to a brand-new SAP stack. We didn’t have anything to migrate. This was really built from scratch.

So we didn’t have to worry about a lot of the legacy configurations or legacy environments that may have been there for us. We got to build it new. And by that point in our journey, virtualized was the only way for us to do it. That’s what we do, it’s how we do it, and that's what we’re good at.

Across the board


Gardner: Just for the benefit of our audience, let’s hear a bit more about Columbia Sportswear. You’re manufacturing, distributing, and retailing. I assume you’re doing an awful lot online. Give us a sense of the business requirements behind your story around virtualization, DR, and ERP.

Leeper: Columbia Sportswear is based in Portland, Oregon. We're the worldwide leader in apparel and accessories. We sell primarily outerwear and sportswear products, and a little bit of footwear, globally. We have about 4,000 employees and 50-some-odd physical locations, not counting retail, around the world. The products are primarily manufactured in Asia, with sales distribution happening in both Europe and the United States.

My teams out of the U.S. manage our global footprint, and we are the sole source of IT support globally from here.

Gardner: Let’s go to Suzan. Suzan, tell me a little bit about the pace at which you were able to embark on this virtualization journey. I saw some statistics that you went from 25 percent to 75 percent in about eight months, which was really impressive, and as Michael pointed out, you're now over 90 percent. How did you achieve that pace, and what was important in keeping it going?

Frye: The only way we could do it was with virtualization and using the efficiencies we gained with that. We centrally manage all of IT and engineering globally out of our headquarters in Portland. When we were given the initial project to move our data center and not only move our data center but provide DR services as well, it was a really easy sell to the business.

We could go to the business and explain to them the benefits of virtualization and what it would mean for their application. They wouldn’t have to rebuild and they wouldn’t have to bring in the vendor or any consultants. We can just take their systems, virtualize them, move them to our new data center, and then provide that automatic DR with Site Recovery Manager (SRM).

We had nine months to move our data center, and it was basically all hands on deck -- everybody on the server engineering, storage, and networking teams. And we had executive support and sponsorship. It was very easy for us to market virtualization to the business and start down that path of socializing the idea. A lot of people, of course, were dragging their feet a little bit. We all know that story.

But once they realized that we could move their application, bring it back up, and then move it between data centers almost seamlessly, it was an instant win for us. We went from that 20 percent to 30 percent virtualization. We had about 75 percent when we were in the middle of our DR project, and today we’re actually at around 93 percent.

Gardner: One of the things I hear a lot from people that are doing multiple things with virtualization, like you did, is where to start, how to do this in the right order? Is there anything that you could come back with from your experience on how to do it in the order that incentivizes people to adopt, as you pointed out, but then also allows you to move into these other benefits in a way that compounds the return on investment (ROI)?

Frye: I think it surprises people that we have a "virtualize first" strategy today. Now it’s assumed that your system will be virtual and then all the benefits, the flexibility, the portability, the optimization, and the efficiencies that come with it.

But like most companies, we had to start with some of our lower tier or lower service-level agreement (SLA) systems, our development systems, and start working with the business on getting them to understand some of the benefits that they could gain by working with virtual systems.

Performance is there

Again, people are always surprised. Do you have SQL virtualized? Do you have SAP virtualized? And the answer is yes, today we do, and the performance is there, the optimization is there, and the flexibility is there.

If you’re just starting out today, my advice would be to go ahead and start small. Give the business what they want, do it right, and give it the resources it needs to have. Under-promise, over-deliver, and let the business start seeing the efficiencies that they can realize, and some of those hidden efficiencies as well.

We can support DR testing. We can support almost instant data refreshes, cloning, and snapping, so their upgrades are more seamless, and they have an easier back-out plan.

From an engineering and development perspective, we're giving them technologies that they could only dream of four or five years ago. And it’s really benefited the business in that we’re auto-provisioning. We’re provisioning in minutes versus days. We’re granting resources when needed.

It’s a more dynamic process for the business, and we’re really seeing that people are saying, "You’re not just a cost center anymore. You’re enabling us, you’re helping us to do what we need to do and basically doing it on-demand." So our team has really started shining these last few years, especially because of our high virtualization percentage.

Leeper: For a company that's looking to move to this virtualization space, they’ve got to get some wins. You’ve got to tackle some environments or some projects that you can be successful at, and hopefully by partnering with some business users and business owners who are willing to take a little bit of a chance.

If you set off trying to truly attack an entire data center virtualization project, you’re probably not going to be really successful at it. There are a lot of ways that the business, application vendors, and various things can throw some roadblocks in this.

Once you start chipping away at a couple of them and get beyond the easy stuff, go find one that maybe on paper is a little difficult, but go get that one done. Then you can very quickly point back to success on that piece and start working your way through the rest of them.

Gardner: Yeah, one of those roadblocks that you mentioned I've heard people refer to is issues around licensing and tracking and audits. How did you deal with that? Was that an issue for you when you got into moving onto a virtualized environment?

Leeper: Sure. It’s one of the first things that always comes up. I'm going to separate VMware and the VMware licensing from application licensing. On the application side of the house, it’s getting better today than it was two or three years ago, when we started this process.

Be confident

You have to be confident in your ability to deal with vendors and demand support on virtualization layers, work with them to help them understand their virtual licensing packages, and be very confident in your ability to get there.

Early on, we had to just look some vendors straight in the eye and tell them we were going to do this, because this was the best thing for our business, and they needed to figure out how to support us. In some cases, that's just having your team, when they call support, not open with "We’re running this on a VM."

We know we can replicate and then duplicate things in the background when we need to, but sometimes you just have to be smart about how you engage application partners that may not be quite as advanced as we are and work through that.

On the VMware side, it came down to their understanding where our needs were and how to properly license some of the stuff and work through some of those complexities. But it wasn't anything we spent a significant amount of time on.

Gardner: You both mentioned this importance of getting the buy-in on the business side and showing wins early, that sort of thing. Because it’s hard many times to put a concrete connection between something that happens in IT and then a business benefit, was there anything that you can think of specifically that benefited your business that you could then turn around and bring back and say, "Well that’s because we did X, Y, and Z with virtualization?"

Leeper: One of the cool ones we’ve talked about, and one of our key wins, involves our entire architecture, with virtualization obviously being key to that.

We had a business unit acquire an SAP module, specifically the BPC for BW module. That was independent of our overall SAP project, and it was being run out of a separate business group.

They came to IT in the very late stages of this purchase and said, "These are our needs and requirements," and it was a fairly intense set of equipment. It was multiple servers, multiple environments, kind of up and down the stack, and they were bringing in outside consultants to help them with their implementation.

The interesting thing was, they had spec'd their statement of work (SOW) with these consultants to not start for four to six weeks, because they really believed that's how long it was going to take IT to get them their environments and their hardware, based on their old understanding of IT’s capabilities.

And the reality was that we could provide the test and development environments they needed to start with these consultants within a matter of hours, not weeks, and we did so. I had the pleasure of calling the finance VP and informing him that his environments were ready and were probably just going to sit idle for the next four to six weeks until the consultants actually showed up, which surprised all sorts of people.

Add things later


We didn't have all their production capacities, but those are things we could add later. They didn’t need production capacity in the first month of the project anyway. So our ability to have that virtualized infrastructure and be able to rapidly deploy to meet business requirements is one of the really cool things we can do these days.

Gardner: Suzan, you’ve mentioned that as an enabler, not a roadblock. So being able to keep up with the speed of business, I suppose, is the best way to characterize this?

Frye: Absolutely. Going back to SRM, another big win for us came as we were rolling out some of our Tier 1, mission-critical applications, and the business decided it wanted to test DR. They were going down the path of doing that the old-fashioned way -- backing up databases, restoring databases -- and taking days and weeks to do it.

We said, "We think we have a better way with SRM and our replication technologies. We have that data here. Why don't you let us clone that data and stand it up for you?" Literally, within 10 seconds, they had a replica of their data.

So we were enabling them to do their DR testing with SRM, on demand, when they wanted to do that, as well as giving them the benefit of doing the faster cloning and data refreshes. That was just a day-to-day, operational activity that they had no idea we could do for them.
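
The reason that takes seconds rather than days is that the clone is taken from data that is already replicated, so no full copy is performed. Here is a hypothetical sketch of the workflow (illustrative names only, not SRM's actual interface):

```python
import time

def clone_replica(dataset: str) -> str:
    """Stand-in for a storage-level clone of already-replicated data.
    Clones are typically metadata-only, which is why they are near-instant."""
    clone = f"{dataset}-drtest-{int(time.time())}"
    print(f"cloned {dataset} -> {clone} (no full data copy)")
    return clone

def stand_up(clone: str, network: str = "isolated-test-vlan"):
    """Bring the clone up on an isolated network so the DR test can't touch production."""
    print(f"{clone}: registered and powered on in {network}")

test_copy = clone_replica("erp-database")   # hypothetical dataset name
stand_up(test_copy)
```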

It goes back to working with business and letting them know what you can do. From a day-to-day, practical perspective that was one of our biggest wins. It's going to specific business units and application owners and saying, "We think we have a better way. What do you think about this?" Once they got their hands on it, just looking at their faces was really a good moment for us.

Gardner: Sure, and of course, as an online retailer, having that dependability that DR provides has to be something that lets you sleep a little better at night.

Frye: Just a little bit.

Gardner: Let's talk a little bit about where you go now. Another thing that I often hear in the market is that the benefits of virtualization are ongoing. It's a journey that keeps providing milestones. It doesn't really end.

Do you have any plans around private cloud perhaps, getting more elasticity and fit-for-purpose benefits out of your implementations? Perhaps you're looking to bring other applications into the fold, or maybe you’ve got some other plans around delivering on business applications at lower cost.

So where do you go next with your virtualization payoff?

Private cloud

Leeper: We consider ourselves as having a private cloud on-site. My team will probably start laughing at me for using that term, but we do believe we have a very flexible and dynamic environment to deploy on premises, based on business requests, and we're pretty proud of that. It works pretty well for us.

Where we go next is all over the place. One of the things we're pretty happy about is the fact that we can think about things a little differently now than probably a lot of our peers, because of how migratory our workloads can be, given the virtualization.

We started looking into things like hybrid cloud approaches and the idea of maybe moving some of our workloads out of our premises, our own data facilities, to a cloud provider somewhere else.

For us, that's not necessarily the discussion around the classic public cloud strategies for scalability and some of those things. For us, it's about temporary space at times. If we are, say, moving an office where we have physical equipment on-premises, we want to be able to provide zero downtime.

It would be nice to be able to shut down their physical equipment, move their data, move their workloads up to a temporary spot for four or five weeks, and then bring it back at some point, and let users never see an outage while they're working from home or on the road.

There are some interesting scenarios around significant DR for us in locations where we don't have real-time DR set up. For instance, we were looking into some issues in Japan a year or so ago, when Japan was unfortunately dealing with the earthquake and the tsunami's impact on power.

We were looking at how we can possibly move our data out of the country for a period of time, while the infrastructure was stabilizing, specifically power, and then maybe bring it back when things settle down again.

Unfortunately, we weren't quite virtual on the edge there yet, but today we think that's something we could do. Thinking about how and where we move data, so that it's at the right place at the right time, is where we think the next big win is for us.

Then we get into the application profiles that users are asking for and their ability to spin up environments very quickly just to test something. It gets IT out of being the roadblock to innovation. A lot of times, the business or part of our innovation teams come up with an idea for a concept, an application, or whatever it is. They don't have to wait for IT to fulfill their needs. The environments are right there for them.

So I challenge the teams routinely to think a little bit differently about how we've done things in the past, because our architecture is dramatically different than it was even two years ago.

Gardner: Well, great. We have to leave it there. We've been talking about how outerwear and sportswear maker Columbia Sportswear has used virtualization technologies and models to improve its business operations. We’ve also seen how better systems make for better applications that can deliver better business results.

So I’d like to thank our guests for joining this BriefingsDirect podcast. We have been here with Michael Leeper, Senior Manager of IT Engineering at Columbia Sportswear in Portland, Oregon. Thank you so much, Michael.

Leeper: Thank you.

Gardner: And we have been joined by Suzan Frye, Manager of Systems Engineering, also there at Columbia Sportswear. Thanks to you, Suzan.

Frye: Thanks, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks to our audience for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a sponsored BriefingsDirect podcast on how Columbia Sportswear has harnessed virtualization to provide a host of benefits for its business units. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Tuesday, April 03, 2012

Expert Chat with HP on How IT Can Enable Cloud While Maintaining Control and Governance

Transcript of a sponsored BriefingsDirect podcast on how best to pursue cloud models.

View the full Expert Chat presentation on cloud adoption best practices.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

Dana Gardner: Welcome to a special BriefingsDirect presentation, a sponsored podcast created from a recent HP expert chat discussion on best practices for implementing cloud computing models.

You know, the speed of business has never been faster, and it's getting even faster. We’re seeing whole companies and sectors threatened by going obsolete due to the fast pace of change and new kinds of competition.

Because of this accelerating speed in business, managing change has become a top priority for many corporations.

And cloud computing has sparked the imagination of business leaders, who see it as a powerful new way to be innovative and gain first-mover advantages -- with or without traditional IT's consent.

This now means that the center of gravity for IT services is shifting toward the enterprise’s boundaries – moving increasingly outside their firewalls. And so how can companies have it both ways -- exploit cloud's promise but also retain rigor and control that internal IT influences and governance enables?

This is Dana Gardner, Principal Analyst at Interarbor Solutions. To help answer these crucial questions about how to best pursue cloud models, I recently moderated an HP Expert Chat session with Singapore-based Glenn West, the Data Center and Converged Infrastructure Lead for Asia-Pacific and Japan in HP’s Technology Services Organization.

[Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

In our discussion now, you’ll hear the latest recommendations for how IT can support cloud benefits with low risk, and how IT can quickly take on the role of cloud benefits enabler.

We'll look first at why the industry is changing so rapidly due to consumer and business trends. These trends are forcing cloud and hybrid computing into the mindset of IT, and the result is really a rethinking of data centers.

The modern data center, it turns out, has to serve many masters. It has to be flexible, it's really a primary tool for business, and it needs to be built to last and to serve over a long period of time with ongoing agility, dependability, and manageability.

Difficult Trick

As you begin to cloud-enable your data centers, you'll recognize that you need to pull off a very difficult trick, especially nowadays, and that’s to be both increasing your organization’s speed and agility, but also at the same time reducing cost. This is a very difficult combination, but it's absolutely essential.

The cloud-enabled data center, therefore, requires a long journey, and in addition to forming these strategies, putting them in place, and executing on them, you need to demonstrate the economic advantages along the way. That's important for earning the trust and allegiance of the business and the end users who depend on these services. So proper planning and project management are essential, and managing the expectations of users and business leaders is critical.

Speed to serious progress is a given in business these days, because companies are under intense pressure to adapt and to seize first-mover advantages in their respective verticals and regions, where competition is fierce. They are also under pressure to show better economic performance themselves.

The disruptions we're seeing are exacerbated by the slack economy, the requirements around the data explosion and information management, and what we now refer to as big-data analysis.

Further driving this need for change and adaptability is the big push to mobile computing, as well as the increased social interactions in the marketplace that are having a profound effect -- social networks, sharing, and learning more about businesses through these interactions.

So the increased speed of business is building a sense of urgency and risk, and the risk is that you don't perform well in the market. But there's a secondary risk: if you move too quickly to cloud, or don't do it properly, it could end up being a problem that reduces whatever benefits cloud computing brings.

We're seeing whole companies and sectors grow obsolete these days if they don't keep up with new trends. We've seen many companies rise and fall rapidly, and it's important to build on speed -- but not so fast that you break the mechanisms and end up with insufficient platform support in the process.

What's prompting this is businesses looking for innovation and often going around IT, adopting cloud and software-as-a-service (SaaS) applications and services outside of IT's knowledge. They think the move to cloud gets them better results, but it can actually spawn complexity and sprawl -- both unintended consequences.

Indeed, Forrester Research reports that business groups are adopting cloud services 2.5 times faster than the typical organization's IT groups. This, says Forrester, creates supplier sprawl, as procurement of cloud services by these business groups remains separate from, and beyond the control of, IT.

This can create quite a mess -- we have seen instances of this in the past when technologies were adopted without IT's consent -- and it means a mess for CIOs. They need to measure and integrate how these services can be brought in, both to work with existing and legacy applications, data, and platforms, and then to bring in hybrid services enablement.

Low-risk fashion

So the onus is really on IT to enable cloud adoption, but to do it in a managed, low-risk fashion. This requires discipline, and it also requires flexibility and adaptation to the cultural norms of the day.

Cloud enablement needs to be built in, not just at the technology level, but into the ways that business and technology processes are developed. This means IT thinking anew about being a service enabler, a service broker, and, if you will, a traffic cop that decides what can and can't be used -- not just saying no, but learning to say yes while providing services in a safe fashion. This is what we now refer to as a hybrid services broker function.

It's by no means too late to master services and cloud management, and it's not too early to get strategic about shaping how the organization reacts: thinking about data centers strategically, planning for a hybrid services delivery capability, and recognizing that the way IT is funded is going to change.

People are going to pay as they use, and they're going to look for really good efficiency, automation, and management -- a fit-for-purpose approach to IT. That means high efficiency and high productivity. It's what businesses and consumers are demanding, and it's what IT must deliver or run the risk of becoming obsolete itself.

Cloud, in effect, is forcing a focus on, and hastening, what has really been under way for some time: services orientation, which includes a focus on service-oriented architecture (SOA), business services management, and an increased emphasis on process efficiency. The clear goals are gaining agility and speed, lowering total cost, and rethinking IT as a services-delivery function.

We're going to see here today how gaining a detailed sense of where you are across your IT activities is crucial to navigating a services consumption model, one that includes private cloud, hybrid cloud, and ultimately a mixture of public clouds. With existing data centers, IT needs to know exactly what it has, which assets it will need to support, and how those will interact, interoperate, and essentially integrate with outside services as well.

The key is to build trust in IT, to keep costs coming down, and to show innovation and build success along the way, providing businesses with the agility and speed they're really looking for. It's a very difficult feat, but we're already seeing success in the field from early adopters. They're learning to support automation, elasticity, and that fit-for-purpose capability across more aspects of IT.

It makes services orientation a mantra that can pay off in terms of efficiency and management, and it also helps reduce risk by allowing IT to remain in control through governance and risk management.

We're now going to hear from an HP expert about meeting these challenges and obtaining the payoffs, while making sure that the transition to cloud and data center transformation is done in a safe and managed way. Now is the time to begin making preparations for successful cloud enablement of your data center.

View the full Expert Chat presentation on cloud adoption best practices.

With that, I would like to now introduce our speaker, Glenn West, the Data Center and Converged Infrastructure Lead for Asia-Pacific and Japan in HP’s Technology Services Organization, based in Singapore. Welcome, Glenn.

Exciting environment

Glenn West: Hi. The cloud is an incredibly exciting environment, and it's changing things in quite incredible ways. We're going to focus today on how cloud is enabling the data center.

In the data center today, there are quite a few challenges, both from the external world and from internal changes. In the external space, there are regulatory risks, natural disasters, legal challenges, and, obviously, changing technologies.

As Dana mentioned, whether the IT department chooses to change or not, businesses are changing anyway, and this is putting pressure on the data center. It must adapt and transform. Internally, greater agility and consolidation are needed, and green initiatives to cut costs are also putting great pressure on data centers to change.

So all of these things are causing the data center to converge, and this convergence is pushing it toward the cloud.

What is a data center? At HP, we take a very holistic view. We start at the bottom, with the facility -- the location, the building, the mechanical and electrical systems. Data center densities are growing quite rapidly, and electrical costs are rising incredibly fast. So the facility is a major component of the operational cost of the data center.

Next comes the more traditional component, the actual infrastructure -- the servers, the storage, and the networking, both the physical and the virtual parts. Then, on top of that, sits the part that drives the business: the applications and the information. This is an incredibly mixed bag -- legacy applications and internally developed custom applications, as well as the more common packaged ones.

On top of these, you have other elements, such as critical systems, middleware, data warehouses, and big data, that are forcing changes. Data is growing very rapidly, and the ability to analyze that data is growing rapidly as well.

Next, we look at management and operations. As data centers change, management and efficient operation become even more important. Then control, governance, and the organization play key parts. Without the right organizational structure, it's very difficult to manage your cloud.

Some people view cloud computing as a fantasy and some view it as a fact. But cloud momentum has been building quite rapidly, to the point that much of the population is using cloud on a routine basis. Most people are exposed to cloud as users, via services such as Amazon or Facebook. Obviously, there are different types of clouds, and the ones I just mentioned are public clouds.

The next type is the private cloud, which an organization often runs alongside its traditional IT. Some people ask whether cloud computing is just the next dot-com bubble. In reality, cloud computing is an irresistible force. It's moving forward, and things are changing.

Scalable and elastic

So what does cloud mean? Cloud means moving to a more service-driven model that is scalable and elastic. Think about the public cloud space. How do you handle it when something becomes very, very popular? One day it may have a hundred users, and the next day it becomes the hot thing for that instant in time. Then the demand goes away.

Under a traditional model, we can't afford to build infrastructure for that, but pay-per-use is the foundation of cloud. We start looking at services delivered and consumed over the Internet as needed.
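
To make the pay-per-use math concrete, here's a minimal Python sketch comparing capacity owned and sized for the peak against per-hour billing. All rates and demand figures here are hypothetical, purely for illustration:

    # Illustrative comparison of owning fixed capacity vs. paying per use
    # for a spiky workload. All rates and demand figures are hypothetical.

    peak_servers = 100        # needed only during the one "hot" day
    baseline_servers = 5      # needed the rest of the month
    hours_in_month = 30 * 24
    hot_hours = 24

    fixed_rate = 0.50         # per server-hour, paid around the clock
    on_demand_rate = 0.70     # per server-hour, paid only while running

    # Traditional model: provision for the peak, pay for it all month.
    fixed_total = peak_servers * hours_in_month * fixed_rate

    # Pay-per-use: pay for the peak only while it lasts.
    cloud_total = on_demand_rate * (peak_servers * hot_hours
                                    + baseline_servers * (hours_in_month - hot_hours))

    print(f"provisioned for peak: ${fixed_total:,.2f}")   # $36,000.00
    print(f"pay-per-use:          ${cloud_total:,.2f}")   # $4,116.00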

The key words that keep coming up are service and service orientation, along with elasticity and pay-per-use. Clouds are ideally multi-tenant, whether within a company or outside it.

Let's zoom to the next level and start with the private cloud. This serves an internal client base. Think of a large company that has a hundred business units, for example; each business unit is a consumer of services. It's value-based and customized, and this is different from a public cloud.

A public cloud has a huge client base. You're talking about tens or hundreds of millions of potential subscribers. It's very efficient, very data-driven, and based on large volumes.

Now the part in the middle, the hybrid, is a unique mix. Say I have a process that happens once in a blue moon; I don't really want a dedicated IT facility just for it. The hybrid is a mix of public and private clouds that yields even greater elasticity.

In the private cloud, everything is inside the company, inside the firewall, and as a cloud provider to your internal business units, you start building infrastructure pools. So you start seeing standardization.

Cloud is about automation and orchestration -- automated control and a service catalog. All of a sudden, instead of calling somebody and saying, "I need this done," you have a portal. You say, "I want a SharePoint site," and boom, it's created.
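
As a rough Python sketch of what sits behind that portal -- hypothetical names throughout, not any particular vendor's product or API -- a catalog entry maps a request to an automated provisioning workflow:

    # Minimal sketch of a self-service catalog: a request names a
    # catalog item, and orchestration runs its provisioning steps.
    # Everything here is hypothetical, for illustration only.

    CATALOG = {
        "sharepoint-site": {
            "steps": ["allocate_vm", "install_sharepoint", "register_dns"],
            "approval_required": False,
        },
        "database-service": {
            "steps": ["allocate_vm", "install_db", "configure_backup"],
            "approval_required": True,
        },
    }

    def request_service(item_name: str, requester: str) -> str:
        """Simulate a portal request: look up the item, run its steps."""
        item = CATALOG[item_name]
        if item["approval_required"]:
            return f"{item_name}: queued for approval ({requester})"
        for step in item["steps"]:
            print(f"orchestrating: {step}")   # a real system would call out here
        return f"{item_name}: provisioned for {requester}"

    print(request_service("sharepoint-site", "marketing"))

The point is that the catalog, not a person, decides what happens when a request arrives.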

It's radically different from traditional IT. You move away from managing servers, and you manage services. In the data center, over the next couple of years, the focus is going to be on private clouds. There will be public cloud providers for certain things, but the focus is going to be on the private side.

Private cloud will slowly push into hybrid and then gradually add more public cloud services. Initially, the majority of private cloud will be infrastructure as a service.

The key drivers here are agility and speed. When a business unit says they need it tomorrow, they're not joking. The agility that a private cloud provides unlocks a lot of opportunity in the business. It also relieves the pressure to go to a public cloud supplier outside the IT framework.

Management and processing

The challenges over the next few years are management and process: how do we fund and charge back this whole business model? Then there's building the cloud service interface and the service descriptions. All of this comes before the technology. Cloud is more than just the technology; it's also about people and process.

Only a small portion of workloads fit in the cloud today, but things are moving rapidly. And we're not just talking about the future -- look at the sprawl that's occurring now. If IT doesn't get out in front of it, this will probably get worse. But if the cloud is managed properly, IT sprawl can be reduced, controlled, and slowly moved into a more standardized structure.

This is a journey; it won't happen overnight. With IT sprawl, 70 percent of spending goes to operations and maintenance versus only 30 percent to innovation. Something is wrong -- it should be the other way around -- and cloud provides a way to start reversing this, toward 70 percent in innovation and 30 percent in operations.

As we move into the cloud and talk about private cloud, the service function of IT starts becoming a reality, and this is referred to as hybrid delivery. Hybrid delivery is when you start looking at the different ways of providing services, whether they're outsourced, private cloud-based, or public cloud-based.

You start looking at becoming a service broker, which is the point at which you can say that, for this particular service, it makes the best sense to place it here. Then you start managing it and are able to fully optimize your services.

Going further out, by 2015, 18 percent of all IT delivery will be public cloud, 28 percent will be private cloud, and the rest will be in-house or outsourced. You can see the rapid change going forward.

Gardner: What kinds of applications do you think we're going to see? When you mention service enablement and these different cloud models, I think people want to know what sorts of applications will come first in terms of applicability to these models.

West: In the public cloud, the first ones are often collaboration applications. Those were the first to move into the public cloud space. Things like SharePoint, email, and calendaring applications were the early-adopter models.

Later, we have seen CRM applications move. Slowly but surely, you're seeing more and more application types, especially when you start looking at infrastructure as a service (IaaS). It's not so much the type of application as the type of application load.

As you see, the traditional model is all about selling products: fixed costs, fixed assets. Everything is fixed. But when you start looking at a service model, it's pay-per-use. It's flexibility and choice, but also a bit of uncertainty. In the traditional model you have controls, but in the service model, it's all about adaptability and change.

Big gap

So there's a big gap here. On one side, we're all about things being fixed, and on the other side, we're moving to being cloud-ready, to hybrid services, and hybrid service delivery. So how do we get across this great divide? We really need a bridge. We really need a way to move across this great divide and this big change.

The way we get across it is through transformation. It's a journey. Cloud is not something where you can wake up one day and say, "We're going to execute this instantly." You have to look at it as moving through levels of maturity.

This maturity model starts at the bottom. Some organizations are already at the beginning of the journey -- they've started standardizing, or they may have started virtualizing -- but it's a process. You have to keep moving up, and it's not just about technology.

View the full Expert Chat presentation on cloud adoption best practices.

Obviously, you have to get to the point where you're consuming cloud services. If you look at the movement to cloud, you can see it pulling organizations in, driven by the rapid adoption of cloud by the masses. There's also a great push from the business side. Businesses are hearing their customers talk about cloud and cloud-based applications. So there is a pull there.

There is also a push from the data center itself. IT sprawl and the difficulty of managing it are pushing toward cloud quite rapidly.

The question is, where are we now? Right now, a lot of companies are in an environment where they have started virtualizing. They've moved up a bit and started doing some optimization. So they're right at the edge.

But to move forward, you need to change more than just some of the technology. You also need to look at the people and the process in order to bring organizational maturity to the point where it's service-enabled. Then you start to leverage the agility of cloud.

If you're simply virtualized, then guess what -- you're not going to see the benefits that cloud offers. You need to advance in all of these areas.

Gardner: As we look at this continuum, how do organizations continue to cut costs while going through this transformation? As I pointed out, that's an essential ingredient to keeping the allegiance, trust, and support for IT going.

West: This journey is quite interesting. To a large degree, the cost optimization is built in. When you start the journey with standardization, you reduce cost there. As you virtualize, you get another level of cost reduction. Then, as you move to a shared-service model and a service orientation, you start connecting things to the business, and IT costs become visible as business costs.

Further optimized

Moving up to the point of elasticity, things are further optimized. This whole process is about optimization, and when you start talking about optimization, you're talking about driving down the costs.

Between the beginning of this journey and the end, we're reducing cost as we go. Each stage is another level of cost reduction.

We mentioned that cloud isn't just about technology. Obviously, technology is part of it, but it's also about automation and self-service portals. The cloud is about speed. Imagine the old traditional process: "Let me work out the capital equipment required. Let me get that approved. Let me write the PO."

To get a server under the traditional system, I've seen organizations take nine months. That's not agility. Agility is getting it in 90 seconds. You log into the portal, say, "I need a SharePoint server," and you're done.

As part of the process, you also have to get into standardization and into the service lifecycle. A cloud that never throws anything away is not an optimized cloud. Having a complete service lifecycle, from beginning to end, is important.

Usage and chargeback are key elements as well. Anything that's free always has a long queue, and in IT, a cloud without a chargeback model will be a cloud that is over-utilized and running out of control. Having a way of allocating costs and charging them back to the consuming parties, be they internal or outside customers, is very important.
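
A chargeback model can start very simply: meter usage per consuming party and multiply by published rates. A minimal Python sketch, with hypothetical rates and usage:

    # Minimal chargeback sketch: metered usage times published rates.
    # Rates and usage figures are hypothetical.

    RATES = {"vm_hours": 0.08, "storage_gb_months": 0.10, "backup_gb": 0.05}

    usage_by_unit = {
        "marketing": {"vm_hours": 1200, "storage_gb_months": 500, "backup_gb": 200},
        "finance":   {"vm_hours": 300,  "storage_gb_months": 2000, "backup_gb": 800},
    }

    for unit, usage in usage_by_unit.items():
        bill = sum(RATES[metric] * amount for metric, amount in usage.items())
        print(f"{unit}: ${bill:,.2f}")
    # marketing: $156.00   finance: $264.00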

Elements often forgotten in cloud are the people and the service orientation. In a traditional IT organization, you have a storage manager and a network manager. In a cloud, you have service managers. The whole structure changes. It doesn't necessarily reduce or increase roles, but the roles are different: relationship management, capacity management, vendor management. These are different terms from traditional IT.

What are the big changes in moving to private cloud from the lower levels of maturity? Obviously, getting into resource management, standardizing processes, getting some automation done, aligning with the business, and adding a service catalog, self-service, and chargeback. These are the foundations of moving from level 2, where you've done some virtualization, to the beginning of a private cloud implementation.

So what can we do in a private cloud? Test and development is the obvious first candidate. New services? Cloud is here. If you're implementing something new, it should be cloud-focused.

Then there are large batch-processing needs -- things that come and go. If I need processing power now and don't need it tomorrow, that plays to the key strengths of cloud.

Opportunities for cloud


High-performance computing, web services, database services, collaboration -- high-volume, frequently requested, standardized, and repeatable. That pretty well identifies the great opportunities for private cloud.

Now that we've talked about private cloud, how do we slowly move to more of a hybrid model? For the hybrid model, right off the bat, we need to start looking at adding public cloud services.

Once you start moving into public cloud, you need to understand how things scale with the business, meaning you need to look at the variability of costs. They need to be tied to the level of business activity.

Things like backup capability, interoperability and standards, and security are additional things we need to look at as we move into public cloud services and the hybrid model.

Let's talk about the types of workloads. Cloud suits things that are dynamic, that switch on and off: every Monday I need to run this application; it's going to consume significant resources once a month or once a quarter; or this project will run for a moderate amount of time with demand that comes and goes.

The next area that works really well is anything growing very, very rapidly. Because of the elasticity of cloud, handling rapid growth is a fundamental capability. Application workloads that need to grow very rapidly are ideal.

Unpredictability is another: applications with an unpredictable load work really well, as do things that are periodic, because your fixed cost stays low.

Now imagine you have a workload that runs 99 percent of the time. There are very few things like that in most organizations, but such applications exist, and they're not great candidates for cloud.
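
The underlying intuition is a duty-cycle calculation: pay-per-use wins only when a workload is idle much of the time. A rough Python sketch, with hypothetical hourly rates:

    # Rough duty-cycle intuition: pay-per-use wins when a workload is
    # idle most of the time. Rates are hypothetical.

    dedicated_rate = 0.50   # per hour, paid whether busy or idle
    on_demand_rate = 0.70   # per hour, paid only while running

    for duty_cycle in (0.05, 0.25, 0.50, 0.99):
        on_demand = on_demand_rate * duty_cycle   # cost per wall-clock hour
        winner = "cloud" if on_demand < dedicated_rate else "dedicated"
        print(f"runs {duty_cycle:>4.0%} of the time -> {winner}")
    # A job running 99% of the time favors dedicated capacity;
    # one running 5% of the time strongly favors pay-per-use.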

Let's talk about what is suitable to push to cloud. First, core activities that are essential to the business are not suitable for public cloud; those are best in a private cloud. But things that are not unique, not differentiators, or that are cost-driven are ideal for public cloud.

Basically, core activities are very good fits for private cloud, and less core or cost-driven activities are more ideal for a public cloud offering.
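
That placement rule can be written down directly. The attribute names in this Python sketch are hypothetical, but the logic mirrors the core-versus-context guidance above:

    # Sketch of the core-versus-context placement rule described above.
    # Attribute names are hypothetical.

    def place_workload(core: bool, differentiating: bool) -> str:
        """Suggest a landing zone per the core-versus-context rule."""
        # Core, differentiating work stays close; commodity, cost-driven
        # work goes where it is cheapest.
        return "private cloud" if (core or differentiating) else "public cloud"

    print(place_workload(core=True,  differentiating=True))   # private cloud
    print(place_workload(core=False, differentiating=False))  # public cloud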

Lock-in and neutrality?

Gardner: Glenn, looking at this notion of moving things in and out of private and public clouds -- perhaps moving from a core-and-context decision process into actual implementation -- what about standards, lock-in, and neutrality?

Where are we now in thinking about being able to move applications and services among and between clouds? What prevents us from getting locked into one cloud and being unable to move out?

West: Gartner actually did a study, and found that HP is one of the most open players in the industry, when it comes to cloud. A significant number of the public cloud suppliers actually use our equipment. We make a point of being totally open.

There are a significant number of cloud standards at every level, and HP does everything it can to remain part of those standards and to support them. The cloud industry is moving fast, and cloud is about openness. If you have a private cloud that can't burst to a public cloud, guess what -- that's not a viable market offering. The cloud industry as a whole, because of its interoperability requirements, has to be inherently open.

Gardner: So it's not only important to pick the technologies, but it's very important to pick the partners when you start to get into these strategies?

West: That's absolutely right. If the company you're getting your cloud from aims to lock you in, then you're going to get locked in. But if the company is pushing hard to stay open, you can see that too -- there are plenty of materials available to show who is pursuing lock-in and who is pursuing open standards.

What do we need to think about here? Flexibility is obviously important. Interoperability -- and I think Dana nailed that one on the head -- being able to work across multiple standards is important. The cloud is about agility. Having resource pools, and workloads that can move around those pools on demand, means you have to have great interoperability.

View the full Expert Chat presentation on cloud adoption best practices.

Data privacy and compliance issues come into play, especially as we move from a private cloud into public cloud or hybrid offerings. Those things are important, especially on the compliance side, because cloud allows data to be anywhere.

Some requirements, depending on the industry, actually restrict data movement. Skill sets are important too. Recovery, performance management -- all of these can be handled with the right automation, the right tools, and the right people in the cloud.

Greatest flexibility

We've talked about moving forward, and now we're getting into the full IT service broker concept. This is where we have the greatest flexibility. One of the things you described very well was dynamic sourcing. We can look at the workload and push and share these workloads internally and externally across multiple cloud providers, acting as a service broker and optimizing as we go.

You can take this even further from a corporate point of view. You could be a service provider yourself, taking those services and brokering and managing them across multiple delivery methods.

At this point, your organization has to get very good at service-level agreement (SLA) management. SLAs are very important when you're managing costs and workflows across providers. And when we start talking about going across multiple clouds, advanced automation becomes very important as well.
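
A broker's core decision can be sketched in a few lines of Python: choose the cheapest delivery method whose SLA still meets the workload's requirement. The providers, prices, and SLA figures below are hypothetical:

    # Sketch of a service broker's placement decision: pick the cheapest
    # provider that still meets the workload's SLA floor. Providers,
    # prices, and SLA figures are hypothetical.

    providers = [
        {"name": "internal-private", "price_per_hour": 0.60, "sla": 0.9995},
        {"name": "public-cloud-a",   "price_per_hour": 0.35, "sla": 0.999},
        {"name": "public-cloud-b",   "price_per_hour": 0.25, "sla": 0.995},
    ]

    def broker(required_sla: float) -> str:
        """Return the cheapest provider whose SLA meets the requirement."""
        eligible = [p for p in providers if p["sla"] >= required_sla]
        if not eligible:
            return "no provider meets the SLA"
        return min(eligible, key=lambda p: p["price_per_hour"])["name"]

    print(broker(0.999))    # public-cloud-a
    print(broker(0.9995))   # internal-private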

As we look at the future data center, it is very business-driven. You have multiple ways of sourcing your IT services -- both physical and virtual -- in the appropriate mix, changing practically on a daily basis as business needs demand.

Let's talk about the physical side and the changes in the data center. One interesting thing about resiliency is that a lot of data centers are looking at moving up the resiliency tiers, and each level brings significantly increased cost -- a practically exponential cost increase.

Once you implement cloud within your data center, you suddenly get a lot more flexibility, because instead of building a single Tier 4 data center, you could use the efficiency of cloud to build Tier 2 data centers and still gain greater resiliency and agility.
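
The arithmetic behind that claim follows if you take the commonly cited Uptime Institute availability targets (roughly 99.741 percent for Tier 2 and 99.995 percent for Tier 4) and assume the two sites fail independently -- an assumption, not a guarantee. A short Python check:

    # Availability arithmetic behind "two Tier 2 sites can beat one Tier 4."
    # Tier figures are the commonly cited Uptime Institute targets; true
    # independence of the two sites is an assumption.

    tier2 = 0.99741
    tier4 = 0.99995

    # Probability that at least one of two independent Tier 2 sites is up:
    two_tier2 = 1 - (1 - tier2) ** 2

    print(f"single Tier 4:    {tier4:.5%}")      # 99.99500%
    print(f"two Tier 2 sites: {two_tier2:.5%}")  # ~99.99933%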

So the big change is in the way the data center's physical infrastructure is done, and the thing changing most rapidly is density. In a traditional data center, infrastructure is reasonably low to moderate density.

In a cloud-enabled data center, high density is the norm. Greater efficiency in power, space, and cooling is typical. It becomes a true IT resource where anything can run anywhere, and that is quite different.

The density change is radical; the power per rack and the cooling all change. And even in traditional data centers, things such as structured cabling and power have to have flexibility and the ability to change.

Orchestration also becomes important. In a cloud-enabled data center, everything needs to scale. All the cost factors should scale with the amount of business.

Standardization and efficiency


The standardization level changes as well. Standardizing configurations allows rapid redeployment of equipment. Finally, there's efficiency -- dynamic power and cooling that track the workloads.

These are pretty radical changes from traditional data centers, and data centers are evolving. Traditional data centers were quite monolithic -- one large floor, one large building, and that's pretty much it.

They're slowly moving up to multi-tiered data centers, and then to flexible data centers that share resources, where everything can change.

In most organizations, when you start looking across the different areas -- categories, types, culture -- the technology is there. A company today will have different levels of maturity in different areas. Maturity modeling is a scorecard, a grade card, that lets you understand where you are compared to the industry. And as this example shows, different areas will sit at different levels of maturity.

The problem for cloud is that we need something a little different. We need an even playing field across all of the areas, so that organizational maturity, culture, staff, and best practices are all at an even level for cloud to work.
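
One way to express that even playing field is that the weakest domain gates overall cloud readiness. A minimal Python scorecard sketch, with hypothetical domains and scores:

    # Maturity scorecard sketch: the weakest area gates cloud readiness.
    # Domains and scores (1-5) are hypothetical.

    scores = {
        "technology":     4,
        "governance":     2,
        "culture":        3,
        "best_practices": 2,
        "service_mgmt":   3,
    }

    readiness = min(scores.values())   # an uneven profile is gated by its low spots
    laggards = [d for d, s in scores.items() if s == readiness]
    print(f"effective cloud maturity: level {readiness}")
    print(f"invest first in: {', '.join(laggards)}")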

If you bought the best technology but didn't upgrade your governance or your culture, and didn't implement best practices, it won't work. The best infrastructure without proper service portfolio management, for example, just isn't going to work. For cloud to work properly, you must look at increasing maturity across all areas of your data center -- both the people and the process.

Some of the criteria for cloud include technology, consolidation, virtualization, management, governance, the people, process and services, and the service level. Managing the service level can often reduce your cost quite significantly in cloud.

On the process side, that means adopting ITIL and looking at process automation and process management. Organizational structures and roles are quite different in cloud.

Think services. Understand what you have. Decide what your core and your context are: what is the foundation of your business, and what could you consider moving into the public cloud?

Get your business units on your side. Standardize, look at automating processes, and explore infrastructure convergence. Then introduce your portal and make sure you have chargeback. Start with non-critical or green-field areas -- green-field areas are your new activities. Then slowly move into a hybrid approach.

Optimize further

Evolve, optimize, benchmark, cycle through -- and optimize further. HP has been doing this for a while. We went through a very large transformation ourselves, and out of that journey we've created a huge amount of intellectual property. We have a Transformation Experience Workshop that helps organizations understand what changes are needed. We can get people talking and moving, creating a vision together.

We have data center services for optimization and the physical transformation of data centers, and we have comprehensive data center transformation services and a road map. So get some action going -- let's start the transformation.

A great way to start is a one-day Cloud Transformation Experience Workshop. It's done in panels with key decision makers, and it allows you to start building a foundation for this transformation journey.

Gardner: Okay. Great. Well, we'll have to leave it there. I really want to thank our audience for joining us. I hope you found it as valuable as I did.

I also thank our guest, Glenn West, the Data Center and Converged Infrastructure Lead for Asia-Pacific and Japan in HP’s Technology Services Organization.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening to a special BriefingsDirect presentation, a sponsored podcast created from a recent HP expert chat discussion on best practices for cloud computing adoption and use.

Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

View the full Expert Chat presentation on cloud adoption best practices.

Transcript of a sponsored BriefingsDirect podcast on how best to pursue cloud models. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.
