
Tuesday, August 21, 2012

New Levels of Automation and Precision Needed to Optimize Backup and Recovery in Virtualized Environments

Transcript of a BriefingsDirect podcast on the relationship between increased virtualization and the need for data backup and recovery.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Quest Software.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the relationship between increasingly higher levels of virtualization and the need for new data backup and recovery strategies.

We'll examine how the era of major portions of servers being virtualized has provided an on-ramp to attaining data lifecycle benefits and efficiencies. And at the same time, these advances are helping to manage complex data environments that consist of both physical and virtual systems.

What's more, the elevation of data to the lifecycle efficiency level is also forcing a rethinking of the culture of data, of who owns data, and when, and who is responsible for managing it in a total lifecycle across all applications and uses.

This is different from the previous and, in many cases, current approach, which is often fragmented, with different oversight for data across far-flung instances and uses.

Lastly, our discussion focuses on bringing new levels of automation and precision to the task of solving data complexity, and of making always-attainable data the most powerful asset that IT can deliver to the business.

Here to share insights on where the data availability market is going and how new techniques are being adopted to make the value of data ever greater, we're joined by John Maxwell, Vice President of Product Management for Data Protection, at Quest Software. Welcome back, John. [Disclosure: Quest Software is a sponsor of BriefingsDirect podcasts.]

John Maxwell: Hi, Dana. Thanks. It’s great to be here to talk on a subject that's near and dear to my heart.

Gardner: Let’s start at a high level. Why has server virtualization become a catalyst for data modernization? Is this an unintended development, or is it a natural evolution?

Maxwell: I think it’s a natural evolution, and I don’t think it was even intended on the part of the two major hypervisor vendors, VMware and Microsoft with their Hyper-V. As we know, 5 or 10 years ago, virtualization was touted as a means to control IT costs and make better use of servers.

Utilization was in single digits, and with virtualization you could get it much higher. But the rampant success of virtualization impacted storage and the I/O where you store the data.

Upped the ante

If you look at the announcements VMware made around vSphere 5 and storage, and the recent launch of Windows Server 2012 Hyper-V, where Microsoft even upped the ante and added support for Fibre Channel with their hypervisor, storage is at the center of the virtualization topic right now.

It brings a lot of opportunities to IT. Now, you can separate some of the choices you make, whether it has to do with the vendors that you choose or the types of storage, network-attached storage (NAS), shared storage and so forth. You can also make the storage a lot more economical with thin disk provisioning, for example.

There are a lot of opportunities out there that are going to allow companies to make better utilization of their storage just as they've done with their servers. It’s going to allow them to implement new technologies without necessarily having to go out and buy expensive proprietary hardware.

From our perspective, the richness of what the hypervisor vendors are providing in the form of APIs, new utilities, and things that we can call on and utilize, means there are a lot of really neat things we can do to protect data. Those didn't exist in a physical environment.

It’s really good news overall. Again, the hypervisor vendors are focusing on storage and so are companies like Quest, when it comes to protecting that data.

Gardner: As we move towards that mixed environment, what is it about data that, at a high level, people need to think differently about? Is there a shift in the concept of data, when we move to virtualization at this level?

Maxwell: First of all, people shouldn’t get too complacent. We've seen people load up virtual disks, and one area of focus at Quest, separate from data protection, is performance monitoring. That's why we have tools that allow you to drill down and optimize your virtual environment, from the virtual disks to how they're laid out on the physical disks.

And even hypervisor vendors -- I'm going to point back to Microsoft with Windows Server 2012 -- are doing things to alleviate some of the performance problems people are going to have. At face value, your virtual disk environment looks very simple, but sometimes you don’t set it up or it’s not allocated for optimal performance or even recoverability.

There's a lot of education going on. The hypervisor vendors, and certainly vendors like Quest, are stepping up to help IT understand how these logical virtual disks are laid out and how to best utilize them.

Gardner: It’s coming around to the notion that when you set up your data and storage, you need to think not just for the moment for the application demands, but how that data is going to be utilized, backed up, recovered, and made available. Do you think that there's a larger mentality that needs to go into data earlier on and by individuals who hadn’t been tasked with that sort of thought before?

See it both ways

Maxwell: I can see it both ways. At face value, virtualization makes it really easy to go out and allocate as many disks as you want. Vendors like Quest have put in place solutions that make it so that within a couple of mouse clicks, you can expose your environment, all your virtual machines (VMs) that are out there, and protect them pretty much instantaneously.

From that aspect, I don't think there needs to be a lot of thought, as there was back in the physical days, of how you had to allocate storage for availability. A lot of it can be taken care of automatically, if you have the right software in place.

That said, a lot of people may have set themselves up for problems if they haven’t thought of disaster recovery (DR), for example. When I say DR, I also mean failover of VMs and the like, as far as how they could set up an environment to ensure availability of mission-critical applications.

For example, you wouldn’t want to put everything, all of your logical volumes, all your virtual volumes, on the same physical disk array. You might want to spread them out, or you might want the capability of replicating between different hypervisors, physical servers, or arrays.

Gardner: I understand that you've conducted a survey to try to find out more about where the market is going and what the perceptions are in the market. Perhaps you could tell us a bit about the survey and some of the major findings.

Maxwell: One of the findings that I find most striking, since I have been following this for the past decade, is that our survey showed that 70 percent of organizations now consider at least 50 percent of their data mission critical.

That may sound ambiguous at first, because what is mission critical? From the context of recoverability, it generally means data that has to be recovered in less than an hour and/or recovered to within an hour of the failure, from a recovery-point perspective.

This means that if I have a database, I can’t go back 24 hours. The least amount of time that I can go back is within an hour of losing data, and in some cases, you can’t go back even a second. But it really gets into that window.
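The two windows Maxwell describes are what practitioners usually call the recovery-time objective (RTO) and the recovery-point objective (RPO): a database that can "go back" no more than an hour has an RPO of one hour, and one that must be usable again within the hour has an RTO of one hour. Here is a minimal Python sketch that makes the distinction concrete; the names and thresholds are illustrative, not from any Quest product:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class RecoverySLA:
    rto: timedelta  # recovery-time objective: how long a restore may take
    rpo: timedelta  # recovery-point objective: how much recent data may be lost

# "Mission critical" in the survey's sense: both windows under an hour.
MISSION_CRITICAL = RecoverySLA(rto=timedelta(hours=1), rpo=timedelta(hours=1))

def sla_met(failed_at: datetime, last_recovery_point: datetime,
            restored_at: datetime, sla: RecoverySLA) -> bool:
    """Check one completed recovery against its service levels."""
    data_lost = failed_at - last_recovery_point  # governed by the RPO
    downtime = restored_at - failed_at           # governed by the RTO
    return data_lost <= sla.rpo and downtime <= sla.rto
```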

I remember in the days of the mainframe, you'd say, "Well, it will take all day to restore this data, because you have tens or hundreds of tapes to do it." Today, people expect everything to be back in minutes or seconds.

The other thing that was interesting from the survey is that one-third of IT departments were approached by their management in the past 12 months to increase the speed of the recovery time. That really dovetails with the 50 percent of data being mission critical. So there's pressure on the IT staff now to deliver better service-level agreements (SLAs) within their company with respect to recovering data.

Terms are synonymous

The other thing that's interesting is that data protection and the term backup are synonymous. It's funny. We always talk about backup, but we don't necessarily talk about recovery. Something that really stands out now from the survey is that recovery or recoverability has become a concern.

Case in point: 73 percent of respondents, or roughly three quarters, now consider recovering lost or corrupted data and restoring those mission critical applications their top data-protection concern. Only 4 percent consider the backup window the top concern. Ten years ago, all we talked about was backup windows and speed of backup. Now, only 4 percent considered backup itself, or the backup window, their top concern.

So 73 percent are concerned about the recovery window, only 4 percent about the backup window, and only 23 percent consider the ability to recover data independent of the application their top concern.

Those trends really show that there is a need. The beauty is that, in my opinion, we can tighten those service levels more easily in virtualized environments than we can in physical environments.

Gardner: We seem to have these large shifts in the market, one around virtualization of servers and storage and the implications of first mixed, and then perhaps a majority, or vast majority, of virtualized environments.

The second shift is the heightened requirements around higher levels of mission-critical allocation or designation for the data and then the need for much greater speed in recovering it.

Let's unpack that a little bit. How do these fit together? What's the relationship between moving towards higher levels of virtualization and being able to perhaps deliver on these requirements, and maybe even doing it with some economic benefit?

Maxwell: You have to look at a concept that we call tiered recovery. That's driven by the importance now of replication in addition to traditional backup, and by new technologies such as continuous data protection and snapshots.

That gets to what I was mentioning earlier. Data protection and backup are synonymous, but it's a generic term. A company has to look at which policies or which solutions to put in place to address the criticality of data, but then there is a cost associated with it.

For example, it's really easy to say, "I'm going to mirror 100 percent of my data," or "I'm going to do synchronous replication of my data," but that would be very expensive from a cost perspective. In fact, it would probably be just about unattainable for most IT organizations.

Categorize your data

What you have to do is understand and categorize your data, and that's one of the focuses of Quest. We're introducing something this year called NetVault Extended Architecture (NetVault XA), which will allow you to protect your data based on policies, based on the importance of that data, and apply the correct solution, whether it's replication, continuous data protection, traditional backup, snapshots, or a combination.

You can't just do this blindly. You have got to understand what your data is. IT has to understand the business, and what's critical, and choose the right solution for it.
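To illustrate the idea of policy-driven, tiered recovery, here is a small Python sketch. The tier names, policy table, and functions are assumptions made for illustration; they are not NetVault XA's actual configuration model:

```python
from enum import Enum

class Tier(Enum):
    MISSION_CRITICAL = 1    # sub-hour recovery time and recovery point
    BUSINESS_IMPORTANT = 2  # a few hours of exposure is acceptable
    ARCHIVAL = 3            # daily protection is enough

# Each tier maps to a blend of techniques, balancing cost against criticality.
PROTECTION_POLICY = {
    Tier.MISSION_CRITICAL:   ["synchronous replication", "hardware snapshots"],
    Tier.BUSINESS_IMPORTANT: ["continuous data protection", "nightly backup"],
    Tier.ARCHIVAL:           ["weekly backup to disk", "copy to tape"],
}

# Applications are classified once; the policy follows the application,
# not the individual servers or disk arrays underneath it.
APPLICATIONS = {
    "Oracle Financials": Tier.MISSION_CRITICAL,
    "HR self-service":   Tier.BUSINESS_IMPORTANT,
}

def protections_for(app: str) -> list[str]:
    """Look up the protection techniques an application's tier calls for."""
    return PROTECTION_POLICY[APPLICATIONS[app]]
```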

Gardner: It's interesting to me that if we're looking at data and trying to present policies on it, based on its importance, these policies are going to be probably dynamic and perhaps the requirements for the data will be shifting as well. This gets to that area I mentioned earlier about the culture around data, thinking about it differently, perhaps changing who is responsible and how.

So when we move to this level of meeting our requirements that are increasing, dealing in the virtualization arena, when we need to now think of data in perhaps that dynamic fluid sense of importance and then applying fit-for-purpose levels of support, backup, recoverability, and so forth, whose job is that? How does that impact how the culture of data has been and maybe give us some hints of what it should be?

Maxwell: You've pointed out something very interesting, especially in the area of virtualization, just as we have noticed over the seven years of our vRanger product, which invented the backup market for virtualized environments.

It used to be, and it still is in some cases, that the virtual environment was protected by the person, usually the sys admin, who was responsible for, in the case of VMware, the ESXi hypervisors. They may not necessarily have been aligned with the storage management team within IT that was responsible for all storage and more traditional backups.

What we see now are the traditional people who were responsible for physical storage taking over the responsibility of virtual storage. So it's not this thing that’s sitting over on the side and someone else does it. As I said earlier, virtualization is now such a large part of all the data, that now it's moving from being a niche to something that’s mainstream. Those people now are going to put more discipline on the virtual data, just as they did the physical.

Because of the mission criticality of data, they're going from being people who looked at data as just a bunch of volumes or arrays, logical unit numbers (LUNs), to "these are the applications and this is the service level associated with the applications."

When they go to set up policies, they're not just thinking, "I'm backing up a server" or "I'm backing up disk arrays," but rather, "I'm backing up Oracle Financials," "I'm backing up SAP," or "I'm backing up some in-house human resources application."

Adjust the policy

And the beauty of where Quest is going is what happens if those rules change. Instead of having to remember all the different disk arrays and servers that are associated with, say, Oracle Financials, I can go in and adjust the policy that's associated with all of the data that makes up Oracle Financials. I can fine-tune how I'm going to protect it and the recoverability of the data.
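Continuing the sketch above, re-tiering everything that makes up Oracle Financials becomes a single policy change rather than a hunt through individual servers and arrays. Again, this is an illustration, not the product's real interface:

```python
# The application's tier changes in one place...
APPLICATIONS["Oracle Financials"] = Tier.BUSINESS_IMPORTANT

# ...and every disk, server, and VM underneath it inherits the new policy.
print(protections_for("Oracle Financials"))
# ['continuous data protection', 'nightly backup']
```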

Gardner: That to me brings up the issue about ease of use, administration, interfaces, making these tools something that can be used by more people or a different type of person. How do we look at this shift and think about extending that policy-driven and dynamic environment at the practical level of use?

Maxwell: It's interesting that you bring that up too, because we've had many discussions about that here at Quest. I don't want to use the term consumerization of IT, because it has been used almost too much. But with the increased amount of virtual data out there, added to the whole pot of heterogeneous environments, whether you have Windows and Linux, MySQL, Oracle, or Exchange, it's impossible for the people who are responsible for the protection and recoverability of data to have the skills needed to know each one of those apps.

We want to make it as easy to back up and recover a database as it is a flat file. The fine line that we walk is that we don't want to dumb the product down. We want to provide intuitive GUIs, a user experience that is a couple of clicks away to say, "Here is a database associated with the application. What point do I want to recover to?" and recover it.

If there needs to be some more hands-on or more complicated things that need to be done, we can expose features to maybe the database administrator (DBA), who can then use the product to do more complex recovery or something to that effect.

We've got to make it easy for this generalist, no matter what hypervisor -- Hyper-V or VMware, a combination of both, or even KVM or Xen -- which database, which operating system, or which platform.

Again, they're responsible for everything. They're setting the policies, and they shouldn't have to be specialists. They shouldn't have to be an Exchange administrator, an Oracle DBA, or a Linux systems administrator to be able to recover this data.

We're going to do that in a nice, pretty package. Today, there are many people here at Quest who walk around with a tablet PC as much as they do with their laptop. So our next-generation user interface (UI) around NetVault XA is being designed around a tablet computing scenario, where you can swipe data, and your toolbar is on the left and right, as if you were holding it with your thumbs -- that type of thing.

Gardner: So, it's more access when it comes to the endpoint, and as we move towards supporting more of these point applications and data types with automation and a policy-driven approach or an architecture, that also says to me that we are elevating this to the strategic level. We're looking at data protection as a concept holistically, not point by point, not source by source and so forth.

Again, it seems that we have these forces in the market, virtualization, the need for faster recovery times, dealing with larger sets of data. That’s pushing us, whether we want to or even are aware of it, towards this level of a holistic or strategic approach to data.

Let me just see if you have any examples, at this point, of companies that are doing this and what it's doing for them. How are they enjoying the benefits of elevating this to that strategic or architecture level?

Exabyte of data

Maxwell: We have one customer, and I won't mention their name, but they are one of the top five web properties in the world, and they have an exabyte of data. Their incremental backups are almost 500 petabytes, and they have an SLA with management that says 96 percent of backups will complete successfully, because they have so much data that changes in a week’s time.

You can't miss a backup, because that gets to the recoverability of the application. They're using our NetVault product to back up that data, using both traditional methods and integrated snapshots. Snapshots are one technology tier in their tiered recovery scenario. They use NetVault in conjunction with hardware snapshots, where there is no backup window. The backup, as far as the application is concerned, is for all practical purposes instantaneous.

Then, they use NetVault to manage that data on disk and eventually move it to tape. The snapshots allow them to do that very quickly for massive amounts of data. And by massive amounts of data, I'm talking 100 million files associated with one application. To put that back in place at any point in time very quickly, with NetVault orchestrating that hardware snapshot technology, is pretty mind-blowing.

Gardner: That does give us a sense of the scale and complexity and how it's being managed and delivered.

You mentioned how Quest is moving towards policy-driven approaches, improving UIs, and extending those UIs to mobile tier. Are there any other technology approaches that Quest is involved with that further explain how some of these challenges can be met? I'm very interested in agentless, and I'm also looking at how that automation gets extended across more of these environments.

Maxwell: There are two things I want to mention. Today, Quest protects VMware and Microsoft Hyper-V environments, and we'll be expanding the hypervisors that we're supporting over the next 12 months. Certainly, there are going to be a lot of changes around Windows Server 2012 or Hyper-V, where Microsoft has certainly made it a lot more robust.

There are a lot more things for us to exploit, because we're envisioning customer environments where they're going to have multiple hypervisors, just as today people have multiple operating systems and databases.

We want to take care of that, mask some of the complexity, and allow people to possibly have cross-hypervisor recoverability. In other words, we want to enable safe failover of a VMware ESXi system to Microsoft Hyper-V, or vice versa.

There's another thing that’s interesting, and it's something that has challenged engineers here at Quest. This gets into the concepts of how you back up or protect data differently in virtual environments. Our vRanger product is the market leader, with more than 40,000 customers, and it’s completely agentless.

As we have evolved the product over the past seven years, we've gone through three generations and exploited various APIs. With vRanger, we've now moved to what is called a virtual appliance architecture. We have a vRanger service that performs backup and replication for one or hundreds of VMs that exist either on that one physical server or in a virtual cluster. This virtual appliance can even protect VMs that exist on other hardware.

Scalability

The beauty of this is, first, the scalability. I have one software app running that’s highly controllable. You can control what resources it uses while replicating, protecting, and recovering all of the VMs. That’s easy to manage, versus having to have an agent installed in every one of those VMs.

Two, there's no overhead. The VMs don’t even know, in most cases, that a backup is occurring. In the case of VMware, we use the services of ESXi that allow us to snapshot the virtual volumes, called VMDKs, and back up or replicate the data.

Now, there is one thing that we do that’s different than some others. Some vendors do this and some don’t, and I think one of those things you have to look at when you choose a virtual backup or virtual data protection vendor is their technical prowess in this area. If you're backing up a VM that has an application such as Exchange or SharePoint, that’s a live application, and you want to be able to synchronize the hypervisor snapshot with the application that’s running.

There’s a service in Windows called Volume Shadow Copy Service, or VSS for short, and one of the unique things that Quest does with our backup software is synchronize the hypervisor snapshot of the virtual disks with the application using VSS, so we have a consistent point-in-time backup.

To communicate, we dynamically inject binaries into the VM that do the process and then remove themselves. So, for a very short time, there's something running in that VM, but then it's gone, and that allows us to have consistent backup.

That way, from that one image backup that we've done, I can restore an entire VM, individual files, or in the case of Microsoft Exchange or Microsoft SharePoint, I can recover a mailbox, an item, or a document out of SharePoint.
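The quiesce-snapshot-remove sequence Maxwell walks through can be sketched as follows. The helper names here are hypothetical stand-ins, not vRanger's actual API; the point is the ordering, with VSS holding application writes only for the instant the snapshot is taken:

```python
class GuestAgent:
    """Stand-in for the temporarily injected in-guest helper binaries."""
    def vss_freeze(self) -> None:
        print("VSS: application writes flushed and briefly held")
    def vss_thaw(self) -> None:
        print("VSS: application writes resumed")
    def remove(self) -> None:
        print("helper binaries removed; nothing stays resident in the VM")

def inject_quiesce_binaries(vm: str) -> GuestAgent:
    print(f"injecting helper into {vm}")
    return GuestAgent()

def hypervisor_snapshot(vm: str) -> str:
    """Point-in-time copy of the VM's virtual disks (VMDKs)."""
    return f"snapshot-of-{vm}"

def consistent_vm_backup(vm: str) -> str:
    """Application-consistent, near-zero-window image backup of one VM."""
    agent = inject_quiesce_binaries(vm)
    try:
        agent.vss_freeze()               # quiesce Exchange/SharePoint via VSS
        snap = hypervisor_snapshot(vm)   # synchronized with application state
        agent.vss_thaw()
    finally:
        agent.remove()
    return snap  # one image, restorable as a whole VM, a file, or an item

backup = consistent_vm_backup("exchange-01")
```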

Gardner: So the more application-aware the solution is, the easier it is, it seems, to have this granular level of restore choices. That's fit for purpose, when it comes to deciding what level of backup, recovery, and support for the data lifecycle is required.

This also fits into some larger trends around moving a data center to a software level or capability. Any thoughts on how what you're doing at Quest fits into this larger data-center trend? It seems to me that it’s at the leading or cutting edge.

Maxwell: One of the beauties of virtualization is that I can move data without the application being conscious of it happening. There's a utility within VMware, for example, called Storage vMotion that allows you to move data from A to B. It's a very easy way to migrate off of an older disk array to a new one, and you never have to bring the app down. It's all software driven within the hypervisor, with a lot of control. Basically, it’s a seamless process.

What this opens up, though, is the ability for what we're looking at doing at Quest. If there's a means to move data around, why can't I then create an environment where I could do DR, whether it's within the data center for hardware redundancy or whether it's like what we do here at Quest?

Replicate data


We replicate data amongst various Quest facilities. Then, we can bring up an application that was running in location A at location B, on unlike hardware. It can be completely different storage, completely different servers, but since they're VMs, it doesn’t matter.

That kind of flexibility that virtualization brings is going to give every IT organization in the world the type of failover capabilities that used to exist only for the Global 1000, who had to set up a hot site or a second data center. They would use very expensive, proprietary hardware-based replication and things like that. So you had to have like arrays, like servers, and all of that, just to have availability.

Now, with virtualization, it doesn’t matter, and of course, we have plenty of bandwidth, especially here in the United States. So it’s very economical, and this gets back to our survey that showed that for IT organizations, 73 percent were concerned about recovering data, and that’s not just recovering a file or a database.

Here in California, we're always talking about the big one. Well, when the big one happens, whole bunches of server racks may fall over. In the case of Quest, we want to be able to bring those applications up in an environment that's in a different part of the country, with no fault zones and that type of thing, so we can continue our business.

Gardner: We just saw a recent example of unintended or unexpected circumstances with the Mid-Atlantic states and some severe thunderstorms, which caused some significant disruption. So we always need to be thoughtful about the unexpected.

Another thing that occurred to me while you were discussing these sorts of futuristic scenarios, which I imagine aren’t that far off, is the impact that cloud computing, another big trend in the market, is bringing to the table.

It seems to me that bringing some of the cloud models, cloud providers, service models into play with what you have described also expands what can be done across larger sets of organizations and maybe even subsets of groups within companies. Any thoughts briefly on where some of the cloud provider scenarios might take this?

Maxwell: It’s funny. Two years ago, when people talked about cloud and data protection, the cloud was just considered a target: I would back up to the cloud or replicate to the cloud. Now, we're talking about actually putting data protection products in the cloud, so you can back up the data locally within the cloud and then maybe even replicate it or back it up back to on-prem, which is kind of a novel concept if you think about it.

If you host something in the cloud, you can back it up locally up there and then actually keep a copy on-prem. Also, we're certainly looking at having generic support for failover into the cloud, working with various service providers where you can pre-provision VMs out there, for example.

You're replicating data. You sense that you have had a failure, and all you have to do is, via software, bring up those VMs, pointing them at the disk replicas you put up there.
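In code terms, the pre-provisioned failover Maxwell outlines reduces to something like the sketch below. The detection logic and names are assumptions for illustration; real products make the "sense a failure" step far more careful:

```python
# Pre-built cloud VMs, each mapped to the disk replica it should boot from.
PREPROVISIONED = {
    "erp-app": "replica-disk-erp",
    "mail":    "replica-disk-mail",
}

def site_failed(heartbeats_missed: int, threshold: int = 3) -> bool:
    """Crude failure detection: several consecutive missed heartbeats."""
    return heartbeats_missed >= threshold

def fail_over() -> None:
    """Software-only recovery: power on pre-provisioned VMs on the replicas."""
    for vm, replica in PREPROVISIONED.items():
        print(f"powering on {vm} in the cloud, attached to {replica}")

if site_failed(heartbeats_missed=3):
    fail_over()
```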

Different cloud providers

Then, there's the concept of what you do if a certain percentage of all your IT apps are hosted in cloud by different cloud providers. Do you want to be able to replicate the data between cloud vendors? Maybe you have data that's hosted at Amazon Web Services. You might want to replicate it to Microsoft Azure or vice versa or you might want to replicate it on-premise (on-prem).

So there's going to be a lot of neat hybrid options. The hybrid cloud is going to be a topic that we're going to talk about a lot now, where you have that mixture of on-prem, off-prem, hosted applications, etc., and we are preparing for that.

Gardner: I'm afraid we're about out of time. You've been listening to a sponsored BriefingsDirect podcast discussion on the relationship between increasingly higher levels of virtualization and the need for new backup and recovery strategies.

We've seen how solving data complexity and availability in the age of high virtualization is making always-attainable data the most powerful asset that an IT organization can deliver to its users.

I'd like to thank our guest. We've been joined by John Maxwell, Vice President of Product Management for Data Protection at Quest Software.

John, would you like to add anything else, maybe in terms of how organizations typically get started? This does seem like a complex undertaking, and it has many different entry points. Are there some best practices you've seen in the market about how to go about this, or at least to get going?

Maxwell: The number one thing is to find a partner. At Quest, we have hundreds of technology partners that can help companies architect a strategy utilizing the Quest data protection solutions.

Again, choose a solution that hits all the key points. In the case of VMware, you can go to VMware’s site and look for VMware Ready-certified solutions. Same thing with Microsoft, whether it’s Windows Server 2008 or 2012 certification. Make sure that you are getting a solution that’s truly certified. A lot of products say they support virtual environments, but they don’t have that real certification, and as a result, they can’t do a lot of the innovative things that I’ve been talking about.

So find a partner who can help, or, we at Quest can certainly help you find someone who can help you architect your environment and even implement the software for you, if you so choose. Then, choose a solution that is blessed by the appropriate vendor and has passed their certification process.

Gardner: I should also point out that VMworld is coming up next week. I expect that you'll probably have a big presence there, and a lot of the information that we have been talking about will be available in more detail through the VMworld venue or event.

Maxwell: Absolutely, Dana. Quest will have a massive presence at VMworld, both in San Francisco and Barcelona. We'll be demonstrating technologies we have today, and we'll also be making some major announcements and previewing some really exciting software at the show.

Gardner: Well, great. This is Dana Gardner, Principal Analyst at Interarbor Solutions. I'd like to thank our audience for listening, and invite them to come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Quest Software.

Transcript of a BriefingsDirect podcast on the relationship between increased virtualization and the need for data backup and recovery. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Thursday, August 16, 2012

Columbia Sportswear Extends Deep Server Virtualization to Improved ERP Operations, Disaster Recovery Efficiencies

Transcript of a sponsored BriefingsDirect podcast on how Columbia Sportswear has harnessed virtualization to provide a host of benefits for its business units.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how outerwear and sportswear maker and distributor Columbia Sportswear has used virtualization techniques and benefits to improve their business operations.

We’ll see how Columbia Sportswear’s use of deep virtualization assisted in rationalizing its platforms and data center, as well as led to benefits in their enterprise resource planning (ERP) implementation. We’ll also see how it formed a foundation for improved disaster recovery (DR) best practices.

Stay with us now to learn more about how better systems make for better applications that deliver better business results. Here to share their virtualization journey is Michael Leeper, Senior Manager of IT Engineering at Columbia Sportswear in Portland, Oregon. Welcome, Michael. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Michael Leeper: Good morning, Dana.

Gardner: We’re also here with Suzan Frye, Manager of Systems Engineering at Columbia Sportswear. Welcome to BriefingsDirect, Suzan.

Suzan Frye: Good morning, Dana.

Gardner: Let’s start with you, Michael. Tell me a little bit about how you got into virtualization. What were some of the requirements that you needed to fulfill at the data center level? Then we’ll dig down into where that went and what it paid off.

Leeper: Pre-2009, we'd experimented with virtualization. It was one of those things that I had my teams working on, mostly so we could tell my boss that we were doing it, but there wasn’t a significant focus on it. It was a nice toy to play with in the corner and it helped us in some small areas, but there were no big wins there.

In mid-2009, the board of directors at Columbia decided that we, as a company, needed a much stronger DR plan. That included the construction of a new data center for us to house our production environments offsite.

As we were working through the requirements of that project with my teams, it became pretty clear for us that virtualization was the way we were going to make that happen. For various reasons, we set off on this path of virtualization for our primary data center, as we were working through issues surrounding multiple data centers and DR processes.

Our technologies weren't based on the physical world anymore. We were finding more issues in physical than we were in virtual. So we started down this path to virtualize our entire production world. By that point, mid-2010 had come around, and we were ready to go. We had built our DR stack and virtualized our primary data centers, taking us to an 80 to 90 percent virtual machine (VM) rate.

Extremely successful


We were extremely successful in that process. We were able to move our primary data center over a couple of weekends with very little downtime to the end users, and that was all built on VMware technology.

About a week after we had finished that project, I got a call from our CIO, who said he had purchased a new ERP system, and Columbia was going to start down the path of a fully new ERP implementation.

I was being asked at that time what platform we should run it on, and we had a clean slate to look everywhere we could to find what we felt was the safest and most stable platform to run the crown jewels of the company, which is ERP. For us, that was going to be the SAP stack.

So it wasn't a hard decision to virtualize ERP for us. We were 90 percent virtual anyway. That’s what we were good at, and that’s where teams were staffed and skilled at. What we did was design the platform that we felt was going to meet our corporate standards and really meet our goals. For us that was running ERP on VMware.

Gardner: It sounds as if you had a good rationale for moving into a highly virtualized environment, but that it made it easier for you to do other things. Am I reading too much into it, or would you really say that your migration for ERP was much easier as a result of being highly virtualized?

Leeper: There are a couple of things there. Specifically in the migration to virtualization, we knew we were going to have to go through the effort of moving operating systems from one site to another. We determined that we could do that once on the physical side, relatively easily, and probably the same amount of effort as doing it once by converting physical to virtual.

The problem was that the next time we wanted to move services back from one facility to another in the physical world, we're going to have to do that work again. In the virtual space, we never had to do it again.

By having the teams go through the effort of virtualizing a server and then moving it to another data center, all we needed to do was the work once. For my engineers, any time we can get them to do the mundane stuff once, it's better than doing it multiple times. So we got that effort taken care of in that early phase of the project to virtualize our environments.

For the ERP platform specifically, this was a net new implementation. We were converting from a JD Edwards environment running on IBM big iron to a brand-new SAP stack. We didn’t have anything to migrate. This was really built from scratch.

So we didn’t have to worry about a lot of the legacy configurations or legacy environments that may have been there for us. We got to build it new. And by that point in our journey, virtualized was the only way for us to do it. That’s what we do, it’s how we do it, and that's what we’re good at.

Across the board


Gardner: Just for the benefit of our audience, let’s hear a bit more about Columbia Sportswear. You’re manufacturing, distributing, and retailing. I assume you’re doing an awful lot online. Give us a sense of the business requirements behind your story around virtualization, DR, and ERP.

Leeper: Columbia Sportswear is based in Portland, Oregon. We're the worldwide leader in apparel and accessories. We sell primarily outerwear and sportswear products, and a little bit of footwear, globally. We have about 4,000 employees, 50 some-odd physical locations, not counting retail, around the world. The products are primarily manufactured in Asia with sales distribution happening in both Europe and United States.

My teams out of the U.S. manage our global footprint, and we are the sole source of IT support globally from here.

Gardner: Let’s go to Suzan. Suzan, tell me a little bit about the pace at which you were able to embark on this virtualization journey. I saw some statistics that you went from 25 percent to 75 percent in about eight months, which was really impressive, and as Michael pointed out, now over 90 percent. How did you achieve that pace, and what was important in keeping it going?

Frye: The only way we could do it was with virtualization and using the efficiencies we gained with that. We centrally manage all of IT and engineering globally out of our headquarters in Portland. When we were given the initial project to move our data center and not only move our data center but provide DR services as well, it was a really easy sell to the business.

We could go to the business and explain to them the benefits of virtualization and what it would mean for their application. They wouldn’t have to rebuild and they wouldn’t have to bring in the vendor or any consultants. We can just take their systems, virtualize them, move them to our new data center, and then provide that automatic DR with Site Recovery Manager (SRM).

We had nine months to move our data center, and we were basically all hands on deck: everybody on the server engineering team, plus the storage and networking teams as well. And we had executive support and sponsorship. It was very easy for us to market virtualization to the business and start down that path where we were socializing the idea. A lot of people, of course, were dragging their feet a little bit. We all know that story.

But once they realized that we could move their application, bring it back up, and then move it between data centers almost seamlessly, it was an instant win for us. We went from that 20 percent to 30 percent virtualization. We had about 75 percent when we were in the middle of our DR project, and today we’re actually at around 93 percent.

Gardner: One of the things I hear a lot from people that are doing multiple things with virtualization, like you did, is where to start, how to do this in the right order? Is there anything that you could come back with from your experience on how to do it in the order that incentivizes people to adopt, as you pointed out, but then also allows you to move into these other benefits in a way that compounds the return on investment (ROI)?

Frye: I think it surprises people that we have a "virtualize first" strategy today. Now it’s assumed that your system will be virtual, with all the benefits that come with it: the flexibility, the portability, the optimization, and the efficiencies.

But like most companies, we had to start with some of our lower tier or lower service-level agreement (SLA) systems, our development systems, and start working with the business on getting them to understand some of the benefits that they could gain by working with virtual systems.

Performance is there

Again, people are always surprised. Do you have SQL virtualized? Do you have SAP virtualized? And the answer is yes, today we do, and the performance is there, the optimization is there, and that flexibility is there.

If you’re just starting out today, my advice would be to go ahead and start small. Give the business what they want, do it right, and give it the resources it needs. Under-promise and over-deliver, and let the business start seeing the efficiencies that they can realize, and some of those hidden efficiencies as well.

We can support DR testing. We can support almost instant data refreshes, cloning, and snapping, so their upgrades are more seamless, and they have an easier back-out plan.

From an engineering and development perspective, we're giving them technologies that they could only dream of four or five years ago. And it’s really benefited the business in that we’re auto-provisioning. We’re provisioning in minutes versus days. We’re granting resources when needed.

It’s a more dynamic process for the business, and we’re really seeing that people are saying, "You’re not just a cost center anymore. You’re enabling us, you’re helping us to do what we need to do and basically doing it on-demand." So our team has really started shining these last few years, especially because of our high virtualization percentage.
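The "minutes versus days" provisioning Frye mentions is essentially template cloning. Here is a toy sketch of the pattern, deliberately generic rather than any specific vSphere API:

```python
import copy

# A golden template: clone it and override only what each request needs.
TEMPLATE = {"cpus": 2, "ram_gb": 8, "disks": ["os"], "apps": []}

def provision(name: str, **overrides) -> dict:
    """Clone the template and apply per-request settings; minutes, not days."""
    vm = copy.deepcopy(TEMPLATE)
    vm["name"] = name
    vm.update(overrides)
    return vm

dev = provision("finance-dev-01", ram_gb=32, apps=["erp-test"])
print(dev)
```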

Leeper: For a company that's looking to move to this virtualization space, they’ve got to get some wins. You’ve got to tackle some environments or some projects that you can be successful at, and hopefully by partnering with some business users and business owners who are willing to take a little bit of a chance.

If you set off trying to truly attack an entire data center virtualization project, you’re probably not going to be really successful at it. There are a lot of ways that the business, application vendors, and various things can throw some roadblocks in this.

Once you start chipping away at a couple of them and get beyond the easy stuff, go find one that maybe on paper is a little difficult, and get that one done. Then you can very quickly point back to success on that piece and start working your way through the rest of them.

Gardner: Yeah, one of those roadblocks that you mentioned I've heard people refer to is issues around licensing and tracking and audits. How did you deal with that? Was that an issue for you when you got into moving onto a virtualized environment?

Leeper: Sure. It’s one of the first things that always comes up. I'm going to separate VMware licensing from application licensing. On the application side of the house, it’s getting better today than it was two or three years ago, when we started this process.

Be confident

You have to be confident in your ability to deal with vendors and demand support on virtualization layers, work with them to help them understand their virtual licensing packages, and be very confident in your ability to get there.

Early on, we had to look some vendors straight in the eye and tell them we were going to do this, because it was the best thing for our business, and they needed to figure out how to support us. In some cases, that's just having your team, when they call support, not have to open with "We’re running this on a VM."

We know we can replicate and then duplicate things in the background when we need to, but sometimes you just have to be smart about how you engage application partners that may not be quite as advanced as we are and work through that.

On the VMware side, it came down to their understanding where our needs were, how to properly license some of the stuff, and working through some of those complexities. But it wasn't anything we spent a significant amount of time on.

Gardner: You both mentioned the importance of getting buy-in on the business side and showing wins early, that sort of thing. Because it’s often hard to draw a concrete connection between something that happens in IT and a business benefit, was there anything specific that benefited your business that you could then turn around and say, "Well, that’s because we did X, Y, and Z with virtualization"?

Leeper: One of the cool ones we’ve talked about and used as one of our key wins involves our entire architecture, with virtualization obviously being key to that.

We had a business unit acquire an SAP module, specifically the BPC for BW module. That was independent of our overall SAP project, and it was being run out of a separate business group.

They came to IT in the very late stages of this purchase and said, "These are our needs and requirements," and it was a fairly intense set of equipment. It was multiple servers, multiple environments, kind of up and down the stack, and they were bringing in outside consultants to help them with their implementation.

The interesting thing was, they had spec'd their statement of work (SOW) with these consultants to not start for 4 to 6 weeks, because they really believed that's how long it was going to take IT to get them their environments and their hardware, based on their old understanding of IT’s capabilities.

The reality was that we could provide the test and development environments they needed to start with these consultants within a matter of hours, not weeks, and we did so. I had the pleasure of calling the finance VP and informing him that his environments were ready and were probably just going to sit idle for the next 4-6 weeks until the consultants actually showed up, which surprised all sorts of people.

Add things later


We didn't have all their production capacities, but those are things we could add later. They didn’t need production capacity in the first month of the project anyway. So our ability to have that virtualized infrastructure and be able to rapidly deploy to meet business requirements is one of the really kind of cool things we can do these days.

Gardner: Suzan, you’ve mentioned that as an enabler, not a roadblock. So being able to keep up with the speed of business, I suppose, is the best way to characterize this?

Frye: Absolutely. Going back to SRM, another big win for us came as we were rolling out some of our Tier 1 mission-critical applications, and the business decided they wanted to test DR. They were going down the path of doing that the old-fashioned way, by backing up and restoring databases, which takes days and weeks.

We said, "We think we have a better way with SRM and our replication technologies. We have that data here. Why don't you let us clone that data and stand it up for you?" Literally, within 10 seconds, they had a replica of their data.

So we were enabling them to do their DR testing with SRM, on demand, when they wanted to do that, as well as giving them the benefit of doing the faster cloning and data refreshes. That was just a day-to-day, operational activity that they had no idea we could do for them.
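Conceptually, that non-disruptive DR test looks like the sketch below, in the spirit of SRM's test mode. The function names are assumptions, not VMware's actual SDK calls; the key property is that production replication never stops:

```python
def clone_replica(dataset: str) -> str:
    """Writable clone of already-replicated data: near-instant, space-efficient."""
    return f"{dataset}-test-clone"

def dr_test(app: str, dataset: str) -> None:
    """Boot the app against a clone on an isolated network, validate, discard."""
    clone = clone_replica(dataset)  # production replication continues untouched
    print(f"booting {app} against {clone} on an isolated test network")
    print("validate the application, then discard the clone; no tape restore")

dr_test("tier-1-finance", "finance-db-replica")
```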

It goes back to working with the business and letting them know what you can do. From a day-to-day, practical perspective, that was one of our biggest wins: going to specific business units and application owners and saying, "We think we have a better way. What do you think about this?" Once they got their hands on it, just looking at their faces was really a good moment for us.

Gardner: Sure, and of course, as an online retailer, having that dependability that DR provides has to be something that lets you sleep a little better at night.

Frye: Just a little bit.

Gardner: Let's talk a little bit about where you go now. Another thing that I often hear in the market is that the benefits of virtualization are ongoing. It's a journey that keeps providing milestones. It doesn't really end.

Do you have any plans around private cloud perhaps, getting more elasticity and fit-for-purpose benefits out of your implementations? Perhaps you're looking to bring other applications into the fold, or maybe you’ve got some other plans around delivering on business applications at lower cost.

So where do you go next with your virtualization payoff?

Private cloud

Leeper: We consider ourselves to have a private cloud on-site. My team will probably start laughing at me for using that term, but we do believe we have a very flexible and dynamic environment to deploy from, based on business requests, on our premises, and we're pretty proud of that. It works pretty well for us.

Where we go next is all over the place. One of the things we're pretty happy about is the fact that we can think about things a little differently now than probably a lot of our peers, because of how migratory our workloads can be, given the virtualization.

We started looking into things like hybrid cloud approaches and the idea of maybe moving some of our workloads out of our premises, our own data facilities, to a cloud provider somewhere else.

For us, that's not necessarily the discussion around classic public cloud strategies for scalability and some of those things. For us, it's about temporary space at times. If we are, say, moving an office where we have physical equipment on-premises, we want to be able to provide zero downtime.

It would be nice to be able to shut down their physical equipment, move their data, move their workloads up to a temporary spot for four or five weeks, and then bring it all back at some point, and let users never see an outage while they are working from home or on the road.

There are some interesting scenarios around significant DR for us in locations where we don't have real-time DR set up. For instance, we were looking into some issues in Japan a year or so ago, when the country was unfortunately dealing with the earthquake and the tsunami's effect on power.

We were looking at how we can possibly move our data out of the country for a period of time, while the infrastructure was stabilizing, specifically power, and then maybe bring it back when things settle down again.

Unfortunately, we weren't quite virtual at the edge there yet, but today we think that's something we could do. Thinking about how and where we move data, so that it's in the right place at the right time, is where we think the next big win is for us.

Then, we get into the application profiles that users are asking for and their ability to spin up environments very quickly to just test something. It gets us out of having IT be the roadblock to innovation. A lot of times, the business or part of our innovation teams come up with some idea on a concept, an application, or whatever it is. They don't have to wait for IT to fulfill their needs. The environments are right there for them.

So I challenge the teams routinely to think a little bit differently about how we've done things in the past, because our architecture is dramatically different than it was even two years ago.

Gardner: Well, great. We have to leave it there. We've been talking about how outerwear and sportswear maker Columbia Sportswear has used virtualization technologies and models to improve its business operations. We’ve also seen how better systems make for better applications that can deliver better business results.

So I’d like to thank our guests for joining this BriefingsDirect podcast. We have been here with Michael Leeper, Senior Manager of IT Engineering at Columbia Sportswear in Portland, Oregon. Thank you so much, Michael.

Leeper: Thank you.

Gardner: And we have been joined by Suzan Frye, Manager of Systems Engineering, also there at Columbia Sportswear. Thanks to you, Suzan.

Frye: Thanks, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks to our audience for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a sponsored BriefingsDirect podcast on how Columbia Sportswear has harnessed virtualization to provide a host of benefits for its business units. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Tuesday, July 03, 2012

Roundtable: Revlon, SAP and VMware Describe Accretive Benefits from Aggressive Adoption of Cloud Computing

Transcript of a sponsored podcast on how cloud and virtualization deliver benefits in cost, efficiency, and agility.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today we present a sponsored podcast discussion focused on two prime examples of organizations that have gleaned huge benefits from high degrees of virtualization and aggressive cloud computing adoption.

We're joined by executives from Revlon and SAP, who recently participated in a VMware-organized media roundtable event in San Francisco. The event, attended by industry analysts and journalists, demonstrated how mission-critical applications supported by advanced virtualization strategies are transforming businesses. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

We're going to learn more about the full implications of IT virtualization, and how they're being realized -- from bringing speed to business requests, to enhancing security, to strategic disaster recovery (DR), and to unprecedented agility in creating and exploiting applications and data delivery value.

With that, please join me now in welcoming our guests, David Giambruno, Senior Vice President and CIO of Revlon. Welcome back, David.

David Giambruno: Thanks a lot, Dana.

Gardner: We're also here with Heinz Roggenkemper, Executive Vice President of Development at SAP Labs. Welcome Heinz.

Heinz Roggenkemper: Welcome, Dana.

Gardner: Heinz, let me begin with you, if you don’t mind. Describe for our listeners your internal cloud approach that you've been using to make training and development applications readily available. What's going on with that internal cloud, and why is the speed and agility so important for you?

Roggenkemper: If you look at SAP, you find literally thousands of development systems. You find a lot of training systems. You find systems that support sales activities for pre-sales. You find systems that support our consulting organization in developing customer solutions.

From a developer's perspective, the first order of business is to get access to a system fast. Developers, by themselves, don’t care that much about cost. They want the system and they want it now. For development managers and management in general, it’s a different story.

For training, it's important that the systems are reliable and available. Of course again for management, it's the cost perspective. For people in custom development, they need the right system quickly to build up the correct environment for the particular project that they're working on.

Better supported

Also, these requirements are much better supported in the virtualized environment than they were before. We can give them the systems quickly. We can give them the systems reliably. We can give them the systems with good performance and, from a corporate perspective, do it at a much better cost than we did before.

Our business agility and ability to respond to market drivers is greatly improved by this.

Gardner: One of the things that was intriguing to me was the training instance, where people were coming in and needed a full stack of SAP applications, perhaps third-party applications that were mission critical. Tell me how the training application in particular, or the use of virtualization in that instance, demonstrates some of the more productive aspects of cloud?

Roggenkemper: The most interesting part is that you don't need a vanilla system, but a system that is prepared for a particular class, with the correct set of data. You need a system that can be reset to a controlled state very quickly after the end of a training class, so that it's ready for the next one.

So there are two aspects to it. One is the reliable infrastructure on which the systems run, and the second is getting the correct system for that particular class ready in a short period of time.
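
To make that second aspect concrete, here is a minimal sketch in Python of the reset pattern Roggenkemper describes: revert a training VM to a snapshot prepared with the correct class data, then power it back on. The helper and VM names are hypothetical stand-ins for a real hypervisor snapshot API, not SAP's actual tooling.

```python
# A minimal sketch of resetting training systems between classes by reverting
# to a prepared snapshot. Helper and VM names are hypothetical placeholders.

PREPARED_SNAPSHOT = "class-baseline"  # taken with the class's dataset loaded


def revert_to_snapshot(vm_name: str, snapshot_name: str) -> None:
    # In a real environment this would call the hypervisor's snapshot API.
    print(f"reverting {vm_name} to snapshot '{snapshot_name}'")


def power_on(vm_name: str) -> None:
    print(f"powering on {vm_name}")


def reset_training_system(vm_name: str) -> None:
    # Minutes instead of a rebuild: the system comes back with the correct
    # data for the next class already in place.
    revert_to_snapshot(vm_name, PREPARED_SNAPSHOT)
    power_on(vm_name)


for vm in ("training-erp-01", "training-erp-02"):
    reset_training_system(vm)
```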

Gardner: On the issues of control of the data, security, and even licensing, are there unintended consequences or unintended benefits that come when you approach the delivery of these applications through the full virtualization and this cloud model?

Roggenkemper: For unintended benefits, the thing that comes to my mind is that it allows us to take advantage of new computing infrastructure more quickly. We reduce the use of power, which is always a good thing.

For an unintended downside, the only thing that comes to mind is performance tuning during development, which is a slightly different thing. In some areas, where you run a couple of tuning iterations to identify your hotspots, and where it's a highly critical component, you might have to go to dedicated hardware to get the last few percentage points.

So in that area, you have to behave differently, but it affects only a small window of your total development time. Most of the time, you still take full advantage of a virtualized environment. Once you go into tuning, then you move the system to dedicated hardware and do your job there. If you average it out, you still have a substantial advantage.

Gardner: This idea of agility in producing these applications, with their full data and production-ready, even in a training and development environment where you're not necessarily customer-facing, proves this concept of IT as a service. Do you see it that way, and if so, is it something you're going to bring to other applications within SAP?

Roggenkemper: Absolutely. And obviously, what we use internally benefits our customers as well. To have these systems available in a much shorter period of time for the customer’s development environment is as important for them as it is for us.

Future plans


Gardner: And a question about future plans. It sounds as if this works for you. Then the virtual desktop infrastructure (VDI) approach of delivering entire client environments with apps, data, and full configuration would be a natural progression. Is that something that you're looking at or perhaps you're already doing?

Roggenkemper: Some things we're already doing. We have a hefty set of terminal servers in our environment as well, which people take full advantage of, especially when they're on the road or working from home.

Gardner: David, let's go to you. I was very interested to hear today your version of IT as a service, really a vision that you painted. Essentially you're saying that advances in pervasive virtualization and cloud methods are transforming how IT operates, and giving you the ability, as you said, to say yes when your business leaders come calling. What have you been able to say yes to that exemplifies this shift in IT?

Giambruno: I can equate that to numbers. We've increased our project throughput over the past couple of years by 300 percent. My job is to say "yes." I'm just here to help. I'm a service, and services are supposed to deliver. What this cloud ecosystem has delivered for us is the ability to say yes and get more done faster, better, and cheaper.

The correlating effect is that we've seen not only this massive increase in our ability to deliver projects for the business, because that's really what business alignment is: I do what they want and I give them some counsel along the way.

The second piece is that we've seen a 70 percent reduction in the time it takes us to deliver applications, because we have all of these applications available to us in the test and development site, which is part of our DR.

So this ability to move massive amounts of information, where everything is just a file, bring it up, and let our development teams at it, has added this whole speed, accuracy, and ability to deliver back to the business.

Gardner: So we've heard that SAP, a very big global provider of business applications for all sorts of companies, has an internal cloud that it's using for specific training and development activities.

But Revlon is also a global company. For our listeners who might not be familiar, tell us a little bit about your role, the extent to which your applications are being used, and the type of mission-critical activities that you're involved in.

Giambruno: It’s probably easier to quantify it this way. We have 531 applications running on our internal cloud. Our internal cloud makes roughly 15,000 automated application moves a month. Our transaction rate is roughly 14,000 transactions a second. Our data change rate is between 17 and 30 terabytes a week. Over 90 percent of our corporate workload sits on our internal cloud, and it runs most of our footprint globally.

Gardner: We're talking about mission-critical apps here -- ERP, manufacturing, warehousing, business intelligence. Did you start with mission-critical apps or did you end up there? How did you progress?

Trust, but verify


Giambruno: I have a couple of "isms" that I live by. The first one is "Crawl, Walk, Run" and the second one is "Trust, but Verify." When we started our journey roughly five years ago, we started with "Crawl," very much "Crawl," and "Trust, but Verify." At Revlon, we didn't spend any more to put this in. We changed how we spent our money.

We were going through a server refresh, and instead of buying all the servers, we only bought roughly 20 percent. With the balance of that money, we bought the VMware licenses. We started putting in our storage area network (SAN) and all the core component pieces, and we took some of our low-hanging-fruit file systems and started moving those over.

As we did that, we started sharing with the business. We showed them what we were doing and that it still worked. Then we started the "Walk" phase of putting applications on it. We actually ran north of six nines of availability.

System availability went up. Performance went up. And after this "Crawl, Walk, Run" and "Trust, but Verify," it became "Just Keep Going." We accelerated the whole process, and we have these things that we call "fuzzies," things we can do for the business that they weren't expecting. Every couple of months, we would start delivering new capabilities.

One of the big things that we did was that we internalized all our DR. We kept taking external money that we were spending and were able to give it back to the business and essentially invest in ourselves, because at Revlon I'm not going to be a profit center.

For Revlon, growth really comes from giving R&D more money to develop new products for our consumers, and giving marketing more money to tell that product story, get it out to our channels, and use the media to talk about our glamorous products.

What we've done is focus on those things, taking the complexity out while delivering capability to the business, and either avoiding or saving money that the business can now use to grow.

Gardner: So you've been able to say yes when they come and ask you for new services and capabilities. You've been able to keep your costs at or below the previous levels. That’s pretty impressive. Do you credit that to virtualization, to cloud, to the entire modernization? How do you describe it?

Giambruno: To me, it's the interaction of the entire ecosystem. It is a system, and virtualization is a huge part of that. That's where it all started. As you look through the transition, it's been really interesting. I'm going to segue back to the saying-yes piece and what it's allowed us to be.

We have this thing called Oneness. I always talk about being the Southwest [Airlines] of computing, and I live inside a very simple triangle. The triangle has three sides, obviously. One side is our application inventory, another is our infrastructure capabilities, and the third is my skill sets.

Saying yes

If you're inside that space, I can say yes very quickly. What's happened inside that space has helped us contain cost. When we first started, our ratio was one physical server to seven virtual. A couple of years later, we're at 1:35, roughly a fivefold increase in capacity without any commensurate cost. I give credit to my team for owning the technology and wielding it for the benefit of the business, to get the most out of it.

The frame of reference that keeps us grounded is that we make lipstick. It's really about how much money we can save and how well we can wield that technology to deliver value and do more with less. That's what enables our company to grow.

We love simplicity and we have this Southwest computing model of taking a very complex ecosystem and making it simple to use. To a large degree it's kind of like an iPad, where the business wants to touch it, but they don’t care what’s going on underneath.

It's our job to deliver that experience and capability back to the business without them having to think about it. I just want them to ask; we're here to help, and we can figure out a way to deliver it, and keep exercising our technical capabilities to wield the technology to do more.

Gardner: I'm intrigued by this notion of the ecosystem being a whole greater than the sum of the parts. One of the things that you've been able to do, in addition to saying yes and keep your costs in line, is to improve your data and manage your data lifecycle, according to what I heard today.

Tell me about this notion you mentioned of all the data becoming structured. What are some of the upsides on the data side, when it comes to this ecosystem approach?

Giambruno: When you were talking to Heinz, you talked about unintended consequences. One of the things we had was a big gestalt moment after our cloud went live: we literally had all of our data in one place.

One of the big challenges historically was that we had all these applications geographically dispersed. The ability to touch them, feel them, and get at them, and the access controls, all of these things were monumentally challenging. At Revlon, as we went to the Southwest or Oneness model, we organized our access controls and all those little things globally.

So when we had all this data and all these applications sitting in one place, with the ability to look at them and understand them, we started a fairly big effort around our master data model. We're structuring our data on the way in, so when we query the data, we already know where it is, what it does, and its relationships, instead of trying to mine through unstructured data and make reasoning out of it. It's become this big data structure.

I'd say we "chewed glass." We spent a couple of years chewing glass, structuring all this data, because the change rate is so big, but there's value in information to the business. I joke that, in case you've missed it, we're in the information age. How well we can wield our information and give our leadership team information to act on is a differentiator. The ability to do this big data and master data model work has really been, as we see it, the golden egg going forward, the thing that can really make a difference with the business.
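
As a hedged illustration of "structuring data on the way in," here is a small Python sketch, not Revlon's actual system: each incoming record is conformed to an assumed toy master schema at ingest, so later queries run against known fields and relationships rather than mining unstructured data.

```python
# Hedged illustration of structuring data at ingest against a master schema.
# The schema and records are invented toy examples, not Revlon's data model.

MASTER_SCHEMA = {"sku": str, "region": str, "units": int}


def conform(raw: dict) -> dict:
    # Validate and coerce every field on the way in, so downstream queries
    # always see the same structure and types.
    record = {}
    for field, ftype in MASTER_SCHEMA.items():
        if field not in raw:
            raise ValueError(f"missing master field: {field}")
        record[field] = ftype(raw[field])
    return record


incoming = [{"sku": "LIP-001", "region": "LATAM", "units": "1200"}]
structured = [conform(r) for r in incoming]
print(structured)  # [{'sku': 'LIP-001', 'region': 'LATAM', 'units': 1200}]
```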

Gardner: While we’re on this notion of unintended consequences and unintended benefits, does anything along the lines of security or licensing also come to mind?

Self selection


Giambruno: From a licensing perspective, along the journey we called it self-selection. Licensing is important. Everybody has to make money; we live in capitalism. From a procurement perspective, we always want to make sure we're legal, but at the same time, vendors will self-select, depending on how their licensing model fits the virtualization world. That's our triangle; that's our infrastructure. Through that, we've had to manage relationships, and we've done that.

From a security model, the structuring of all of our infrastructure, putting it in the Southwest model of computing, this Oneness, and organizing our data and our access controls has greatly simplified security; all of that is completely ubiquitous. We even did some crazy things: we restructured the IP addressing of everything in Revlon to make all of our IP blocks contiguous. So when we move things around the world inside our cloud, we move entire blocks of IP addresses.
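
The contiguous-block idea can be sketched with Python's standard ipaddress module. This is a toy illustration under assumed RFC 1918 addressing, not Revlon's actual scheme: each site gets one contiguous block out of a corporate supernet, so a whole site's addressing can be re-homed as a unit.

```python
# Toy sketch of carving a corporate supernet into contiguous per-site blocks,
# so an entire site's address block can move between data centers as a unit.
# Site names and the supernet are assumptions for illustration.
import ipaddress

CORPORATE_SUPERNET = ipaddress.ip_network("10.0.0.0/8")

sites = ["new-jersey", "venezuela", "hong-kong", "london"]
site_blocks = dict(zip(sites, CORPORATE_SUPERNET.subnets(new_prefix=16)))

for site, block in site_blocks.items():
    print(f"{site:12s} -> {block}")

# In a failover, the whole block is announced from the surviving site;
# no per-host renumbering is needed because the block is contiguous.
```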

Looking forward, one of the interesting things I find is that, as you move to streaming applications, there's a huge security paradigm shift. Essentially, no data will ever leave my data center and sit on a device.

That's my five-year goal. I think I can do it in 24 months, but as a realistic horizon, it's more like five years. At that point, I can literally encrypt my data center. Think about PCI and HIPAA and all the controls around them. Encryption is one of those big first checkmarks. If you can do that, you solve a lot of your compliance challenges.

Second, you have this trusted computing model, where I know the person from an access-control standpoint, I know the device, and I know what that person is supposed to have access to. I've encrypted my entire data center, so when that person comes in, I can let them have access only to what they're supposed to have, in the context they're supposed to have it, and decrypt it on the way out. They're only viewing a device, and no data ever lives on a device.
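
Here is a hedged sketch of that trusted-access check, with every registry and entitlement invented for illustration. It shows the shape of the model (known user, enrolled device, entitlement in context, decryption only on the way out), not any real product's API.

```python
# Hedged sketch of the trusted-access model: identity, device, and contextual
# entitlement are checked before anything is decrypted on the way out.
# All registries below are invented placeholders for illustration.

KNOWN_USERS = {"jdoe"}
ENROLLED_DEVICES = {"ipad-0042"}
ENTITLEMENTS = {("jdoe", "sales-dashboard", "office")}  # (user, resource, context)
ENCRYPTED_STORE = {"sales-dashboard": b"<ciphertext>"}


def decrypt_for_viewing(resource: str) -> bytes:
    # Stand-in for real decryption; only a rendered view leaves the data center.
    return ENCRYPTED_STORE[resource].replace(b"ciphertext", b"plaintext")


def serve(user: str, device: str, resource: str, context: str) -> bytes:
    if user not in KNOWN_USERS or device not in ENROLLED_DEVICES:
        raise PermissionError("unknown user or device")
    if (user, resource, context) not in ENTITLEMENTS:
        raise PermissionError("not entitled in this context")
    return decrypt_for_viewing(resource)


print(serve("jdoe", "ipad-0042", "sales-dashboard", "office"))
```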

So, bring your own device; I wouldn't care, because there are almost no security concerns at that point. I've encrypted, and I know the user. Going one step further, as companies progress, you're going to look at these internal marketplaces that everyone is going to build.

What the iPad has done is make it so you just want to turn it on and click the app that gives you the information to do your job: your workflow, your exception management, the information you need for the day or for your planning, whatever you need to do. And people want that information in context.

Roll the tape forward a couple of years, and with the capabilities coming out of VMware, we fully expect to adopt that model. That's what we're pushing for.

Gardner: It’s fascinating hearing you talking about large-scale virtualization and internal cloud. This has allowed you to have a much better grasp over your costs and deliver your apps and services readily, so that you can say yes to your business users.

In addition, you're getting master data management (MDM) benefits. You’re getting a better handle on licensing. You’re seeing great improvements in security now, and perhaps more to come, as you stream apps to a more virtualized client model.

Symbiotic relationship

You also mentioned something about DR that piqued my interest. It sounds almost as if there's a symbiotic, positive relationship between high levels of virtualization and DR. It almost sounds like DR has become the ability to move entire data centers as fungible assets, and that gives you a lot more capability, in addition to being able to recover.

Is that true? Tell me how this DR plays into this larger set of values.

Giambruno: We've actually done this. No one was hurt, but last year our factory in Venezuela burned. It was on a Sunday afternoon, and we had what we call a "drib" there. If you look at VMware's architecture, they have a "data center in a box"; I always joke that we're years ahead of them on that. We use dribs, strategically placed throughout the world, where we push capacity for our cloud. They largely run dark.

So our drib "phoned home" that it was getting hot, and we were notified that the building was on fire. The whole thing took us an hour and 45 minutes, and most of that time was spent finding one of my global storage guys, who was at the beach. We found Ben and got him to do his part, which was to tell the cloud to move from Venezuela to our disaster site in New Jersey.

So we joke that our DR model is that we just copy everything. We don't even think about tiering or anything. Sometimes a Casio is just better than a Rolex. Simplicity rules, and not having to think about it ensures that we have all the data available. Again, it goes back to our cloud and virtualization: everything is just a file. We copy the deltas all the time. We never stop.

For us, everything was available in less than 15 minutes. We went in, broke the synchronization, made sure everything was up to date, and told our F5s and our Infoblox appliances that Venezuela was now New Jersey. Everything swung over, we brought everything in, and we contacted the business units to test and verify everything.
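
The failover Giambruno walks through has a simple runbook shape: break the continuous delta replication, promote the DR copies, repoint global traffic, then have the business verify. The Python sketch below is a hypothetical rendering of that sequence; every helper is a placeholder, and in practice each step would drive the storage, F5, and DNS/IPAM APIs directly.

```python
# Hypothetical rendering of the failover sequence described above.
# Each helper is a placeholder for a real storage, GSLB, or DNS/IPAM call.

FAILED_SITE = "venezuela"
DR_SITE = "new-jersey"


def break_replication(src: str, dst: str) -> None:
    print(f"splitting continuous delta replication {src} -> {dst}")


def promote_replicas(site: str) -> None:
    print(f"promoting replica copies at {site} to read-write")


def repoint_traffic(old: str, new: str) -> None:
    # The step where the load balancers and DNS are told
    # "Venezuela is now New Jersey."
    print(f"updating GSLB and DNS so {old} resolves to {new}")


def verify_with_business(site: str) -> None:
    print(f"business units test and verify applications at {site}")


break_replication(FAILED_SITE, DR_SITE)
promote_replicas(DR_SITE)
repoint_traffic(FAILED_SITE, DR_SITE)
verify_with_business(DR_SITE)
```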

Then we brought up all the virtual desktops, and we used Riverbed's mobile clients. We e-mailed the client to everyone, so people either worked from home, or we had some very good partners who gave us office space where people could use the computers there. They loaded the Riverbed mobile client on those machines, brought up the virtual desktops, and went to work. The business didn't go away.

Gardner: So you were able to say yes, even when a factory burned to the ground. That's pretty impressive.

Giambruno: This is a real-world example of how you can do it, and it wasn't a lot of effort. It's this whole idea of simplicity, where you're just not putting the complexity into the system. I always go back to this iPad view of the world, where the business just wants to know what's available and we will do the rest underneath.

This high degree of virtualization lets us move all of this data around the world, for DR, for development, and for a myriad of other uses. We keep finding new ways to use this capability.

Gardner: I suppose it elevates the concept of fit for purpose to that data-center level?

Redundancy and expense

Giambruno: Correct. And some of the other unintended consequences are interesting. You talk about redundancy and expense. "Two is one, and one is none" in a data center. Do you really need to be fully redundant, when, if something happens, we'll just switch to the other data center?

Maybe I only need one core switch. You start to challenge all these old precepts of uptime, because it's almost cheaper for me to just roll the compute over to the other data center for a little while and get the failure fixed, given that I have a four-hour service-level agreement (SLA) with my vendors for repairs.

You can start to question a lot of the "old ways of doing things," or what was the standard, and figure out new ways to operate. One of the things I love about my job is that you can question yourself and figure out what you can do next.

Gardner: One last item that I suppose also fits into this unintended-positive-consequences category. You've mentioned something about supply-chain value: getting to the point where you can take your external cloud, push it out to your suppliers and contractors, and begin sharing with permissions and control. That seems a much better approach than the old way of virtual private networks (VPNs), with the headaches around access and so forth. Tell me about this extended business-process value that you're starting to explore.

Giambruno: One of the things we realized is that we could start extending our cloud. We spend a lot of time managing security and VPNs, and the audits that have to go around that.

If I could just push out a piece of my application, or make it available to them, they could update their data, and we could reduce the number of APIs, the number of connections, and all of that complexity out there, while extending our MDM.

Then we can interface our MDM through our cloud to do some of that translation for us. They can enter data, or we can take it from their systems at our cloud edge, securely and in context, and bring it back into our systems.

We think there are huge possibilities around automating and simplifying. But at the end of the day, it's about collaboration with our community of vendors and suppliers, and enabling them to interact with us easily.

So you're always trying to foster those relationships and get whatever synergies you can. If we make it easier for them to interact with us from a systems perspective, it just makes everybody happier. We've got some projects slated for deployment this year. Maybe in a year, if you come back, I can tell you how well we've done. But one of the things we're looking at is how we can really change how we operate as a company.

Gardner: That's fascinating. We've talked about a lot of efficiency: reducing your footprint in physical plant and energy, keeping your costs in line, and spinning up more applications and data. But now we're talking about not just efficiencies but actually doing things entirely differently, things that could not have been done before cloud. That, to me, is really the essence of what we'll be talking about over the next few years.

So, David, thanks so much for your time. We have to leave it there. You've been listening to a sponsored podcast discussion in conjunction with a VMware-organized media roundtable event in San Francisco.

We've been exploring two prime examples of organizations that have gained huge benefits from high degrees of virtualization and aggressive cloud computing adoption with mission-critical applications. The two organizations of course have been Revlon and SAP.

I'd like to thank our guests. We've been here with David Giambruno, Senior Vice President and CIO of Revlon. Thanks so much, David.

Giambruno: My pleasure.

Gardner: We have also been here with Heinz Roggenkemper, Executive Vice President of Development at SAP Labs.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks to our audience for joining, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a sponsored podcast on how cloud and virtualization deliver benefits in cost, efficiency, and agility. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.
