
Thursday, November 29, 2012

New Strategies Needed to Ensure Simpler, More Efficient Data Protection for Complex Enterprise Environments

Transcript of a BriefingsDirect podcast on new solutions to solve the growing need for more reliable and less cumbersome data backups, despite increasingly data-intensive environments.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Quest Software.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on enterprise backup, why it’s broken, and how to fix it. We'll examine some major areas where the backup of enterprise information and data protection are fragmented, complex, and inefficient. And then, we'll delve into some new approaches that help simplify the data-protection process, keep costs in check, and improve recovery confidence.

Here to share insights on how data protection became such a mess and how new techniques are being adopted to gain comprehensive and standard control over the data lifecycle is John Maxwell, Vice President of Product Management for Data Protection at Quest Software, now part of Dell. [Disclosure: Quest Software is a sponsor of BriefingsDirect podcasts.]

Welcome back to the show, John.

John Maxwell: Hey, Dana. It’s great to be here.

Gardner: We're also here with George Crump, Founder and Lead Analyst at Storage Switzerland, an analyst firm focused on the storage market. Welcome, George.

George Crump: Thanks for having me.

Gardner: John, let’s start with you. How did we get here? Why has something seemingly as straightforward as backup become so fragmented and disorganized?

Maxwell: Dana, I think it’s a perfect storm, to use an overused cliché. If you look back 20 years ago, we had heterogeneous environments, but they were much simpler. There were NetWare and UNIX, and there was this new thing called Windows. Virtualization didn’t even really exist. We backed up data to tape, and a lot of data was in terabytes, not petabytes.

Flash forward to 2012, and there’s more heterogeneity than ever. You have stalwart databases like Microsoft SQL Server and Oracle, but then you have new apps being built on MySQL. You now have virtualization, and, in fact, we're at the point this year where we're surpassing the 50 percent mark on the number of servers worldwide that are virtualized.

Now we're even starting to see people running multiple hypervisors, so it’s not even just one virtualization platform anymore, either. So the environment has gotten bigger, much bigger than we ever thought it could or would. We have numerous customers today that have data measured in petabytes, and we have a lot more applications to deal with.

And last, but not least, we now have more data that’s deemed mission critical, and by mission critical, I mean data that has to be recovered in less than an hour. Surveys 10 years ago showed that in a typical IT environment, 10 percent of the data was mission critical. Today, surveys show that it’s 50 percent and more.

Gardner: George, did John leave anything out? From your perspective, why is it different now?

Crump: A couple of things. I would dovetail into what he just mentioned about mission criticality. There are definitely more platforms, and that’s a challenge, but the expectation of the user is just higher. The term I use for it is IT is getting "Facebooked."

High expectations

I've had many IT guys say to me, "One of the common responses I get from my users is, 'My Facebook account is never down.'" So there is this really high expectation on availability, returning data, and things of that nature that probably isn’t really fair, but it’s reality.

One of the reasons that more data is getting classified as mission critical is just that the expectation that everything will be around forever is much higher.

The other thing that we forget sometimes is that the backup process, especially a network backup, probably unlike any other, stresses every single component in the infrastructure. You're pulling data off of a local storage device on a server, it's going through that server's CPU and memory, it's going down a network card, down a network cable, to a switch, to another card, into some sort of storage device, be it disk or tape.

So there are 15 things that happen in a backup and all 15 things have to go flawlessly. If one thing is broken, the backup fails, and, of course, it’s the IT guy’s fault. It’s just a complex environment, and I don’t know of another process that pushes on all aspects of the environment in one fell swoop like backup does.
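A quick back-of-the-envelope sketch makes the point about chained dependencies. The step count and per-step reliability below are illustrative assumptions, not figures from the discussion:

```python
# Back-of-the-envelope: if a backup traverses n independent components
# and each succeeds with probability p, the whole job succeeds with p**n.
def chain_success(p: float, n: int) -> float:
    """Probability that all n steps of a backup pipeline succeed."""
    return p ** n

# Even highly reliable individual steps compound quickly across 15 hops:
print(round(chain_success(0.99, 15), 3))   # a 99%-reliable step, 15 hops
print(round(chain_success(0.999, 15), 3))  # a 99.9%-reliable step, 15 hops
```

With 15 hops, even 99 percent reliability per step leaves roughly one backup in seven failing, which is why a single weak link so often shows up as a failed job.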

Gardner: So the stakes are higher, the expectations are higher, the scale and volume and heterogeneity are all increased. What does this mean, John, for those that are tasked with managing this, or trying to get a handle on it as a process, rather than a technology-by-technology approach, really looking at this at that life cycle? Has this now gone from being a technical problem to a management or process problem?

Maxwell: It's both, because there are two issues here. One, you expect today's storage administrator, or sysadmin, to be a database administrator (DBA), a VMware administrator, a UNIX sysadmin, and a Windows admin. That's a lot of responsibility, but that's the reality.

A lot of people think that they are going to have as deep a level of knowledge on how to recover a Windows server as they would an Oracle database. That's just not the case, and it's the same thing from a product perspective, from a technology perspective.

Is there really such a thing as a backup product, the Swiss Army knife, that does the best of everything? Probably not, because being the best of everything means different things to different accounts. It means one thing for the small to medium-size business (SMB), and it could mean something altogether different for the enterprise.

We've now gotten into a situation where we have the typical IT environment using multiple backup products that, in most cases, have nothing in common. They have a lot of hands in the pot trying to manage data protection and restore data, and it has become a tangled mess.

Gardner: Before we dive a little bit deeper into some of these major areas, I'd like to just visit another issue that’s very top of mind for many organizations, and that’s security, compliance, and business continuity types of issues, risk mitigation issues. George Crump, how important is that to consider, when you look at taking more of a comprehensive or a holistic view of this backup and data-protection issue?

Disclosure laws

Crump: It's a really critical issue, and there are two ramifications. Probably the one that strikes fear in the heart of every CEO on the planet is all the disclosure laws that exist now that say that, when you lose a customer’s data, you have to let him know. Unfortunately, probably the only effective way to do that is to let everybody know.

I'm sure everybody listening to this podcast has gotten more than one letter already this year saying their Social Security number has been exposed, things like that. I can think of three or four I've already gotten this year.

So there is the downside of legally having to admit you made a mistake, and then there is the legal requirements of retaining information in case of a lawsuit. The traditional thing was that if I got a discovery motion filed against me, I needed to be able to pull this information back, and that was one motivator. But the bigger motivator is having to disclose that we did lose data.

And there's a new one coming in. We're hearing about big data, analytics, and things like that. All of that is based on being able to access old information in some form, pull it back from something, and be able to analyze it.

That is leading many, many organizations to not delete anything. If you don't delete anything, how do you store it? A disk-only type of solution forever, as an example, is a pretty expensive solution. I know disk has gotten a lot cheaper, but forever, that’s a really long time to keep the lights on, so to speak.

Gardner: Let's look at this a bit more from the problem-solution perspective. John, you've gotten a little bit into this notion that we have multiple platforms, we have operating systems, hypervisors, application types, even appliances. What's the problem here and how do we start to develop a solution approach to it?

Maxwell: The problem is we need to step back, take inventory of what we've got, and choose the right solution to solve the problem at hand, whether you're an SMB or an enterprise.

But the biggest thing we have to address is, with the amount and complexity of the data, how can we make sysadmins, storage administrators, and DBAs productive, and how can we get them all on the same page? Why do each one of these roles in IT have to use different products?

George and I were talking earlier. One of the things that he brought up was that in a lot of companies, data is getting backed up over and over by the DBA, the VMware administrator, and the storage administrator, which is really inefficient. We have to look at a holistic approach, and that may not be one-size-fits-all. It may be choosing the right solutions, yet providing a centralized means for administration, reporting, monitoring, etc.

Gardner: George, you've been around for a while in this business, as have I, and there is a little bit of a déjà vu here, where we're bringing a system-of-record approach to a set of disparate technologies that were, at one time, best of breed and necessary, but are increasingly part of a more solution or process benefit.

So we understand the maturation process, but is there anything different and specific about backup that makes this even harder to move from that point solution, best of breed mentality, into more of a comprehensive process standardization approach?

Demands and requirements

Crump: It really ties into what John said. Every line of business is going to have its own demands and requirements. To expect not even a backup administrator, but an Oracle administrator that’s managing an Oracle database for a line of business, to understand the nuances of that business and how they want to keep things is a lot to ask.

To tie into what John said, when backup is broken, the default survival mechanism is to throw everything out, buy the latest enterprise solution, put the stake in the ground, and force everybody to centralize on that one item. That works to a degree, but in every project we've been involved with, there are always three or four exceptions. That means it really didn’t work. You didn't really centralize.

Then there are covert operations of backups happening, where people are backing up data and not telling anybody, because they still don't trust the enterprise application. Eventually, something new comes out. The most immediate example is virtualization, which spawned the birth of several different virtualized specific applications. So bringing all that back in again becomes very difficult.

I agree with John. What you need to do is give the users the tools they want. Users are too sophisticated now for you to say, "This is where we are going to back it up and you've got to live with it." They're just not going to put up with that anymore. It won't work.

So give them the tools that they want. Centralize the process, but not the actual software. I think that's really the way to go.

Gardner: So we recognize that one size fits all probably isn’t going to apply here. We're going to have multiple point solutions. That means integration at some level or multiple levels. That brings us to our next major topic. How do we integrate well without compounding the complexity and the problems set? John?

Maxwell: We've been working on this now for almost two years here at Quest, and now at Dell, and we are launching in November, something called NetVault XA. “XA” stands for Extended Architecture. We have a portfolio of very rich products that span the SMBs and the enterprise, with focus on virtual backup, heterogeneous backup, instantaneous snapshots and deep application recovery, and we’re keenly interested in leveraging those technologies for the DBAs and sysadmins in ways that make their lives easier and make sure they are more productive.

NetVault XA solves some really big issues. First of all, it unifies the user experience across products, and by user, I mean the sysadmin, the DBA, and the storage administrator, across products. The initial release of NetVault XA will support both our vRanger and NetVault Backup, as well as our NetVault SmartDisk product, and next year, we'll be adding even more of our products under NetVault XA as well.

So now we've provided a common means of administration. We have one UI. You don’t have to learn something different. Everyone can work on the same product, yet based on your login ID, you will have access to different things, whether it's data or capabilities, such as restoring an Oracle or SQL Server database, or restoring a virtual machine (VM).
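To illustrate the idea of one console with login-scoped capabilities, here is a minimal sketch. The role names and actions are invented for illustration, not NetVault XA's actual access model:

```python
# Hypothetical role-to-capability map: one shared console, but each
# login ID only sees the restore actions appropriate to its role.
ROLE_ACTIONS = {
    "dba": {"restore_oracle_db", "restore_sql_server_db"},
    "vmware_admin": {"restore_vm"},
    "storage_admin": {"restore_vm", "restore_oracle_db", "restore_sql_server_db"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in ROLE_ACTIONS.get(role, set())

print(can("dba", "restore_vm"))            # the DBA cannot restore VMs
print(can("storage_admin", "restore_vm"))  # the storage admin can
```

The point is that everyone shares one interface while the data and capabilities each person sees are filtered by who they are.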

That's a common UI. A lot of vendors right now have a lot of solutions, but they look like they're from three, four, or five different companies. We want to provide a singular user experience, but that's just really the icing on the cake with NetVault XA.

If we go down a little deeper into NetVault XA, once it's installed alongside vRanger, NetVault, or both, it's going to self-identify that vRanger or NetVault environment, and it's going to allow you to manage it the way you've already set it up.

New approach

We're really delivering a new approach here, one we think is going to be unique in the industry. That's the ability to logically group data and applications within lines of business.

You gave an example earlier of Oracle. Oracle is not an application. Oracle is a platform for applications, and sometimes applications span databases, file systems, and multiple servers. You need to be looking at that from a holistic level, meaning what makes up application A, what makes up application B, C, D, etc.?

Then, what are the service levels for those applications? How mission critical are they? Are they in that 50 percent of mission-critical data that we've seen from surveys, or are they data that, if restored from a week ago, wouldn't matter? But then, again, it's having one tool that everyone can use. So you now have a whole different user experience and you're taking a whole different approach to data protection.

Gardner: This is really interesting. I've seen a demo of this and I was very impressed. One of the things that jumped out at me was the fact that you're not just throwing a GUI overlay on a variety of products and calling it integration.

There really seems to be a drilling down into these technologies and surfacing information to such a degree that it strikes me as similar to what IT service management (ITSM) did for managing IT systems at a higher level. We're now bringing that to a discrete portion of IT, backup and recovery. Does that sound about right, George, or did I overstate it?

Crump: No, that's dead-on. The benefits of that type of architecture are going to be substantial. Imagine if you are the vRanger programmer, when all this started. Instead of having to write half of the backend, you could just plug into a framework that already existed and then focus most of your attention on the particular application or environment that you are going to protect.

You can be releasing the equivalent of vRanger 6 on vRanger 1, because you wouldn’t have to go write this backend that already existed. Also, if you think about it, you end up with a much more reliable software product, because now you're building on a library class that will have been well tested and proven.

Say you want to implement deduplication in a new version of the product or a new product. Instead of having to rewrite your own deduplication engine, just leverage the engine that's already there.

Gardner: John, it sounds a little bit like we're getting the best of both worlds, that is to say the ability to support a lot of point solutions, allowing the tools that the particular overseer of that technology wants to use, but bringing this now into the realm of policy.

It's something you can apply rules to, that you can bring into concert with other IT management approaches or tasks, and then gain better visibility into what is actually going on and then tweak. So amplify for me why this is standardization, but not at the cost of losing that Swiss Army knife approach to the right tool for the right problem?

One common means

Maxwell: First of all, by having one common means, whether you're a DBA, a sysadmin, a VMware administrator, or a storage administrator, this way you are all on the same page. You can have people all buying into one way of doing things, so we don't have this data being backed up two or three times.

But the other thing that you get, and this is a big issue now, is protecting multiple sites. When we talk about multiple sites, people sometimes assume we mean multiple data centers, but what about all those remote and branch offices? That right now is a big issue that we see customers running into.

The beauty of NetVault XA is I can now have various solutions implemented, whether it's vRanger running remotely or NetVault in a branch office, and I can be managing it. I can manage all aspects of it to make sure that those backups are running properly, or make sure replication is working properly. It could be halfway around the country or halfway around the world, and this way we have consistency.

Speaking of reporting, as you said earlier, what about a dashboard for management? One of our early users of NetVault XA is a large multinational company with 18 data centers and 250,000 servers. They have had to dedicate people to write service-level reports for their backups. Now, with NetVault XA, they can literally give their IT management, meaning their CIO and their CTOs, login IDs to NetVault XA, and they can see a dashboard that’s been color coded.

It can say, "Well, everything is green, so everything is protected," whether it's the Linux servers, Oracle databases, Exchange email, whatever the case. So by being able to reduce that level of complexity into a single pane of glass -- I know it's a cliché, but it really is -- it's really very powerful for large organizations and small.

Even if you have two or three locations and you're only 500 employees, wouldn't it be nice to have the ability to look at your backups, your replicas, and your snapshots, whether they're in the data center or in branch offices, and, whether you're a sysadmin, DBA, or storage administrator, to use one common interface and one common set of rules, so everyone is basically on the same page?

Gardner: Let's revisit the issue that George was talking about, eDiscovery, making sure that nothing falls through the cracks, because with Murphy’s Law rampant, that's going to be the thing that somebody is going to do eDiscovery on. It seems to me you're gaining some confidence, some sense of guarantees, that whatever service-level agreements (SLAs) and compliance regulatory issues are there, you can start to check these off and gain some automated assurance.

Help me better understand John why the NetVault XA has, for lack of a better word, some sort of a confidence benefit to it?

Maxwell: Well, the thing is that not only have we built things into NetVault XA, where it's going to do auto discovery of how you have vRanger and NetVault set up and other products down the road, but it's going to give you some visibility into your environment, like how many VMs are out there? Are all those VMs getting protected?

I was just at VMworld Barcelona a couple of weeks ago, and VMware has made it incredibly simple now to provision VMs and the associated storage. You've got people powering up and powering down VMs at will. How do you know that you're protecting them?

Dispersed operations

Also at an event this week in Europe, I ran into a user in an emerging country in Eastern Europe, and they have over 1,000 servers, most of which are not being protected. It's a very dispersed operation, and people can implement servers here and there, and they don't know what half the stuff is.

So it's having a means to take an inventory and ensure that the servers are being maintained, that everything is being protected, because next to your employees, your data is the most important asset that you have.

Data is everywhere now. It's in mobile devices. It certainly could be in cloud-based apps. That's one of the things that we didn't talk about. At Quest we use seven software-as-a-service (SaaS)-based applications, meaning big parts of the business run on them, whether it's our helpdesk systems or even Office 365. This is mission-critical corporate data that doesn't run in our own data center. How am I protecting that? Am I even cognizant of it?

The cloud has made things even more interesting, just as virtualization has made it more interesting over the past couple of years. With NetVault XA, we give you that one single pane of glass with which you can report, analyze, and manage all of your data.

Gardner: Do we have any instances where we have had users, beta customers perhaps, putting this to use, and do we have any metrics of success? What are they getting from it? It's great to have confidence, it's great to have a single view, but are they reducing expenses? Do they have a real measurement of how their complexity has been reduced? What are the tangibles, John?

Maxwell: Well, one of the tangibles is the example of the customer that has 18 data centers, because they have a finite-sized group that manage the backups. That team is not going to grow. So if they have to have two or three people in that team just working on writing reports, going out and looking manually at data, and creating their own custom reports, that's not a good use of their time.

Now, those people can do things that they should be doing, which is going out and making sure that data is being protected, going out and testing disaster recovery (DR) plans, and so forth. Some people were tasked with jobs that aren’t very much fun, and that’s now all been automated.

Now they can get down to brass tacks, which is ensuring that, for an enterprise with a quarter million servers, everything is protected and it's protected the way that people think they are going to be protected, meaning the service levels they have in place can be met.

We also have to remember that NetVault XA brings many benefits to our vRanger customer base as well. We have accounts with maybe one home office and two or three remote labs or remote sales offices. We've talked to a couple of vRanger customers who now implement vRanger remotely. In these shops, there is no storage administrator. It's the sysadmin, the VMware administrator, or the Windows administrator. So they didn't have the luxury, like the big accounts, of having people dedicated to that.

Now, this person can focus on ensuring that operating systems are maintained, working with end users. A lot of the tasks they were previously forced to do took up a lot of their time. Now, with NetVault XA, they can very quickly look at everything, give that health check that everything is okay, and control multiple locations of vRanger from one central console.

Mobile devices

Gardner: Just to be clear, John, this console is something you can view as a web interface, and I'm assuming therefore also through mobile devices. I'm going to guess that at some point, there will perhaps even be a more native application for some of the prominent mobile platforms.

Maxwell: It’s funny that you mentioned that. This is an HTML5-based application. So it's very new, very fresh, and very graphical. If you look at the UI, it was designed with tablets and laptops in mind. It's gotten to where you can do controls with your thumbs, assuming you're running this on a tablet.

In-house, and with early-support customers, you can log into this remotely via laptops or tablets. We even have some people using it on mobile phones, even though we're not quite there yet in terms of the form factor of how the screens lay out, but we will definitely be going that way. So a sysadmin or storage administrator can have at their fingertips the status of what's going on in the data-protection environment.

What's nice is because this is a thin client, a web UI, you can define user IDs not only for the sysadmins and DBAs and storage administrators, but like I said earlier, IT management.

So if your boss, or your boss’ boss, wants to dial in and see the health of things, how much data you’re protecting, how much data is being replicated, what data is being protected up in the cloud, which is on-prem, all of that sort of stuff, they can now have a dashboard approach to seeing it all. That’s going to make everyone more productive, and it's going to give them a better sense that this data is being protected, and they can sleep at night.

Gardner: George, we spoke earlier about these natural waves of maturation that have occurred throughout the history of IT. As you look at the landscape for data protection, backup, or storage, how impactful is this in that general maturation process? Is Quest, with its NetVault XA, taking a baby step here, or is this something that gets us a bit more into a fuller, mature outcome, when it comes to the process of data lifecycle?

Crump: Actually, it does two things. Number one, from the process perspective, it allows there to actually be a process. It's nice to talk about backup process and have a process for protection and a process to recover, but if you don’t have a way to manage and see all of your data protection assets, it's really just a lot of talk.

You can't run a process like we are talking about in today’s data center with virtualization and things like that off of an Excel spreadsheet. It's just not going to work. It's nowhere near dynamic enough. So number one, it enables the fact of having a conversation about process.

Number two, it brings flexibility. Because the only other way you could have had that conversation about process, as I said before, would be to throw everything out, pick one application, and suffer the consequences, which would be not ideal support for every single platform.

To sum it up, it's really an enabler to creating a real data-protection process or workflow.

Gardner: Okay. We're going to have to wrap it up pretty soon, but we've mentioned mobile access, and cloud. I wonder if there's anything else coming down the trend pike, if you will, that will make this even more important.

The economy

I come back to our economy. We're still not growing as fast as many people would like, and therefore companies are not just able to grow their top line. They have to look to increase their bottom line through efficiency and deduplication, finding redundancy, cutting down on storage, cutting down energy cost, and simplifying, or centralizing data centers into larger, more efficient, and therefore fewer facilities, etc.

Is there anything here, and I will open this up to both John and George, that we can look to in the future that strikes some of these issues around efficiency and productivity, or perhaps there are other trends that will make having a process approach to a data lifecycle and backup and recovery even more important?

Maxwell: Dana, you hit on something that's really near and dear to my heart, which is data deduplication. We have a very broad strategy. We offer our own software-based dedupe. We support every major hardware-based dedupe appliance out there, and we're now adding support for Dell's DR Series DR4000 dedupe appliances. But we're still very much committed to tape, and we're building initiatives based on storing data in the cloud and backing up, replicating, failing over, and so forth.

One of the things that we built into NetVault XA that's separate from the policy management and online monitoring is that we now have historical data. This is going to give you the ability to do some capacity management and capacity planning and see what the utilization is.

How much storage are your backups taking? What's the most optimum number of generations? Where are you keeping that data? Is some data being kept too long? Is some data not being kept long enough?
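As a rough illustration of the kind of capacity question being asked here, consider a toy retention model. All of the figures and the model itself (one full backup plus daily incrementals, shrunk by a dedupe ratio) are hypothetical, not drawn from NetVault XA:

```python
# Toy capacity estimate (all numbers hypothetical): how much backup
# storage do n retained generations consume, with deduplication applied?
def backup_capacity_tb(full_tb: float, generations: int,
                       change_rate: float, dedupe_ratio: float) -> float:
    """One full backup plus (generations - 1) incrementals, deduplicated."""
    raw = full_tb + full_tb * change_rate * (generations - 1)
    return raw / dedupe_ratio

# 100 TB of primary data, 30 daily generations, 5% daily change, 10:1 dedupe
print(round(backup_capacity_tb(100, 30, 0.05, 10), 1))  # 24.5 TB
```

Playing with the generation count in a model like this is exactly the "most optimum number of generations" question: each added generation costs storage, and historical utilization data tells you whether that cost is buying you anything.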

By offering a broad strategy that says we support a plethora of backup targets, whether it's tape, special-purpose backup appliances, software-based dedupe, or even the cloud, we're giving customers flexibility, because they have unique needs and they have different needs, based on service levels or budgets. We want to make them flexible, because, going back to our original discussion, one size doesn’t fit all.

Gardner: I think we can sum that up as just being more intelligent, being more empowered, and having the visibility into your data. Anything else, George, that we should consider as we think about the future, when it comes to these issues on backup and recovery and data integrity?

Crump: Just to tie in with what John said, we need flexibility that doesn’t add complexity. Almost everything we've done so far in the environment up to now, has added flexibility, but also, for every ounce of flexibility, it feels like we have added two ounces of complexity, and it's something we just can't afford to deal with. So that's really the key thing.

Looking forward, at least on the horizon, I don't see a big shift, something like virtualization that we need to be overly concerned with. What I do see is the virtual environment becoming more and more challenging, as we stack more and more VMs on it. The amount of I/O and the amount of data protection process that will surround every host is going to continue to increase. So the time is now to really get the bull by the horns and institute a process that will scale with the business long-term.

Gardner: Well, great. We've been enjoying a conversation, and you have been listening to a sponsored BriefingsDirect podcast on new approaches that help simplify the data-protection process and help keep cost in check, while also improving recovery confidence. We've seen how solving data protection complexity and availability can greatly help enterprises gain a comprehensive and standardized control approach to their data and that data’s lifecycle.

So I would like to thank our guests, John Maxwell, Vice President of Product Management for Data Protection at Quest. Thanks, John.

Maxwell: Thank you, Dana.

Gardner: And also George Crump, Lead Analyst at Storage Switzerland. Thank you, George.

Crump: Thanks for having me.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks to you, our audience, for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Quest Software.

Transcript of a BriefingsDirect podcast on new solutions to solve the growing need for more reliable and less cumbersome data backups, despite increasingly data-intensive environments. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Tuesday, August 21, 2012

New Levels of Automation and Precision Needed to Optimize Backup and Recovery in Virtualized Environments

Transcript of a BriefingsDirect podcast on the relationship between increased virtualization and the need for data backup and recovery.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Quest Software.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the relationship between increasingly higher levels of virtualization and the need for new data backup and recovery strategies.

We'll examine how the era of major portions of servers being virtualized has provided an on-ramp to attaining data lifecycle benefits and efficiencies. At the same time, these advances are helping to manage complex data environments that consist of both physical and virtual systems.

What's more, the elevation of data to the lifecycle efficiency level is also forcing a rethinking of the culture of data, of who owns data, and when, and who is responsible for managing it in a total lifecycle across all applications and uses.

This is different from the previous and current system where it’s often a fragmented approach, with different oversight for data across far-flung instances and uses.

Lastly, our discussion focuses on bringing new levels of automation and precision to the task of solving data complexity, and of making always-attainable data the most powerful asset that IT can deliver to the business.

Here to share insights on where the data availability market is going and how new techniques are being adopted to make the value of data ever greater, we're joined by John Maxwell, Vice President of Product Management for Data Protection, at Quest Software. Welcome back, John. [Disclosure: Quest Software is a sponsor of BriefingsDirect podcasts.]

John Maxwell: Hi, Dana. Thanks. It’s great to be here to talk on a subject that's near and dear to my heart.

Gardner: Let’s start at a high level. Why have virtualization and server virtualization become a catalyst to data modernization? Is this an unintended development or is this something that’s a natural evolution?

Maxwell: I think it’s a natural evolution, and I don’t think it was even intended on the part of the two major hypervisor vendors, VMware and Microsoft with their Hyper-V. As we know, 5 or 10 years ago, virtualization was touted as a means to control IT costs and make better use of servers.

Utilization was in single digits, and with virtualization you could get it much higher. But the rampant success of virtualization impacted storage and the I/O where you store the data.

Upped the ante

If you look at the announcements that VMware made around vSphere 5 and storage, and the recent launch of Windows Server 2012 Hyper-V, where Microsoft even upped the ante and added support for Fibre Channel with their hypervisor, storage is at the center of the virtualization topic right now.

It brings a lot of opportunities to IT. Now, you can separate some of the choices you make, whether it has to do with the vendors that you choose or the types of storage, network-attached storage (NAS), shared storage and so forth. You can also make the storage a lot more economical with thin disk provisioning, for example.

There are a lot of opportunities out there that are going to allow companies to make better utilization of their storage just as they've done with their servers. It’s going to allow them to implement new technologies without necessarily having to go out and buy expensive proprietary hardware.

From our perspective, the richness of what the hypervisor vendors are providing in the form of APIs, new utilities, and things that we can call on and utilize, means there are a lot of really neat things we can do to protect data. Those didn't exist in a physical environment.

It’s really good news overall. Again, the hypervisor vendors are focusing on storage and so are companies like Quest, when it comes to protecting that data.

Gardner: As we move towards that mixed environment, what is it about data that, at a high level, people need to think differently about? Is there a shift in the concept of data, when we move to virtualization at this level?

First of all, people shouldn’t get too complacent.

Maxwell: First of all, people shouldn’t get too complacent. We've seen people load up virtual disks, and one of the areas of focus at Quest, separate from data protection, is in the area of performance monitoring. That's why we have tools that allow you to drill down and optimize your virtual environment from the virtual disks and how they're laid out on the physical disks.

And even hypervisor vendors -- I'll point back to Microsoft with Windows Server 2012 -- are doing things to alleviate some of the performance problems people are going to have. At face value, your virtual disk environment looks very simple, but sometimes it isn't set up or allocated for optimal performance, or even recoverability.

There's a lot of education going on. The hypervisor vendors, and certainly vendors like Quest, are stepping up to help IT understand how these logical virtual disks are laid out and how to best utilize them.

Gardner: It’s coming around to the notion that when you set up your data and storage, you need to think not just for the moment for the application demands, but how that data is going to be utilized, backed up, recovered, and made available. Do you think that there's a larger mentality that needs to go into data earlier on and by individuals who hadn’t been tasked with that sort of thought before?

See it both ways

Maxwell: I can see it both ways. At face value, virtualization makes it really easy to go out and allocate as many disks as you want. Vendors like Quest have put in place solutions that make it so that within a couple of mouse clicks, you can expose your environment, all your virtual machines (VMs) that are out there, and protect them pretty much instantaneously.

From that aspect, I don't think there needs to be a lot of thought, as there was back in the physical days, of how you had to allocate storage for availability. A lot of it can be taken care of automatically, if you have the right software in place.

That said, a lot of people may have set themselves up, if they haven’t thought of disaster recovery (DR), for example. When I say DR, I also mean failover of VMs and the like, as far as how they could set up an environment where they could ensure availability of mission-critical applications.

For example, you wouldn’t want to put everything, all of your logical volumes, all your virtual volumes, on the same physical disk array. You might want to spread them out, or you might want to have the capabilities of replicating between different hypervisor, physical servers, or arrays.

Gardner: I understand that you've conducted a survey to try to find out more about where the market is going and what the perceptions are in the market. Perhaps you could tell us a bit about the survey and some of the major findings.

Our survey showed that 70 percent of organizations now consider at least 50 percent of their data mission critical.

Maxwell: One of the findings that I find most striking, since I have been following this for the past decade, is that our survey showed that 70 percent of organizations now consider at least 50 percent of their data mission critical.

That may sound ambiguous at first, because what is mission critical? But in the context of recoverability, it generally means data that has to be recovered in less than an hour (the recovery-time objective) and with no more than an hour of data loss (the recovery-point objective).

This means that if I have a database, I can't go back 24 hours. The furthest I can go back is one hour before the failure, and in some cases, you can't lose even a second of data. But it really gets into that window.
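As a rough illustration, the one-hour recovery-time and recovery-point thresholds John describes could be expressed as a simple classification rule. The `Dataset` class, field names, and sample values below are hypothetical, not drawn from any Quest product:

```python
# Hypothetical sketch: classify datasets as mission critical using the
# thresholds described above (RTO and RPO of one hour or less).
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    rto_minutes: int   # maximum tolerable time to restore service
    rpo_minutes: int   # maximum tolerable window of lost data

def is_mission_critical(ds: Dataset) -> bool:
    # Mission critical here means both objectives are an hour or tighter.
    return ds.rto_minutes <= 60 and ds.rpo_minutes <= 60

datasets = [
    Dataset("oracle-financials", rto_minutes=15, rpo_minutes=5),
    Dataset("file-archive", rto_minutes=1440, rpo_minutes=1440),
]
critical = [d.name for d in datasets if is_mission_critical(d)]
print(critical)  # → ['oracle-financials']
```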

I remember in the days of the mainframe, you'd say, "Well, it will take all day to restore this data, because you have tens or hundreds of tapes to do it." Today, people expect everything to be back in minutes or seconds.

The other thing that was interesting from the survey is that one-third of IT departments were approached by their management in the past 12 months to increase the speed of the recovery time. That really dovetails with the 50 percent of data being mission critical. So there's pressure on the IT staff now to deliver better service-level agreements (SLAs) within their company with respect to recovering data.

Terms are synonymous

The other thing that's interesting is that data protection and the term backup are synonymous. It's funny. We always talk about backup, but we don't necessarily talk about recovery. Something that really stands out now from the survey is that recovery or recoverability has become a concern.

Case in point: 73 percent of respondents, or roughly three quarters, now consider recovering lost or corrupted data and restoring those mission critical applications their top data-protection concern. Only 4 percent consider the backup window the top concern. Ten years ago, all we talked about was backup windows and speed of backup. Now, only 4 percent considered backup itself, or the backup window, their top concern.

So 73 percent are concerned about the recovery window, only 4 percent about the backup window, and only 23 percent consider the ability to recover data independent of the application their top concerns.

Those trends really show that there is a need. The beauty is that, in my opinion, we can get those service levels tighter in virtualized environments easier than we can in physical environments.

Gardner: We seem to have these large shifts in the market, one around virtualization of servers and storage and the implications of first mixed, and then perhaps a majority, or vast majority, of virtualized environments.

A company has to look at which policies or which solutions to put in place to address the criticality of data, but then there is a cost associated with it.

The second shift is the heightened requirements around higher levels of mission-critical allocation or designation for the data and then the need for much greater speed in recovering it.

Let's unpack that a little bit. How do these fit together? What's the relationship between moving towards higher levels of virtualization and being able to perhaps deliver on these requirements, and maybe even doing it with some economic benefit?

Maxwell: You have to look at a concept that we call tiered recovery. That's driven by the importance now of replication in addition to traditional backup, and new technology such as continuous data protection and snapshots.

That gets to what I was mentioning earlier. Data protection and backup are synonymous, but it's a generic term. A company has to look at which policies or which solutions to put in place to address the criticality of data, but then there is a cost associated with it.

For example, it's really easy to say, "I'm going to mirror 100 percent of my data," or "I'm going to do synchronous replication of my data," but that would be very expensive from a cost perspective. In fact, it would probably be just about unattainable for most IT organizations.

Categorize your data

What you have to do is understand and categorize your data, and that's one of the focuses of Quest. We're introducing something this year called NetVault Extended Architecture (NetVault XA), which will allow you to protect your data based on policies, based on the importance of that data, and apply the correct solution, whether it's replication, continuous data protection, traditional backup, snapshots, or a combination.

You can't just do this blindly. You have got to understand what your data is. IT has to understand the business, and what's critical, and choose the right solution for it.
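The tiered-recovery idea -- matching the protection method to the criticality of the data rather than protecting everything the same way -- can be sketched as a policy lookup. The tier boundaries and method names below are illustrative assumptions, not NetVault XA's actual policy engine:

```python
# Hypothetical sketch of policy-driven tiered recovery: map each
# application's recovery-point objective to a protection method.
def choose_protection(rpo_minutes: int) -> str:
    if rpo_minutes == 0:
        return "synchronous replication"     # no data loss tolerated
    if rpo_minutes <= 15:
        return "continuous data protection"  # near-zero loss
    if rpo_minutes <= 240:
        return "snapshots"                   # periodic point-in-time copies
    return "traditional backup"              # nightly or weekly jobs

# Illustrative applications and their RPOs in minutes.
policies = {"Oracle Financials": 0, "SAP": 10, "HR app": 120, "archive": 1440}
for app, rpo in policies.items():
    print(f"{app}: {choose_protection(rpo)}")
```

The point mirrors John's: mirroring everything synchronously would be prohibitively expensive, so the policy assigns the costly tiers only where the data's criticality demands them.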

Gardner: It's interesting to me that if we're looking at data and trying to present policies on it, based on its importance, these policies are going to be probably dynamic and perhaps the requirements for the data will be shifting as well. This gets to that area I mentioned earlier about the culture around data, thinking about it differently, perhaps changing who is responsible and how.

So when we move to this level of meeting our requirements that are increasing, dealing in the virtualization arena, when we need to now think of data in perhaps that dynamic fluid sense of importance and then applying fit-for-purpose levels of support, backup, recoverability, and so forth, whose job is that? How does that impact how the culture of data has been and maybe give us some hints of what it should be?

Maxwell: You've pointed out something very interesting, especially in the area of virtualization, just as we have noticed over the seven years of our vRanger product, which invented the backup market for virtualized environments.

What we see now are the traditional people who were responsible for physical storage taking over the responsibility of virtual storage.

It used to be, and it still is in some cases, that the virtual environment was protected by the person, usually the sys admin, who was responsible for, in the case of VMware, the ESXi hypervisors. They may not necessarily have been aligned with the storage management team within IT that was responsible for all storage and more traditional backups.

What we see now are the traditional people who were responsible for physical storage taking over the responsibility of virtual storage. So it's not this thing that’s sitting over on the side and someone else does it. As I said earlier, virtualization is now such a large part of all the data, that now it's moving from being a niche to something that’s mainstream. Those people now are going to put more discipline on the virtual data, just as they did the physical.

Because of the mission criticality of data, they're going from being people who looked at data as just a bunch of volumes or arrays, logical unit numbers (LUNs), to "these are the applications and this is the service level associated with the applications."

When they go to set up policies, they're not just thinking, "I'm backing up a server" or "I'm backing up disk arrays," but rather, "I'm backing up Oracle Financials," "I'm backing up SAP," or "I'm backing up some in-house human resources application."

Adjust the policy

And the beauty of where Quest is going is, what if those rules change? Instead of having to remember all the different disk arrays and servers that are associated with that, say the Oracle Financials, I can go in and adjust the policy that's associated with all of that data that makes up Oracle Financials. I can fine-tune how I am going to protect that and the recoverability of the data.

Gardner: That to me brings up the issue about ease of use, administration, interfaces, making these tools something that can be used by more people or a different type of person. How do we look at this shift and think about extending that policy-driven and dynamic environment at the practical level of use?

Maxwell: It's interesting that you bring that up too, because we've had many discussions about that here at Quest. I don't want to use the term consumerization of IT, because it has been used almost too much. But with the increased amount of virtual data out there, added to already heterogeneous environments -- whether you have Windows and Linux, MySQL, Oracle, or Exchange -- it's impossible for the people who are responsible for the protection and the recoverability of data to have the skills needed to know each one of those apps.

We want to make it as easy to back up and recover a database as it is a flat file. The fine line that we walk is that we don't want to dumb the product down. We want to provide intuitive GUIs, a user experience that is a couple of clicks away to say, "Here is a database associated with the application. What point do I want to recover to?" and recover it.

If there needs to be some more hands-on or more complicated things that need to be done, we can expose features to maybe the database administrator (DBA), who can then use the product to do more complex recovery or something to that effect.

It's impossible for these people who are responsible for the protection and the recoverability of data to have the skills needed to know each one of those apps.

We've got to make it easy for this generalist, no matter what hypervisor -- Hyper-V or VMware, a combination of both, or even KVM or Xen -- which database, which operating system, or which platform.

Again, they're responsible for everything. They're setting the policies, and they shouldn't have to be qualified. They shouldn't have to be an Exchange administrator, an Oracle DBA, or a Linux systems administrator to be able to recover this data.

We're going to do that in a nice pretty package. Today, there are many people here at Quest who walk around with a tablet PC as much as they do with their laptop. So our next-generation user interface (UI) around NetVault XA is being designed with a tablet computing scenario, where you can swipe data, and your toolbar is on the left and right, as if you are holding it using your thumb -- that type of thing.

Gardner: So, it's more access when it comes to the endpoint, and as we move towards supporting more of these point applications and data types with automation and a policy-driven approach or an architecture, that also says to me that we are elevating this to the strategic level. We're looking at data protection as a concept holistically, not point by point, not source by source and so forth.

Again, it seems that we have these forces in the market, virtualization, the need for faster recovery times, dealing with larger sets of data. That’s pushing us, whether we want to or even are aware of it, towards this level of a holistic or strategic approach to data.

Let me just see if you have any examples, at this point, of companies that are doing this and what it's doing for them. How are they enjoying the benefits of elevating this to that strategic or architecture level?

Exabyte of data

Maxwell: We have one customer, and I won't mention their name, but they are one of the top five web properties in the world, and they have an exabyte of data. Their incremental backups are almost 500 petabytes, because they have so much data that changes in a week's time, and they have an SLA with management that says 96 percent of backups will complete successfully.

You can't miss a backup, because that gets to the recoverability of the application. They're using our NetVault product to back up that data, using both traditional methods and integrated snapshots. Snapshots are one of the technology tiers in their tiered-recovery scenario. They use NetVault in conjunction with hardware snapshots, so there is no backup window. The backup, as far as the application is concerned, is for all practical purposes instantaneous.

Then, they use NetVault to manage and even take that data that’s on disk and eventually move it to tape. The snapshots allow them to do that very quickly for massive amounts of data. And by massive amounts of data, I'm talking 100 million files associated with one application. To put that back in place at any point in time very quickly with NetVault orchestrating that hardware snapshot technology, that’s pretty mind blowing.

Gardner: That does give us a sense of the scale and complexity and how it's being managed and delivered.

You mentioned how Quest is moving towards policy-driven approaches, improving UIs, and extending those UIs to mobile tier. Are there any other technology approaches that Quest is involved with that further explain how some of these challenges can be met? I'm very interested in agentless, and I'm also looking at how that automation gets extended across more of these environments.

We're envisioning customer environments where they're going to have multiple hypervisors, just as today people have multiple operating system databases.

Maxwell: There are two things I want to mention. Today, Quest protects VMware and Microsoft Hyper-V environments, and we'll be expanding the hypervisors that we're supporting over the next 12 months. Certainly, there are going to be a lot of changes around Windows Server 2012 or Hyper-V, where Microsoft has certainly made it a lot more robust.

There are a lot more things for us exploit, because we're envisioning customer environments where they're going to have multiple hypervisors, just as today people have multiple operating system databases.

We want to take care of that, mask some complexity, and allow people to have cross-hypervisor recoverability. In other words, we want to enable safe failover of a VMware ESXi system to Microsoft Hyper-V, or vice versa.

There's another thing that’s interesting and is a challenge for us and it's something that has challenged engineers here at Quest. This gets into the concepts of how you back up or protect data differently in virtual environments. Our vRanger product is the market leader with more than 40,000 customers, and it’s completely agentless.

As we have evolved the product over the past seven years, we've had three generations of the product and have exploited various APIs. But with vRanger, we've now gone to what is called a virtual appliance architecture. We have a vRanger service that performs backup and replication for one or hundreds of VMs that exist either on that one physical server or in a virtual cluster. So this VM can even protect VMs that exist on other hardware.

The beauty of this is, first, the scalability. I have one software app running that's highly controllable. You can control what resources are used in replicating, protecting, and recovering all of my VMs. That's easy to manage, versus having to have an agent installed in every one of those VMs.

Two, there's no overhead. The VMs don’t even know, in most cases, that a backup is occurring. We use the services, in the case of VMware, of ESXi, that allows us to go out there, snapshot the virtual volumes called VMDKs, and back up or replicate the data.

Now, there is one thing that we do that’s different than some others. Some vendors do this and some don’t, and I think one of those things you have to look at when you choose a virtual backup or virtual data protection vendor is their technical prowess in this area. If you're backing up a VM that has an application such as Exchange or SharePoint, that’s a live application, and you want to be able to synchronize the hypervisor snapshot with the application that’s running.

There’s a service in Windows called Volume Shadow Copy Service, or VSS for short, and one of the unique things that Quest does with our backup software is synchronize the virtual snapshot of the virtual disks with the application of VSS, so we have a consistent point-in-time backup.

To communicate, we dynamically inject binaries into the VM that do the process and then remove themselves. So, for a very short time, there's something running in that VM, but then it's gone, and that allows us to have consistent backup.
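The sequence John describes -- inject a helper into the guest, quiesce the application through VSS, snapshot at the hypervisor, thaw, and clean up -- can be sketched as follows. Every function name here is a stand-in that merely records call order; this is not a real hypervisor or VSS API:

```python
# Hypothetical sketch of the application-consistent backup ordering
# described above. The helpers are stubs that record call order so the
# sequencing guarantee is visible.
calls = []

def inject_helper(vm):  calls.append("inject")    # briefly place the VSS helper in the guest
def vss_freeze(vm):     calls.append("freeze")    # flush and quiesce application writers
def hypervisor_snapshot(vm):
    calls.append("snapshot")                      # point-in-time copy of the VMDKs
    return f"snap-of-{vm}"
def vss_thaw(vm):       calls.append("thaw")      # resume application I/O
def remove_helper(vm):  calls.append("remove")    # leave nothing running in the guest

def consistent_vm_backup(vm):
    inject_helper(vm)
    try:
        vss_freeze(vm)
        try:
            snap = hypervisor_snapshot(vm)        # taken only while writers are quiesced
        finally:
            vss_thaw(vm)                          # thaw immediately, even on error
    finally:
        remove_helper(vm)                         # helper is always removed
    return snap                                   # back up or replicate from the snapshot

consistent_vm_backup("exchange-vm")
print(calls)  # → ['inject', 'freeze', 'snapshot', 'thaw', 'remove']
```

The `try/finally` nesting captures the key property: the application is frozen for only as long as the snapshot takes, and the injected helper never outlives the operation.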

One of the beauties of virtualization is that I can move data without the application being conscious of it happening.

That way, from that one image backup that we've done, I can restore an entire VM, individual files, or in the case of Microsoft Exchange or Microsoft SharePoint, I can recover a mailbox, an item, or a document out of SharePoint.

Gardner: So the more application-aware the solution is, it seems the more ease there is in having this granular level of restore choices. So that's fit for purpose, when it comes to deciding what level of backup and recovery and support for the data lifecycle is required.

This also will be able to fit into some larger trends around moving a data center to a software level or capability. Any thoughts of how what you're doing at Quest fits into this larger data-center trend. It seems to me that it’s at the leading or cutting edge?

Maxwell: One of the beauties of virtualization is that I can move data without the application being conscious of it happening. There's a utility within VMware, for example, called Storage vMotion that allows you to move data from A to B. It's a very easy way to migrate off of an older disk array to a new one, and you never have to bring the app down. It's all software driven within the hypervisor, with a lot of control. Basically, it's a seamless process.

What this opens up, though, is the ability for what we're looking at doing at Quest. If there's a means to move data around, why can't I then create an environment where I could do DR, whether it's within the data center for hardware redundancy or whether it's like what we do here at Quest.

Replicate data

We replicate data among various Quest facilities. Then, we can bring up an application that was running in location A at location B, on unlike hardware. It can be completely different storage, completely different servers, but since they're VMs, it doesn't matter.

That kind of flexibility that virtualization brings is going to give every IT organization in the world the type of failover capabilities that used to only exist for the Global 1000, where they used to have to set up a hot site or had to have a data center. They would use very expensive proprietary hardware-based replication and things like that. So you had to have like arrays, like servers, and all that, just to have availability.

Now, with virtualization, it doesn’t matter, and of course, we have plenty of bandwidth, especially here in the United States. So it’s very economical, and this gets back to our survey that showed that for IT organizations, 73 percent were concerned about recovering data, and that’s not just recovering a file or a database.

Here in California, we're always talking about the big one. Well, when the big one happens, whole bunches of server racks may fall over. In the case of Quest, we want to be able to bring those applications up in an environment that's in a different part of the country, with no fault zones and that type of thing, so we can continue our business.

Gardner: We just saw a recent example of unintended or unexpected circumstances with the Mid-Atlantic states and some severe thunderstorms, which caused some significant disruption. So we always need to be thoughtful about the unexpected.

Now, we are talking about actually putting data protection products in the cloud, so you can back up the data locally within the cloud.

Another thing that occurred to me while you were discussing these sort of futuristic scenarios, which I am imagining aren’t that far off, is the impact that cloud computing another big trend in the market, is bringing to the table.

It seems to me that bringing some of the cloud models, cloud providers, service models into play with what you have described also expands what can be done across larger sets of organizations and maybe even subsets of groups within companies. Any thoughts briefly on where some of the cloud provider scenarios might take this?

Maxwell: It's funny. Two years ago, when people talked about cloud and data protection, it was just considering the cloud as a target. I would back up to the cloud or replicate to the cloud. Now, we're talking about actually putting data protection products in the cloud, so you can back up the data locally within the cloud and then maybe even replicate it or back it up to on-prem, which is kind of a novel concept if you think about it.

If you host something in the cloud, you can back it up locally up there and then actually keep a copy on-prem. The cloud is also where we're looking at having generic support for failover, working with various service providers where you can pre-provision VMs out there, for example.

You're replicating data. You sense that you have had a failure, and all you have to do is, via software, bring up those VMs, pointing them at the disk replicas you put up there.
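The pre-provisioned failover flow just described can be sketched in a few lines: when the primary site's heartbeat fails, power on the standby VMs, each attached to its disk replica. The heartbeat check, VM records, and replica names are all hypothetical:

```python
# Hypothetical sketch of software-driven failover to pre-provisioned VMs.
def failover_if_needed(vms, heartbeat_ok):
    started = []
    if not heartbeat_ok:                 # primary site is unreachable
        for vm in vms:
            vm["disk"] = vm["replica"]   # point the VM at the replicated disks
            vm["state"] = "running"      # power on the standby instance
            started.append(vm["name"])
    return started

standby = [
    {"name": "erp-01", "replica": "cloud-vol-erp", "state": "stopped"},
    {"name": "mail-01", "replica": "cloud-vol-mail", "state": "stopped"},
]
print(failover_if_needed(standby, heartbeat_ok=False))  # → ['erp-01', 'mail-01']
```

Because the standby instances are already provisioned and the replicas already current, the failover itself is just a software step, which is what makes this practical without proprietary like-for-like hardware at both ends.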

Different cloud providers

Then, there's the concept of what you do if a certain percentage of your IT apps are hosted in the cloud by different cloud providers. Do you want to be able to replicate the data between cloud vendors? Maybe you have data that's hosted at Amazon Web Services. You might want to replicate it to Microsoft Azure or vice versa, or you might want to replicate it on-premise (on-prem).

So there's going to be a lot of neat hybrid options. The hybrid cloud is going to be a topic that we're going to talk about a lot now, where you have that mixture of on-prem, off-prem, hosted applications, etc., and we are preparing for that.

Gardner: I'm afraid we're about out of time. You've been listening to a sponsored BriefingsDirect podcast discussion on the relationship between increasingly higher levels of virtualization and the need for new backup and recovery strategies.

We've seen how solving data complexity and availability in the age of high virtualization is making always attainable data the most powerful asset that an IT organization can deliver to its users.

I'd like to thank our guest. We've been joined by John Maxwell, Vice President of Product Management and Data Protection at Quest Software.

The cloud is where we're certainly looking at having generic support for being able to do failover into the cloud.

John, would you like to add anything else, maybe in terms of how organizations typically get started. This does seem like a complex undertaking. It has many different entry points. Are there some best practices you've seen in the market about how to go about this, or at least to get going?

Maxwell: The number one thing is to find a partner. At Quest, we have hundreds of technology partners that can help companies architect a strategy utilizing the Quest data protection solutions.

Again, choose a solution that hits all the key points. In the case of VMware, you can go to VMware's site and look for VMware Ready-certified solutions. Same thing with Microsoft, whether it's Windows Server 2008 or 2012 certification. Make sure that you're getting a solution that's truly certified. A lot of products say they support virtual environments, but they don't have that real certification, and as a result, they can't do a lot of the innovative things that I've been talking about.

So find a partner who can help, or, we at Quest can certainly help you find someone who can help you architect your environment and even implement the software for you, if you so choose. Then, choose a solution that is blessed by the appropriate vendor and has passed their certification process.

Gardner: I should also point out that VMworld is coming up next week. I expect that you'll probably have a big presence there, and a lot of the information that we have been talking about will be available in more detail through the VMworld venue or event.

Maxwell: Absolutely, Dana. Quest will have a massive presence at VMworld, both in San Francisco and Barcelona. We'll be demonstrating technologies we have today, and we'll also be making some major announcements and previewing some really exciting software at the show.

Gardner: Well, great. This is Dana Gardner, Principal Analyst at Interarbor Solutions. I'd like to thank our audience for listening, and invite them to come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Quest Software.

Transcript of a BriefingsDirect podcast on the relationship between increased virtualization and the need for data backup and recovery. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.

You may also be interested in:

Wednesday, June 06, 2012

Data Explosion and Big Data Demand New Strategies for Data Management, Backup and Recovery, Say Experts

Transcript of a sponsored BriefingsDirect podcast on how data-recovery products can provide quicker access to data and analysis.

Get the free data protection and recovery white paper.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Quest Software.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on why businesses need a better approach to their data recovery capabilities. We'll examine how major trends like virtualization, big data, and calls for comprehensive and automated data management are driving the need for change.

The current landscape for data management, backup, and disaster recovery (DR) too often ignores the transition from physical to virtualized environments and sidesteps the heightened real-time role that data now plays in the enterprise.

What's needed are next-generation, integrated, and simplified approaches to fast backup and recovery that span all essential corporate data. The solution therefore means bridging legacy and new data, scaling to handle big data, implementing automation and governance, and integrating the functions of backup, data protection, and DR.

The payoffs come in the form of quicker access to needed data and analytics, highly protected data across its lifecycle, ease in DR, and overall improved control and management of key assets, especially by non-specialized IT administrators.

To share insights into why data recovery needs a new approach and how that can be accomplished, we're joined by two experts, first John Maxwell, Vice President of Product Management for Data Protection at Quest Software. Welcome to the show, John. [Disclosure: Quest Software is a sponsor of BriefingsDirect podcasts.]

John Maxwell: Thank you. Glad to be here.

Gardner: We're also here with Jerome Wendt, President and Lead Analyst of DCIG, an independent storage analyst and consulting firm. Welcome, Jerome.

Jerome Wendt: Thank you, Dana. It's a pleasure to join the call.

Gardner: My first question to you, Jerome. I'm sensing a major shift in how companies view and value their data assets. Is data really a different thing than, say, five years ago in terms of how companies view it and value it?

Wendt: Absolutely. There's no doubt that companies are viewing it much more holistically. It used to be that all the focus was on data in structured databases, or at most in semi-structured formats such as email. Clearly, in the last few years, we've seen a huge change, where unstructured data is now the fastest-growing part of most enterprises and where even a lot of their intellectual property is stored. So I think there is a huge push to protect and mine that data.

But we're also just seeing more of a push to get to edge devices. We talk a lot about PCs and laptops, and there is more of a push to protect data in that area, but all you have to do is look around and see the growth.

When you go to any tech conference, you see iPads everywhere, and people are storing more data in the cloud. That's going to have an impact on how people and organizations manage their data and what they do with it going forward.

Gardner: John Maxwell, it seems that not that long ago, data was viewed as a byproduct of business. Now, for more and more companies, data is the business, or at least the analytics that they derive from it. Has this been a sea change, from your perspective?

Mission critical

Maxwell: It’s funny that you mention that, because I've been in the storage business for over 15 years. I remember just 10 years ago, when studies would ask people what percentage of their data was mission critical, it was maybe around 10 percent. That aligns with what you're talking about, the shift and the importance of data.

Recent surveys from multiple analyst groups have now shown that people categorize their mission-critical data at 50 percent. That's pretty profound, in that a company is saying half the data that we have, we can't live without, and if we did lose it, we need it back in less than an hour, or maybe in minutes or seconds.

Gardner: So we have a situation where more data is considered important, they need it faster, and they can't do without it. It’s as if our dependency on data has become heightened and ever-increasing. Is that a fair characteristic, Jerome?

Wendt: Absolutely.

Gardner: So given the requirement of having access to data and it being more important all the time, we're also seeing a lot of shifting on the infrastructure side of things. There's much more movement toward virtualization, whole new ways of storage when it comes to trying to reduce the overall cost of that, reducing duplication and that sort of business. How is the shift and the change in infrastructure impacting this simultaneous need for access and criticality? Let's start with you, John.

Maxwell: Well, the biggest change from an infrastructure standpoint has been the impact of virtualization. This year, well over 50 percent of all the server images in the world are virtualized images, which is just phenomenal.

Quest has really been in the forefront of this shift in infrastructure. We have been, for example, backing up virtual machines (VMs) for seven years with our Quest vRanger product. We've seen that evolve from when VMs or virtual infrastructure were used more for test and dev. Today, I've seen studies that show that the shops that are virtualized are running SQL Server, Microsoft Exchange, very mission-critical apps.

We have some customers at Quest that are 100 percent virtualized. These are large organizations, not just some mom and pop company. That shift to virtualization has really made companies assess how they manage it, what tools they use, and their approaches. Virtualization has a large impact on storage and how you backup, protect, and restore data.

Gardner: John, it sounds like you're saying that it's an issue of complexity, but from a lot of the folks I speak to, when they get through it at the end of their journey through virtualization, they find that there are a lot of virtuous benefits to be extended across the data lifecycle. Is it the case that this is not all bad news, when it comes to virtualization?

Maxwell: No. Once you implement and have the proper tools in place, your virtual life is going to be a lot easier than your physical one from an IT infrastructure perspective. A lot of people initially moved to virtualization as a cost savings, because they had under-utilization of hardware. But one of the benefits of virtualization is the freedom, the dynamics. You can create a new VM in seconds. But then, of course, that creates things like VM sprawl, the amount of data continues to grow, and the like.

At Quest we've adapted and exploited a lot of the features that exist in virtual environments, but don't exist in physical environments. It’s actually easier to protect and recover virtual environments than it is physical, if you have tools that are exploiting the APIs and the infrastructure that exists in that virtual environment.

Significant benefits

Gardner: Jerome, do you concur that, when you are through the journey, when you are doing this correctly, that a virtualized environment gives you significant benefits when it comes to managing data from a lifecycle perspective?

Wendt: Yes, I do. One of the things I've clearly seen is that it really makes it more of a business enabler. We talk a lot these days about having different silos of data. One application creates data that stays over here. Then, it's backed up separately. Then, another application or another group creates data back over here.

Virtualization not only means consolidation and cost savings, but it also facilitates a more holistic view into the environment and how data is managed. Organizations are finally able to get their arms around the data that they have.

Get the free data protection and recovery white paper from IDC.

Before, it was so distributed that they didn't really have a good sense of where it resided or how to even make sense of it. With virtualization, there are initial cost benefits that help bring it altogether, but once it's altogether, they're able to go to the next stage, and it becomes the business enabler at that point.

Gardner: I suppose the key now is to be able to manage, automate, and bring the comprehensive control and governance to this equation, not just the virtualized workloads, but also of course the data that they're creating and bringing back into business processes.

So what about that? What’s this other trend afoot? How do we move from sprawl to control and make this flip from being a complexity issue to a virtuous adoption and benefits issue? Let's start with you, John.

Maxwell: Over the years, people had very manual processes. For example, when you brought a new application online or added hardware, a new server, that type of thing, you asked, "Oops, did we back it up? Are we backing that up?"

One thing that’s interesting in a virtual environment is that the backup software we have at Quest will automatically see when a new VM is created and start backing it up. So it doesn't matter if you have 20 or 200 or 2,000 VMs. We're going to make sure they're protected.
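The auto-protect behavior described above boils down to periodically diffing the hypervisor's VM inventory against the set of machines already scheduled for backup. Here's a minimal illustrative sketch in Python; the `list_vms` and `schedule_backup` callables are hypothetical stand-ins, not Quest or VMware APIs:

```python
def auto_protect(list_vms, protected, schedule_backup):
    """Add any newly created VMs to the protection schedule.

    list_vms: callable returning the hypervisor's current VM names.
    protected: set of VM names already being backed up.
    schedule_backup: callable that enqueues a backup job for a VM.
    """
    for vm in list_vms():
        if vm not in protected:
            schedule_backup(vm)   # new VM found: start protecting it
            protected.add(vm)
    return protected
```

In practice this would run on a timer, with the inventory coming from the hypervisor's management API rather than a plain callable.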

Where it really gets interesting is that you can protect the data a lot smarter than you can in a physical environment. I'll give you an example.

In a VMware environment, there are services that we can use to do a snapshot backup of a VM. In essence, it’s an immediate backup of all the data associated with that machine or those machines. It could be on any generic kind of hardware. You don’t need to have proprietary hardware or more expensive software features of high-end disk arrays. That is a feature that we can exploit built within the hypervisor itself.

Image backup

Even the way that we move data is much more efficient, because we have a process that we pioneered at Quest called "backup once, restore many," where we create what's called an image backup. From that image backup I can restore an entire system, an individual file, or an application. And I've done that from one pass, one very effective snapshot-based backup.
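To make the "backup once, restore many" idea concrete, here's a toy Python sketch, purely illustrative and not how Quest implements image backups: a single snapshot-style capture from which a whole system, an individual file, or an application's data can each be restored.

```python
import copy

class ImageBackup:
    """Toy 'backup once, restore many': one capture, many restore granularities."""

    def __init__(self, system_state):
        # One snapshot-style capture of the whole machine's state.
        self.image = copy.deepcopy(system_state)

    def restore_system(self):
        # Restore the entire system from the single image.
        return copy.deepcopy(self.image)

    def restore_file(self, path):
        # Restore one file from the same image.
        return self.image["files"][path]

    def restore_app(self, name):
        # Restore one application's data from the same image.
        return self.image["apps"][name]
```

The point is that every restore granularity is served from the one capture, rather than maintaining separate machine-level, file-level, and application-level backups.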

If you look at physical environments, there are separate concepts of physical machine backups, file-level backups, and specific application backups, and for some systems you even have to employ hardware-based snapshots or actually bring the applications down.

So from that perspective, we've gotten much more sophisticated in virtual environments. Again, we're moving data by not impacting the applications themselves and not impacting the VMs. The way we move data is very fast and is very effective.

Gardner: Jerome, when we start to do these sorts of activities, whether we are backing up at very granular level or even thinking about mirroring entire data centers, how does governance, management, and automation come to play here? Is this something that couldn’t have been done in the physical domain?

Wendt: I don’t think it could have been done on the physical domain, at least not very easily. We do these buyer’s guides on a regular basis. So we have a chance to take in-depth looks at all these different backup software products on the market and how they're evolving.

One of the things we are really seeing, also to your point, is just a lot more intelligence going into this backup software. They're moving well beyond just “doing backups” any more. There's much more awareness of what data is included in these data repositories and how they're searched.

And also, with more integration with platforms like VMware vCenter, administrators can centrally manage backups, monitor backup jobs, and do recoveries. One person can do so much more than they could even a few years ago.

And really, the expectation of organizations is evolving; they don't necessarily want a separate backup admin and system admin anymore. They want one team that manages their virtual infrastructure. That all rolls up to your point that it makes it easier to govern, manage, and execute on corporate objectives.

Gardner: I think it’s important to try to frame how this works in terms of total cost. If you're adding, as you say, more intelligence to the process, if you don’t have separate administrators for each function, if you're able to take a workflow approach to your data lifecycle, you have fewer duplications, you're using less total storage, and you're able to support the requirements of the applications, and so on. Is this really a case, John Maxwell, where we are getting more and paying less?

Maxwell: Absolutely. Just as the cost per gigabyte has gone down over the past decade, the effectiveness of the software and what it can do is way beyond what we had 10 years ago.

Simplified process

Today, in a virtual environment, we can provide a solution that simplifies the process, where one person can ensure that hundreds of VMs are protected. They can literally right-click and restore a VM, a file, a directory, or an application.

One of the focuses we have had at Quest, as I alluded to earlier, is that there are a lot of mission-critical apps running on these machines. Jerome talked about email. A lot of people consider email one of their most mission-critical applications. And the person responsible for protecting the environment that Microsoft Exchange is running on may not be an Exchange administrator, but maybe they're tasked with being able to recover Exchange.

That’s why we've developed technologies that allow you to go out there and, from that one image backup, restore an email conversation or an email attachment from someone’s mailbox. That person doesn’t have to be a guru with Exchange. Our job is to figure out, behind the scenes, how to do this and make it available via a couple of mouse clicks.

Gardner: So we're moving the administration up a level of abstraction, rather than going app by app, server by server. We're really looking at it as a function of what you want to do with that data. That strikes me as a big deal. Is that a whole new thing that we're doing with data, Jerome?

Wendt: Yes, it is. As John was speaking, I was going to comment. I spoke to a Quest customer just a few weeks ago. He clearly had some very specific technical skills, but he's responsible for a lot of things, a lot of different functions -- server admin, storage admin, backup admin.

I think a lot of individuals can relate to this guy. I know I certainly did, because that was my role for many years, when I was an administrator in the police department. You have to try to juggle everything, while you're trying to do your job, with backup just being one of those tasks.

In his particular case, he was called upon to do a recovery, and, to John’s point, it was an Exchange recovery. He never had any special training in Exchange recovery, but it just happened that he had Quest software in place. He was able to use its FastRecover product to recover his Microsoft Exchange Server and had it back up and running in a few hours.

What was really amazing in this particular case is that he was traveling at the time it happened. So he had to talk his manager through the process and was able to get it up and going. Once he had the system up, he was able to log on and get it going fairly quickly.

That just illustrates how much the world has changed and how much backup software and these products have evolved to the point where you need to understand your environment, probably more than you need to understand the product, and just find the right product for your environment. In this case, this individual clearly accomplished that.

Gardner: It sounds like you're moving more to be an architect than a carpenter, right?

Wendt: Exactly.

Gardner: So we understand that management is great and that oversight at that higher abstraction is going to get us a lot of benefits. But we mentioned earlier that some folks are at 20 percent virtualization, while others are at 90 percent. Some data is mission-critical, while other data doesn't require the same diligence, and that's going to vary from company to company.

Hybrid model

So my question to you, John Maxwell, is how do organizations approach being in a hybrid model, between physical and virtual, recognizing that different apps have different criticality for their data, and that that might change? How do we manage the change? How do we get from the old way of doing this to these newer benefits?

Maxwell: Well, there are two points. One, we can't have a bunch of niche tools, one for virtual, one for physical, and the like. That's why, with our vRanger product, which has been the market leader in virtual data protection for the past seven years, we're coming out with physical support in that product in the fall of 2012. Those customers are saying, "I want one product that handles that non-virtualized data."

The second part gets down to what percentage of your data is mission-critical and how complex it is, meaning is it email, or a database, or just a flat file, and then asking if these different types of data have specific service-level agreements (SLAs), and if you have products that can deliver on those SLAs.

That's why at Quest, we're really promoting a holistic approach to data protection that spans replication, continuous data protection, and more traditional backup, but backup mainly based on snapshots.

Then, that can map to the service level, to your business requirements. I just saw some data from an industry analyst showing that the replication software market is basically the same size now as the backup software market. That shows the desire for people to have that kind of real-time failover for some applications, and you get that with replication.

When it comes to the example that Jerome gave with that customer, the Quest product that we're using is NetVault FastRecover, which is a continuous data protection product. It backs up everything in real-time. So you can go back to any point in time.

It’s almost like a time machine when it comes to putting back that mailbox, SQL database, or Oracle database. Yet it masks a lot of the complexity, so the person restoring it may not be a DBA. They're going to be that jack-of-all-trades who's responsible for the storage and maybe backup overall.
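Continuous data protection of this kind can be pictured as an append-only journal of timestamped writes that is replayed up to any chosen instant. The following Python sketch is a simplified illustration under that assumption, not NetVault FastRecover's actual mechanism:

```python
class ChangeJournal:
    """Toy continuous-data-protection journal: restore to any point in time."""

    def __init__(self):
        self.entries = []          # (timestamp, key, value) in arrival order

    def record(self, ts, key, value):
        # Every write is captured as it happens, never overwritten.
        self.entries.append((ts, key, value))

    def restore_as_of(self, ts):
        # Replay every write at or before ts; later writes win.
        state = {}
        for t, key, value in self.entries:
            if t <= ts:
                state[key] = value
        return state
```

Because nothing is discarded, any past state, a mailbox before a corruption, a database before a bad transaction, can be reconstructed by choosing the right timestamp.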

Gardner: Jerome, what are you seeing in the field? Are there folks who are saying, "Okay, the value here is so compelling and we have such a mess, we're going to bite the bullet and just go totally virtual in three to six months. And, at least for our mission-critical apps, we're going to move them over to this data lifecycle approach for our recovery, backup, and DR?"

Or are you seeing companies that are saying, "Well, this is a five year plan and we're going to do this first and we are going to kind of string it out?" Which of these seems to be in vogue at the moment? What works, a bite the bullet, all or nothing, or the slow crawl-walk-run approach?

Wendt: It really depends on the size of the organization you're talking about. When I talk to small and medium-sized businesses (SMBs), 500-1,000 employees or fewer, they may have 100 terabytes of storage and 200 servers. I see them just biting the bullet. They're taking the three- to six-month approach: make the conversion, do the complete switchover, and go virtual as much as possible.

Few legacy systems

Almost all of them have a few legacy systems. They may be running some application on Windows 2000 Server or some old version of AIX. Who knows what a lot of companies have running in the background? They can't just virtualize everything, but where they can, they get to a 98 percent virtualized environment.

When you start getting to enterprises, I see it a little bit different. It's more of a staged approach, because it just takes more coordination across the enterprise to make it all happen. There are a lot more logistics and planning going on.

I haven’t talked to too many that have taken five years to do it. It's mostly two to maybe four years at the outside range. But the move is to virtualize as much as possible, except for those legacy apps, which for some reason they can't tackle.

Gardner: John Maxwell, for those two classes of user, what does Quest suggest? Is there a path that you have for those who want to do it as rapidly as possible? And then is that metered approach also there in terms of how you support the journey?

Maxwell: It's funny that you mention the difference between SMB and the enterprise. I'm a firm believer that one size doesn’t fit all, which is why we have solutions for specific markets. We have solutions for the SMB along with enterprise solutions, but we do have a lot of commonality between the products. We're even developing for our SMB product a seamless upgrade path to our enterprise-class product.

Again, they're different markets, just as Jerome said. We found exactly what he just mentioned, which is the smaller accounts tend to be more homogenous and they tend to virtualize a lot more, whereas in the enterprise they're more heterogeneous and they may have a bigger mix of physical and virtual.

And they may have really more complex systems. That’s where you run into big data and more complex challenges, when it comes to how you can back data up and how you can recover it. And there are also different price points.

So our approach is to have solution specific to the SMB and specific to the enterprise. There is a lot of cross-functionality that exists in the products, but we're very crisp in our positioning, our go-to-market strategy, the price points, and the features, because one of the things you don’t want to do with SMB customers is overwhelm them.

Get the free data protection and recovery IDC white paper.

I meet hundreds of customers a year, and one of our top customers has an exabyte of data. Jerome, I don’t know if you talk to many customers that have an exabyte, but I don’t really run into a lot of customers with an exabyte of data. Their requirements are completely different from those of our average vRanger customer, who has around five terabytes of data.

We have products that are specific to those market segments, to the sophistication or non-sophistication of the user, and at the right price points. Yet it's one vendor, one throat to choke, and there are upgrade paths if you need them.

Gardner: John, in talking with Quest folks, I've heard them refer to a next-generation platform or approach, or a whole greater than the sum of the parts. How do you define next generation when it comes to data recovery in your view of the world?

New benefits

Maxwell: Well, without hyperbole, for us, our next generation is a new platform that we call NetVault Extended Architecture (XA), and this is a way to provide several benefits to our customers.

One is that with NetVault Extended Architecture we're now delivering a single user experience across products. Whether it's SMB or enterprise, for a customer that's using maybe one of our point solutions for application or database recovery, it provides that consistent look and feel, that consistent approach. We have some customers that use multiple products, and with this, they now have a single pane of glass.

Also, it's important to offer a consistent means of administering and managing the backup and recovery process, because, as we've been discussing, why should a person have to have multiple skill sets? If you have one view, one console into data protection, that's going to make your life a lot easier than having to learn a bunch of different types of solutions.

That’s the immediate benefit that I think people see. What NetVault Extended Architecture encompasses under the covers, though, is a really different approach in the industry, which is modularizing a lot of the components of backup and recovery and making them plug and play.

Let me give you an example. With the increase in virtualization, a lot of people just equate virtualization with VMware. Well, we've got Hyper-V. We have initiatives from Red Hat. We have Xen, Oracle, and others. Jerome, I'm kind of curious about your views, but just as we saw in the '90s and '00s, with people having multiple platforms, whether it was Windows and Linux, or Windows, Linux and, as you said, AIX, I believe we're going to start seeing multiple hypervisors.

So one of the capabilities that NetVault Extended Architecture is going to bring us is the ability to offer a consistent approach across multiple hypervisors, meaning it could be a combination of VMware, Microsoft Hyper-V, and maybe even KVM from Red Hat.

But, again, the administrator, the person managing backup and recovery, doesn’t have to know any one of those platforms. That’s all hidden from them. In fact, if they want to restore data from one of those hypervisors, say restore a VMware VMDK (the virtual disk format, in VMware-speak) into what's called a VHD in Hyper-V, they can do that.
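As one illustration of cross-hypervisor disk conversion, independent of NetVault's internals, the open-source qemu-img tool can rewrite a VMware VMDK as a Hyper-V VHD; qemu-img's name for the VHD format is "vpc". The sketch below only assembles the command line, and the platform names in the FORMATS map are this example's own convention:

```python
# qemu-img format names: 'vmdk' for VMware disks, 'vpc' for Hyper-V VHD.
FORMATS = {"vmware": "vmdk", "hyper-v": "vpc"}

def convert_command(src_platform, dst_platform, src_path, dst_path):
    """Build a qemu-img invocation converting a disk between hypervisor formats."""
    return ["qemu-img", "convert",
            "-f", FORMATS[src_platform],    # -f: input disk format
            "-O", FORMATS[dst_platform],    # -O: output disk format
            src_path, dst_path]
```

Actually running the resulting command requires qemu-img to be installed; here we only construct the invocation to show the shape of the conversion.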

That, to me, is really exciting, because this is exploiting these new platforms and environments and providing tools that simplify the process. But that’s going to be one of the many benefits of our next-generation NetVault Extended Architecture, where we can provide that singular experience for our customer base, deliver new solutions with a faster time to market, and do it in a modular approach.

Customers can choose what they need, whether they're an SMB customer, or one of the largest customers that we have with hundreds of petabytes or exabytes of data.

Wendt: I'd like to elaborate on what John just said. I'm really glad to hear that’s where Quest is going, John, I haven’t had a chance to discuss this with you guys, but DCIG has a lot of conversations with managed-service providers, and you'd be surprised, but there are actually very few that are VMware shops. I find the vast majority are actually either Microsoft Hyper-V or using Red Hat Linux as their platform, because they're looking for a cost-effective way to deliver virtualization in their environments.

We've seen this huge growth in replication, and people want to implement disaster recovery plans or business continuity planning. I think this ability to recover across different hypervisors is going to become absolutely critical, maybe not today or tomorrow, but I would say in the next few years. People are going to say, "Okay, now that we've got our environment virtualized, we can recover locally, but how about recovering into the cloud or with a cloud service provider? What options do we have there?"

More choice

If they're using VMware and their provider isn’t, they're almost forced to find a provider that does use VMware, whereas your platform gives them much more choice among managed service providers that use platforms other than VMware. It sounds like Quest will really give them the ability to back up VMware hypervisors and then potentially recover into Red Hat or Microsoft Hyper-V at MSPs. So that could be a really exciting development for Quest in that area.

Gardner: So being able to support the complexity and the heterogeneity, whether it's at the application level, the platform level, or the VM and hypervisor level, all of that is part and parcel of abstracting data recovery to that managed and architected level.

Do we have any examples, John, of companies that are already doing that? Are you are familiar with organizations -- maybe you can name them -- that are doing just that, managing a heterogeneity issue and coming up with some metrics of success for their data recovery and data management and lifecycle approach, as a result?

Maxwell: I'd like to give you an example of one customer, one of our European customers called CMC Markets. They use our entire NetVault family of products, both the core NetVault Backup product and the NetVault FastRecover product that Jerome mentioned.

They are a company where data is their lifeblood. They're an options trading company. They process tens of thousands of transactions a day. They have a distributed environment. They have their main data center in London, and that’s where their network operations center is. Yet, they have eight offices around the world.

One of the challenges of having remote data and/or big data is whether you can really use traditional backup to do it. And the answer is no. With big data, there's no way that you'll have enough time in the day to make that happen. With remote data, you don't want to put something that's manual out in that remote office, where you're not going to have IT people.

CMC Markets has taken the approach of moving data smarter, not harder. They've implemented our NetVault FastRecover product, where data is backed up to disk at their remote sites. Then, the product automatically replicates its backups to the home office in London.

Then, for some of their more mission-critical data in the London data center, databases such as SQL Server and Oracle, they do real-time backup. So they're able to recover the data at any point in time, literally within seconds. We have 17 patents on this product, most of them around a feature we call Flash Restore, which allows you to get an application up and running in less than 30 seconds.

But the real-life example is pretty interesting, in that one of their remote offices is in Tokyo. If you go back to March 11, 2011, when the magnitude-9.0 earthquake and tsunami happened, they lost power. They had damage to some of their server racks.

Since they were replicating to London and those backups were done locally in Tokyo, they actually got their employees up and running using Terminal Server, which enabled the Tokyo employees to connect to the applications that had been recovered in London, because they had copies of those backups. So there was no disruption to their business.

Second problem

And, as luck would have it, two weeks later they had a problem at one of the other remote offices, where a server crashed, and they were able to bring up data remotely. Then they had another instance where they just had to recover data. Because it was so quick, end users didn’t even know that a disk drive had crashed.

So I think that's a really neat example of a customer who is exploiting today’s technology. This gets back to the discussion we had earlier about service levels, managing service levels in the business, and making sure there's no disruption to the business. If you're doing real-time trades in a stock-exchange type of environment, you can't suffer any outages, because there are not only the monetary problems, but you don’t want to be on the cover of ...

Gardner: Of course, there are regulation and compliance issues to consider.

Maxwell: Absolutely.

Gardner: We're getting towards the end of our time. Jerome, quickly, do you have any use cases or examples that you're familiar with that illustrate this concept of next-generation and lifecycle approach to data recovery that we have been discussing?

Wendt: Well, it's not an example so much as a general trend I'm seeing in products, because most of DCIG's focus is on analyzing the products themselves, comparing them, and identifying broader trends across those products.

There are two things we're seeing. One, we're struggling to keep calling backup software "backup software," because it does so much more than that. You mentioned earlier how much more intelligence is in these products. We call it backup software, because that's the context in which everyone understands it, but going forward, the industry is probably going to have to find a better way to refer to these products. Quest's product, for example, does a whole lot more than just run a backup.

And second, people, as they view backup and how they manage their infrastructure, really have to move away from the reactive mindset of, "Okay, today I'm going to have to troubleshoot 15 backup jobs that failed overnight." Those days are over. And if they're not over, you need to be looking for new products that will get you over that hump, because you should no longer be troubleshooting failed backup jobs.

You should really be looking more toward how you can make sure your whole environment is protected and recoverable, and moving to the next phase of disaster-recovery and business-continuity planning. The products are there. They're mature, and people should be moving down that path.

Gardner: Jerome, we mentioned at the outset, mobile and the desire to deliver more data and applications to edge devices, and of course cloud was mentioned. People are going to be looking to take advantage of cloud efficiencies internally, but then also look to mixed-sourcing opportunities, hybrid-computing opportunities, different apps from different places, and the data lifecycle and backup that needs to be part and parcel with that.

We also mentioned the fact that big data is more important and that the timeframe of getting mission-critical data to the right people is shortening all the time. This all pulls together, for me, this notion that in the future you're not going to be able to do this any other way. This is not a luxury, but a necessity. Is that fair, Jerome?

Wendt: Yes, it is. That’s a fair assessment.

Crystal ball

Gardner: John, the same question to you basically. When we look into the crystal ball, even not that far out, it just seems that in order to manage what you need to do as a business, getting good control over your data and being able to ensure that it's going to be available anytime, anywhere, regardless of the circumstances is, again, not a luxury, not a nice-to-have. It's really just going to support the viability of the business.

Maxwell: Absolutely. And what’s going to make it even more complex is going to be the cloud, because what's your control, as a business, over data that is hosted some place else?

I know that at Quest we use seven SaaS-based applications from various vendors, but what's our guarantee that our data is protected there? I can tell you that a lot of these SaaS or hosting companies may offer an environment that says, "We're always up," or "We have a high level of availability," but most recoveries are driven by logical corruption of data, which availability alone doesn't protect against.

As I said, with some of these smaller vendors, you wonder what happens if they go out of business. I've heard stories of small service providers closing their doors, and customers saying, "But my data is there."

So the cloud is really exciting, in that we're looking at how we're going to protect assets that may be off-premise to your environment and how we can ensure that you can recover that data, in case that provider is not available.

Then there's something that Jerome touched upon, which is that the cloud is going to offer so many opportunities. The one I'm most excited about is using the cloud for failover. That's really getting beyond recovery into business continuity.

And something that has only been afforded by the largest enterprises, Global 1000-type customers, is the ability to have a standby recovery center, through a SunGard or someone like that, which is very costly and not within reach of most customers. But with virtualization and with the cloud, there's a concept that I think we're going to see become very mainstream over the next five years, which is failover recovery to the cloud. That's something that's going to be within reach of even SMB customers, and that's really more of a business-continuity message.

So now we're stepping up even more. We're now saying, "Not only can we recover your data within seconds, but we can get your business back up and running, from an IT perspective, faster than you probably ever presumed that you could."

Gardner: That sounds like a good topic for another day. I am afraid we are going to have to leave it there.

You've been listening to a sponsored BriefingsDirect podcast discussion on the value around next-generation, integrated and simplified approaches to fast backup and recovery. We have seen how a comprehensive approach to data recovery bridges legacy and new data, scales to handle big data, and provides automation and governance across the essential functions of backup, protection, and disaster recovery.

I'd like to thank our guests. We've been joined by John Maxwell, the Vice President of Product Management for Data Protection at Quest Software. Thanks so much, John.

Maxwell: Thank you.

Gardner: We've also been joined by Jerome Wendt, President and Lead Analyst at DCIG, an independent storage analyst and consulting firm. Thanks so much, Jerome.

Wendt: Thank you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again to you, our audience, for listening, and come back next time.

Get the free data protection and recovery white paper.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Quest Software.

Transcript of a sponsored BriefingsDirect podcast on how data-recovery products can provide quicker access to data and analysis. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.
