
Tuesday, April 09, 2013

Agnostic Tool Chain Approach Proves Key to Fixing Broken State of Data and Information Management

Transcript of a BriefingsDirect podcast on how Dell Software is working with companies to manage internal and external data in all its forms.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Dell Software.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on better understanding the biggest challenges businesses need to solve when it comes to data and information management.

We'll examine how a data dichotomy has changed the face of information management. This dichotomy means that organizations, both large and small, not only need to manage all of their internal data that provides intelligence about their businesses, but they also need to manage the reams of increasingly external big data that enables them to discover new customers and drive new revenue.

Lastly, our discussion will focus on bringing new levels of automation and precision to the task of solving data complexity by embracing an agnostic, end-to-end tool chain approach to overall data and information management.

Here now to share his insights on where the information management market has been and where it's going, we're joined by Matt Wolken, Executive Director and General Manager for Information Management at Dell Software. Welcome, Matt. [Disclosure: Dell Software is a sponsor of BriefingsDirect podcasts.]

Matt Wolken: Dana, thanks for having me. I appreciate it.

Gardner: From your perspective, what are the biggest challenges that businesses need to solve now when it comes to data and information management? What are the big hurdles that they're facing?

Wolken: It's an interesting question. When we look at customers today, we're noticing how their environments have significantly changed from maybe 10 or 15 years ago.

About 10 or 15 years ago, the problem was that data was sitting in individual databases around the company -- either in a database on the backside of an application, such as the customer relationship management (CRM) or enterprise resource planning (ERP) application, or in data marts around the company. The challenge was how to bring all this together to create a single, cohesive view of the company.

That was yesterday's problem, and the answer was technology. The technology was a single, large data warehouse. All of the data was moved to it, and you then queried that larger data warehouse where all of the data was for a complete answer about your company.

What we're seeing now is that there are many complexities that have been added to that situation over time. We have different vendor silos with different technologies in them. We have different data types, as the technology industry overall has learned to capture new and different types of data -- textual data, semi-structured data, and unstructured data -- all in addition to the already existing relational data. Now, you have this proliferation of other data types and therefore other databases.

The other thing that we notice is that a lot of data isn't on premise any more. It's not even owned by the company. It's at your software-as-a-service (SaaS) provider for CRM, your SaaS provider for ERP, or your travel or human resources (HR) provider. So data again becomes siloed, not only by vendor and data type, but also by location. This is the complexity of today, as we notice it.

Cohesive view

All of this data is spread about, and the challenge becomes how do you understand and otherwise consume that data or create a cohesive view of your company? Then there is still the additional social data in the form of Twitter or Facebook information that you wouldn't have had in prior years. And it's that environment, and the complexity that comes with it, that we really would like to help customers solve.

Gardner: When it comes to this so-called data dichotomy, is it oversimplified to say it's internal and external, or is there perhaps a better way to categorize these larger sets that organizations need to deal with?

Wolken: There's been a critical change in the way companies go about using data, and you brought it out a little bit in the intro. There are some people who want to use data for an outcome-based result. This is generally what I would call the line-of-business concern, where the challenge with data is how do I derive more revenue out of the data source that I am looking at?

What's the business benefit for me examining this data? Is there a new segment I can codify and therefore market to? Is there a campaign that's currently running that is not getting a good response rate, and if so, do I want to switch to another campaign or otherwise improve it midstream to drive more real value in terms of revenue to the company?

That’s the more modern aspect of it. All of the prior activities inside business intelligence (BI) -- let’s flip those words around and say intelligence about the business -- were really internally focused. How do I get sanctioned data off of approved systems to understand the official company point of view in terms of operations?

That second goal is not a bad goal. That's still a goal that's needed, and IT is still required to create that sanctioned data, that master data, and the approved, official sources of data. But there is this other piece of data, this other outcome that's being wanted by the line of business, which is: how do I go out and use data to derive a better outcome for my business? That's more revenue-oriented, whereas the internal side is oriented around cost and operations.

So where you get executive dashboards for internal consumption off of BI or intelligence for the business, the business units themselves are about visualization, exploration, and understanding and driving new insights.

It's a change in both focus and direction. It sometimes ends up in a conflict between the groups, but it doesn't really have to be that way. At least, we don't think it does. That's something that we try to help people through: how do you get the sanctioned data you need, but also bring in this third-party data and unstructured data to add nuance to what you're seeing about your company?

Gardner: Just as 10 or 15 years ago the problem to solve was the silos of data within the organization, is there any way in traditional technology offerings that allows this dichotomy to be joined now, or do we need a different way in which to create insights, using both that internal and external type of information?

Wolken: There are certainly ways to get to anything. But if you're still amending program after program or technology after technology, you end up with something less than the best path, and there might be new and better ways of doing things.

Agnostic tool chain

There are lots of ways to take a data warehouse forward in today's environment -- manipulate other forms of data so they can enter a relational data warehouse, or go the other way and put everything into an unstructured environment. But there's also another way to approach things, and that's with an agnostic tool chain.

Tools have existed in the traditional sense for a long time. Generally, a tool is utilized to hide complexity and all of the issues underneath the tool itself. The tool has intelligence to comprehend all of the challenges below it, but it really abstracts that from the user.

We think that instead of buying three or four database types -- a structured database, something that can handle text, a solution that handles semi-structured or unstructured data, or even a high-performance analytical engine for that matter -- what if the tool chain abstracts much of that complexity? This means the tools that you use every day can comprehend any database type, any data structure type, and any vendor changes or nuances between platforms.

That's the strategy we’re pursuing at Dell. We’re defining a set of tools, not the underlying technologies or proliferation of technologies, but the tools themselves, so that the day-to-day operations are hidden from the complexity of those underlying sources of vendor, data type, and location.

That's how we really came at it -- from a tool-chain perspective, as opposed to deploying additional technologies. We’re looking to enable customers to leverage those technologies for a smoother, more efficient, and more effective operation.

Gardner: Am I right then in understanding that this is at more of a meta level, above the underlying technologies, but that, in a sense, makes the whole greater than the sum of the parts of those technologies?

Wolken: That’s a fair way of looking at it. Let's just take data integration as a point. I can sometimes go after certain siloed data integration products. I can go after a data product that goes after cloud resources. I can get a data product that only goes after relational. I can get another data product to extract or load into Hive or Hadoop. But what if I had one that could do all of that? Rather than buying separate ones for the separate use cases, what if you just had one?

Metadata, in one sense, is a descriptor language. Can I just see and describe everything below it, or can I actually manipulate it as well? In that sense, it's a real tool to manipulate and effect change in the environment.
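
To make the idea concrete, here is a minimal sketch of the agnostic-integration pattern described above -- one extract interface with adapters for relational, SaaS, and Hive/Hadoop silos. The class names and in-memory adapters are hypothetical illustrations, not Dell product code.

```python
# A minimal sketch (not Dell's actual product code) of the "one agnostic
# integration tool" idea: a single extract interface with adapters for
# different silos -- relational, SaaS/cloud, and Hadoop/Hive. All class and
# source names here are hypothetical illustrations.
from abc import ABC, abstractmethod
from typing import Iterable


class SourceAdapter(ABC):
    """Hides the vendor/location/data-type details behind one interface."""
    @abstractmethod
    def extract(self, entity: str) -> Iterable[dict]:
        ...


class RelationalAdapter(SourceAdapter):
    def extract(self, entity: str) -> Iterable[dict]:
        # In a real tool this would issue SQL against an on-premise database.
        return [{"source": "relational", "entity": entity, "row": 1}]


class SaaSAdapter(SourceAdapter):
    def extract(self, entity: str) -> Iterable[dict]:
        # In a real tool this would call the SaaS provider's REST API.
        return [{"source": "saas_crm", "entity": entity, "row": 1}]


class HiveAdapter(SourceAdapter):
    def extract(self, entity: str) -> Iterable[dict]:
        # In a real tool this would run a HiveQL query or read HDFS files.
        return [{"source": "hive", "entity": entity, "row": 1}]


def integrate(adapters: list[SourceAdapter], entity: str) -> list[dict]:
    """One call that reaches across every silo; the caller never sees which
    vendor, data type, or location the rows came from."""
    rows: list[dict] = []
    for adapter in adapters:
        rows.extend(adapter.extract(entity))
    return rows


if __name__ == "__main__":
    all_customers = integrate(
        [RelationalAdapter(), SaaSAdapter(), HiveAdapter()], entity="customer")
    print(all_customers)
```

The point of the sketch is the design choice: the caller asks for an entity once, and the adapters hide the vendor, location, and data-type differences underneath.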

Gardner: I'd like to go into more of the challenges, but before we do that, what are the stakes here? What do you get if you do this right -- if you can, in fact, manage across various technology types and formats, across relational and unstructured data, and across internal and external data sources and providers?

Are we talking iterative change, a step change, or something larger? And do we have examples of companies that, when they do this well, demonstrate something quite unique in terms of a new level of accomplishment?

Institutional knowledge

Wolken: There are a couple of ways we think about it, one of which is institutional knowledge. Previously, if you brought a new tool into your environment to examine a new database type, you would probably hire a person from the outside, because you needed to find that skill set already in the market in order to be productive on day one.

Instead of assigning somebody who already knows the organization, the data, and the functions of the business, you would probably hire that new person from the outside. That's essentially retooling your organization.

Or, if you switch vendors, that causes a shift as well. One primary vendor stack is probably the knowledge domain of one of your employees, and if you switch to another vendor stack, or require another vendor stack in your environment, you're probably going to have to retool yet again and find new resources. So that's one aspect of human knowledge and intelligence about the business.

There is a value to sharing. It's a lot harder to share across vendor environments and data environments if the tools can't bridge them. In that case, you have to have third-party ways to bridge those gaps between the tools. If you have sharing that occurs natively in the tool, then you don't have to cross that bridge, you don't have the delay, and you don't have the complexity to get there.

So there is a methodology within the way you run the environment and the way employees collaborate that is also accelerated. We also think that training is something that can benefit from this agnostic approach.

But also, generally, if you're using the same tools, then things like master data management (MDM) challenges become more comprehensible, since the tool chain understands where that master data is coming from, and so on.

You also codify how and where resources are shared. So if you have a person who has to provision data for an analyst, and they are using one tool to reach to relational data, another to reach into another type of data, or a third-party tool to reach into properties and SaaS environments, then you have an ineffective process.

You're reaching across domains and you're not as effective as you would be if you could do that all with one tool chain.

So those are some of the high-level ideas. That's why we think there's value there. If you go back to what would have existed maybe 10 or 15 years ago, you had one set of staff who used one set of tools against all relational data. It was a construct that worked well then. We just think it needs to be updated to account for the variety and nuance that have come to the fore as technology has progressed and brought about new types of databases.

Gardner: As for business benefits, we hear a lot about businesses being increasingly data driven and information driven, rather than a hunch, intuition, or gut instinct. Also, there's an ability to find new customers in much more cost-effective ways, taking advantage of the social networks, for example. So when you do this well, what are typically some of the business paybacks, and do they outweigh the cost more than previous investments in data would have?

Investment cycles

Wolken: It all depends on how you go about it. There are lots of stories about people who go on long investment cycles into some massive information-management strategy change without feeling like they got anything out of it, or at least that it was productive or paid back the investment.

There's a different strategy that we think can be more effective for organizations, which is to pursue smaller, bite-size chunks of objective action that you know will deliver some concrete benefit to the company. So rather than doing large schemes, start with smaller projects and pursue them one at a time incrementally -- projects that last a week and then you have 52 projects that you know derive a certain value in a given time period.

Other things we encourage organizations to do deal directly with how you can use data to increase competitiveness. For starters, can you see nuances in the data? Is there a tool that gives you the capability to see something you couldn't see before? So that's more of an analytical or discovery capability.

There's also the capability to simply manage a given data type. If I can see the data and operate on it, I can take advantage of it.

Another thing to think about is what I would call a feedback mechanism, or the time or duration of observation to action. In this case, I'll talk about social sentiment for a moment. If you can create systems that can listen to how your brand is being talked about, how your product is being talked about in the environment of social commentary, then the feedback that you're getting can occur in real time, as the comments are being posted.

Now, you might think you'd get that feedback anyway -- a letter from a customer would arrive through the postal system two weeks from now with the same message. That's true, but sometimes those two weeks can be a real benefit.

Imagine a marketing campaign that's currently running in the East, with a companion program in the West that's slightly different. Let's say it's a two-week program. It would be nice if, during the first week, you could be listening to social media and find out that the campaign in the West is not performing as well as the one in the East, and then change your investment thesis around the program -- cancel the one that's not performing well and double down on the one that's performing well.

There's a feedback mechanism increase that also can then benefit from handling data in a modern way or using more modern resources to get that feedback. When I say modern resources, generally that's pointing towards unstructured data types or textual data types. Again, if you can comprehend and understand those within your overall information management status, you now also have a feedback mechanism that should increase your responsiveness and therefore make your business more competitive as well.
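
As a concrete illustration of that feedback loop, here is a minimal, hypothetical sketch that scores social comments for two regional campaigns and flags the underperformer mid-flight. The keyword lists and threshold are illustrative assumptions, not a real sentiment model or any Dell product.

```python
# A crude, hypothetical sketch of the mid-campaign feedback loop described
# above: score incoming social comments for each regional campaign and flag
# the underperformer before the two-week run finishes.
POSITIVE = {"love", "great", "awesome", "works"}
NEGATIVE = {"hate", "broken", "awful", "ignore"}


def sentiment_score(comments):
    """Crude lexicon score in [-1, 1]: (positives - negatives) / total."""
    pos = sum(any(w in c.lower() for w in POSITIVE) for c in comments)
    neg = sum(any(w in c.lower() for w in NEGATIVE) for c in comments)
    return (pos - neg) / max(len(comments), 1)


def compare_campaigns(streams, margin=0.2):
    """Return a reallocation decision once one campaign trails by `margin`."""
    scores = {region: sentiment_score(comments)
              for region, comments in streams.items()}
    best = max(scores, key=scores.get)
    worst = min(scores, key=scores.get)
    if scores[best] - scores[worst] >= margin:
        return f"Shift budget from {worst} to {best}", scores
    return "Keep both campaigns running", scores


if __name__ == "__main__":
    streams = {
        "East": ["Love the new offer", "This works for me", "Great deal"],
        "West": ["I hate these ads", "Awful targeting", "Just ignore it"],
    }
    decision, scores = compare_campaigns(streams)
    print(scores, "->", decision)
```

The value is in the timing: the decision to double down on one campaign and cancel the other can happen in the first week rather than after the letters arrive.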

Gardner: The whole concept of immediate feedback, applied across various aspects of business -- planning, production, marketing, go-to-market, research, and end uses -- has been the Holy Grail of business for a long time. It's just been very difficult to do. Now, we seem to be getting closer to the ability to do it at scale and at reasonable cost. So these are very interesting times.

Now, given that these payoffs could be so substantial, what's preventing people from getting to this Holy Grail? What's between them and the realization?

It's the complexity

Wolken: I think it's complexity of the environment. If you only had relational systems inside your company previously, now you have to go out and understand all of the various systems you can buy, qualify those systems, get pure feedback, have some proofs of concept (POCs) in development, come in and set all these systems up, and that just takes a little bit of time. So the more complexity you invite into your environment, the more challenges you have to deal with.

After that, you have to operate and run it every day. That's the part where we think the tool chain can help. But as far as understanding the environment, having someone who can help you walk through the choices and solutions and come up with one that is best suited to your needs, that’s where we think we can come in as a vendor and add lots of value.

When we go in as a vendor, we look at the customer environment as it was, compare that to what it is today, and work to figure out where the best areas of collaboration can be, where tools can add the most value, and then figure out how and where can we add the most benefit to the user.

What systems are effective? What systems collaborate well? That's something that we have tried to emulate, at least in the tool space. How do you get to an answer? How do you drive there? Those are the questions we're focused on helping customers answer.

For example, if you've never had a data warehouse before, and you are in that stage, then creating your first one is kind of daunting, both from a price perspective, as well as complexity perspective or know-how. The same thing can occur on really any aspect -- textual data, unstructured data, or social sentiment.

Each one of those can appear daunting if you don't have a skill set, or don't have somebody walking you through that process who has done it before. Otherwise, it's trying to put your hands on every bit of data and consume what you can and learning through that process.

Those are some of the things that are really challenging, especially if you're a smaller firm with a limited staff, and there's this new demand from the line of business, because they want to go off in a different direction and gain understanding that they couldn't get out of existing systems.

How do you go out and attain that knowledge without duplicating the team, finding new vendor tools, adding complexity to your environment, and maybe even adding additional data sources, and therefore more data-storage requirements? Those are some of the major challenges -- complexity, cost, knowledge, and know-how.

Gardner: It's interesting that you mentioned mid-market organizations. Some of these infrastructure and data investments were perhaps completely out of their reach until new ways to approach the problems emerged -- through the tool chain, through cloud, and through other services and on-demand offerings.

What is it now about the new approach to these problems that you think allows the fruits of this to be distributed more down market? Why are mid-market organizations now more able to avail themselves of some of these values and benefits than in the past?

Mid-market skills

Wolken: As the products are well-known, there is more trained staff that understands the more common technologies. There are more codified ways of doing things that a business can take advantage of, because there's a large skill set, and most of the employees may already have that skill set as you bring them into the company.

There are also some advantages just in the way technologies have advanced over the years. Storage used to be very expensive, and then it got a little cheaper. Then solid-state drives (SSD) came along and then that got cheaper as well. There are some price point advantages in the coming years, as well.

Dell overall has maintained the approach that we started with when Michael Dell began building PCs in his dorm room from standard components to bring the price down. That model of making technology attainable to larger numbers of people has continued throughout Dell’s history, and we’re continuing it now with our information management software business.

We’re constantly thinking about how we can reduce cost and complexity for our customers. One example would be what we call Quickstart Data Warehouse. It was designed to democratize the data warehouse -- to bring the price and complexity down to a much lower point, so that more people can afford and stand up their first data warehouse.

We worked with our partner Microsoft, as well as Dell’s own engineering team, and we qualified the box, the hardware, and the systems to work at peak performance. Then we scripted an upfront install mechanism that allows the process to be up and running in 45 minutes, with little more than assigning a couple of IP addresses. You plug the box in, and it comes up in 45 minutes, without you having to know how to stand up, integrate, and qualify hardware and software together for an outcome we call a data warehouse.

Another thing we did was include Boomi, a connector that automatically goes out and connects to the data sources that you have. It's the mechanism by which you bring data into the warehouse. And lastly, we included services, in case there were any other questions or problems you had setting it up.

If you have a limited staff, and if you have to go out and qualify new resources and things you don't understand, and then set them up and then actually run them, that’s a major challenge. We're trying to hit all of the steps, and the associated costs -- time and/or personnel costs -- and remove them as much as we can.

It's one way vendors like Dell are moving to democratize business intelligence a little further -- bringing it to a lower price point than customers are accustomed to and making it more available to firms that either didn’t have the luxury of that expertise sitting around the office, or found that the price point was a little too high.

Gardner: You mentioned this concept of the tool chain several times. I'd like to hear a bit more about why that approach works, and more detail about what I understand to be important elements of it -- being agnostic to the data type, holistic management, a complete view, and then, of course, integration.

In addition to the package, it sounds from your earlier comments that you want to be able to approach these daunting issues iteratively, so that you can bite off certain chunks. What is it about the tool chain that delivers comprehensive value, but also allows it to be adopted along a fairly manageable path, rather than all at once?

Wolken: One of the things we find advantageous about entering the market at this point in time is that we're able to look at history, observe how other people have done things over time, and then invest in the market with the realization that maybe something has changed here and maybe a new approach is needed.

Different point of view

Whereas the industry has typically gone down the path of each new technology or advancement of technology requires a new tool, a new product, or a new technology solution, we’ve been able to stand back and see the need for a different approach. We just have a different point of view, which is that an agnostic tool chain can enable organizations to do more.

So when we look at database tools, as an example, we would want a tool that works against all database types, as opposed to one that works against only a single vendor or type of data.

The other thing that we look at is that if you walk into an average company today, there are already a lot of things lying around the business. A lot of investment has already been made.

We wanted to be able to snap in and work with all of the existing tools. So, each of the tools that we’ve acquired, or have created inside the company, were made to step into an existing environment, recognize that there were other products already in the environment, and recognize that they probably came from a different vendor or work on a different data type.

That’s core to our strategy. We recognize that people were already facing complexity before we even came into the picture, so we’re focused on figuring out how we snap into what they already have in place, as opposed to a rip-and-replace strategy or a platform strategy that requires all of the components to be replaced or removed in order for the new platform to take its place.

What that means is tools should be agnostic, and they should be able to snap into an environment and work with other tools. Each one of the products in the tool chain we’ve assembled was designed from that point of view.

But beyond that, we’ve also assembled a tool chain in which the entirety of the chain delivers value as a whole. We think that every point where you have agnosticism or every point where you have a tool that can abstract that lower amount of complexity, you have savings.

You have a benefit, whether it’s cost savings, employee productivity, or efficiency, or the ability to keep sanctioned data and a set of tools and systems that comprehend it. The idea being that the entirety of the tool chain provides you with advantages above and beyond what the individual components bring.

Now, we're perfectly happy to help a customer at any point where they have difficulty and any point where our tools can help them, whether it's at the hardware layer, in the traditional Dell way, at the application layer, considering a data warehouse or otherwise, or at the tool layer. But we feel that as more and more of the portfolio -- the tool chain -- is consumed, more and more efficiency is enabled.

Gardner: It sounds as if rather than look at the ecosystem that’s in place in an organization as a detriment, you're trying to make that into an asset, and then even looking further to new products available to bring that in. So I guess partnering becomes important.

Already-made investment

Wolken: Everything is an already-made investment in the company. If the premise from the get-go is to rip and replace, then you're really removing the institutional knowledge, the training of the staff, and the investment in the product, not to mention the integration work. That's not something we wanted to start out with. We wanted to recognize and leverage what was there and provide value to that already existing environment.

One of the core values that we were looking at from a design point of view is how you fit into an environment and how you add value to it, not how you cause replacement or destruction of an existing environment in order to provide benefit.

Gardner: We have been talking about the tool chain in terms of its value for analytics and intelligence about the business and bringing in more types of data and information from external sources.

It also sounds to me as if this sets you up for lifecycle benefits, not just on the business side, but also on the IT side, for things like better backup and recovery, a better disaster recovery strategy, and perhaps more storage efficiency. Is there an intramural benefit from the IT side to doing this in the fashion you have been describing as well?

Wolken: We looked at the strategy and said if you manage this as a data lifecycle, and that’s really what we think about it as, then where does data first show up in a company? That’s inside of a database on the backside of an application most likely.

And where is it last used inside of a company? That would generally be just before retirement or long-term retention of the data. Then the question becomes how do you manipulate and otherwise utilize the data for the maximum benefit in the middle?

When we looked at that, one of the problems that you uncover is that there's a lot of data being replicated in a lot of places. One of the advantages that we've put together in the tool chain was to use virtualization as a capability, because you know where data came from and you know that it was sanctioned data. There's no reason to replicate that to disk in another location in the company, if you can just reach into that data source and pull that forward for a data analyst to utilize.

You can virtually represent that data to the user, without creating a new repository for that person, so you're saving on storage and replication costs. If you're looking for where there's efficiency in the lifecycle of data and how you can cut some of those costs, that's something that jumps right out.

Doing that, you also solve the problem of how to make sure that the data that was provisioned was sanctioned. By doing all of these things, by creating a virtual view, then providing that view back to the analyst, you're really solving multiple pieces of the puzzle at the same time. It really enables you to look at it from an information-management point of view.
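
Here is a minimal sketch of that data-virtualization idea: a thin virtual view that pulls rows from the sanctioned source at read time instead of replicating them into a new repository. The classes are hypothetical stand-ins, not Dell's implementation.

```python
# A minimal sketch of data virtualization: instead of replicating sanctioned
# data into yet another repository for the analyst, a thin virtual view pulls
# rows from the system of record on demand. SanctionedSource and its query
# method are hypothetical stand-ins for whatever database or SaaS API
# actually holds the data.
class SanctionedSource:
    """Stand-in for the approved system of record (e.g., the CRM database)."""
    def __init__(self, rows):
        self._rows = rows

    def query(self, predicate):
        return [r for r in self._rows if predicate(r)]


class VirtualView:
    """Presents source data to the analyst without materializing a copy."""
    def __init__(self, source: SanctionedSource, columns):
        self.source = source
        self.columns = columns

    def fetch(self, predicate=lambda r: True):
        # No new repository, no replication: rows are projected at read time,
        # so the analyst always sees the sanctioned data, not a stale copy.
        return [{c: r[c] for c in self.columns}
                for r in self.source.query(predicate)]


if __name__ == "__main__":
    crm = SanctionedSource([
        {"customer": "Acme", "region": "East", "revenue": 120_000},
        {"customer": "Globex", "region": "West", "revenue": 95_000},
    ])
    view = VirtualView(crm, columns=["customer", "revenue"])
    print(view.fetch(lambda r: r["region"] == "East"))
```

Because the view reads from the system of record, the provisioning question (is this data sanctioned?) and the storage question (where is the extra copy?) are answered at the same time.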

Gardner: That's interesting, because you can not only get better business outcome benefits and analytics benefits, but you can simplify and reduce your total cost of ownership from the IT perspective. That's kind of another Holy Grail out there, to be able to do more with less.

One of the advantages

Wolken: That's what we think one of the advantages can be. Certainly, when you have the advantage of standing on the shoulders of people who have come before you and can look at how the environment has changed, you can notice some of these changes and bring improvements forward. That's what we want to do with IT as partners and with the solution that we bring forward.

Gardner: How should enterprises and mid-market firms get started? Are there proven initiation points, methods, or cultural considerations for moving from traditional siloed platforms, integrating them along the way, toward this more comprehensive tool-chain approach?

Wolken: There are different ways you can think about it. Generally, most companies aren’t just out there asking how they can get a new tool chain. That's not really the strategy most people are thinking about. What they are asking is how do I get to the next stage of being an intelligent company? How do I improve my maturity in business intelligence? How would I get from Excel spreadsheets without a data warehouse to a data warehouse and centralized intelligence or sanctioned data?

Each one of these challenges comes from a point of view of: how do I improve my environment based upon the goals and needs that I am facing? How do I grow up as a company and become more of a data-based company?

Somebody else might be faced with more specific challenges, such as a line of business now asking for Twitter data when we have no systems in place to comprehend it. That's really the point where you ask, what's going to be my strategy as I grow and otherwise improve my business intelligence environment, which is morphing every year for most customers?

That's the way that most people would start, with an existing problem and an objective or a goal inside the company. Generically, over time, the approach to answering it has been you buy a new technology from a new vendor who has a new silo, and you create a new data mart or data warehouse. But this is perpetuating the idea that technology will solve the problem. You end up with more technologies, more vendor tools, more staff, and more replicated data. We think this approach has become dated and inefficient.

But if, as an organization, you can comprehend that maybe there is some complexity that can be removed, while you're making an investment, then you free yourself to start thinking about how you can build a new architecture along the way. It's about incremental improvement as well as tangible improvement for each and every step of the information management process.

So rather than asking somebody to re-architect and rip and replace their tool chain or the way they manage the information lifecycle, I would say you sort of lean into it in a way.

If you're really after a performance metric and you feel like there is a performance issue in an environment, at Dell we have a number of resources that actually benchmark and understand the performance and where bottlenecks are in systems.

So we can look at either application performance management issues, where we understand the application layer, or we have a very deep and qualified set of systems around databases and data warehouse performance to understand where bottlenecks are either in SQL language or elsewhere. There are a number of tools that we have to help identify where a bottleneck or issue might be from just a pure performance perspective as well.

Strategic position

Gardner: That might be a really good place to start -- just to learn where your performance issues are and then stake out your strategic position based on a payback for improving on your current infrastructure, but then setting the stage for new capabilities altogether.

Wolken: Sometimes there’s an issue occurring inside the database environment. Sometimes it's at the integration layer, because integration isn’t happening as well as you think. Sometimes it's at the data warehouse layer, because of the way the data model was set up. Whatever the case, we think there is value in understanding the earlier parts of the chain, because if they’re not performing well, the latter parts of the chain can’t perform either.

And so at each step, we've looked at how you ensure the performance of the data. How do you ensure the performance of the integration environment? How do you ensure the performance of the data warehouse as well? We think that if each component of the tool chain is working as well as it should be, then you enable the entirety of your solution implementation to truly deliver value.

Gardner: Great. I'm afraid we'll have to leave it there. We're about out of time. You've been listening to a sponsored BriefingsDirect podcast discussion on better understanding the challenges businesses need to solve when it comes to improved data and information management.

And we have seen how organizations not only need to manage all of their internal data that provides intelligence about their businesses, but also, increasingly, the reams of external data that enable them to take on whole new business activities, like discovering additional customers and driving new and additional revenue.

And we've learned more about how new levels of automation and precision can be applied to the task of solving data complexity, and how that can be done through an agnostic, end-to-end tool chain approach.

I want to thank our guest. We have been here with Matt Wolken, Executive Director and General Manager for Information Management Software at Dell Software. Thanks so much, Matt.

Wolken: Thank you so much as well.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again to our audience for joining us, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Dell Software.

Transcript of a BriefingsDirect podcast on how Dell Software is working with companies to manage internal and external data in all its forms. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.


Wednesday, June 06, 2012

Data Explosion and Big Data Demand New Strategies for Data Management, Backup and Recovery, Say Experts

Transcript of a sponsored BriefingsDirect podcast on how data-recovery products can provide quicker access to data and analysis.

Get the free data protection and recovery white paper.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Quest Software.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on why businesses need a better approach to their data recovery capabilities. We'll examine how major trends like virtualization, big data, and calls for comprehensive and automated data management, are driving the need for change.

The current landscape for data management, backup, and disaster recovery (DR) too often ignores the transition from physical to virtualized environments and sidesteps the heightened real-time role that data now plays in the enterprise.

What's needed are next-generation, integrated, and simplified approaches to fast backup and recovery that span all essential corporate data. The solution therefore means bridging legacy and new data, scaling to handle big data, implementing automation and governance, and integrating the functions of backup, protection, and DR.

The payoffs come in the form of quicker access to needed data and analytics, highly protected data across its lifecycle, ease in DR, and overall improved control and management of key assets, especially by non-specialized IT administrators.

To share insights into why data recovery needs a new approach and how that can be accomplished, we're joined by two experts, first John Maxwell, Vice President of Product Management for Data Protection at Quest Software. Welcome to the show, John. [Disclosure: Quest Software is a sponsor of BriefingsDirect podcasts.]

John Maxwell: Thank you. Glad to be here.

Gardner: We're also here with Jerome Wendt, President and Lead Analyst of DCIG, an independent storage analyst and consulting firm. Welcome, Jerome.

Jerome Wendt: Thank you, Dana. It's a pleasure to join the call.

Gardner: My first question to you, Jerome. I'm sensing a major shift in how companies view and value their data assets. Is data really a different thing than, say, five years ago in terms of how companies view it and value it?

Wendt: Absolutely. There's no doubt that companies are viewing it much more holistically. It used to be that data primarily in structured databases, or even in semi-structured formats such as email, was where all the focus was. Clearly, in the last few years, we've seen a huge change, where unstructured data is now the fastest-growing part of most enterprises and where a lot of their intellectual property is stored. So there is a huge push to protect and mine that data.

But we're also just seeing more of a push to get to edge devices. We talk a lot about PCs and laptops, and there is more of a push to protect data in that area, but all you have to do is look around and see the growth.

When you go to any tech conference, you see iPads everywhere, and people are storing more data in the cloud. That's going to have an impact on how people and organizations manage their data and what they do with it going forward.

Gardner: John Maxwell, it seems that not that long ago, data was viewed as a byproduct of business. Now, for more and more companies, data is the business, or at least the analytics that they derive from it. Has this been a sea change, from your perspective?

Mission critical

Maxwell: It’s funny that you mention that, because I've been in the storage business for over 15 years. I remember just 10 years ago, when studies would ask people what percentage of their data was mission critical, it was maybe around 10 percent. That aligns with what you're talking about, the shift and the importance of data.

Recent surveys from multiple analyst groups have now shown that people categorize their mission-critical data at 50 percent. That's pretty profound, in that a company is saying half the data that we have, we can't live without, and if we did lose it, we need it back in less than an hour, or maybe in minutes or seconds.

Gardner: So we have a situation where more data is considered important, they need it faster, and they can't do without it. It’s as if our dependency on data has become heightened and ever-increasing. Is that a fair characteristic, Jerome?

Wendt: Absolutely.

Gardner: So given the requirement of having access to data and it being more important all the time, we're also seeing a lot of shifting on the infrastructure side of things. There's much more movement toward virtualization and whole new approaches to storage aimed at reducing overall cost, reducing duplication, and that sort of thing. How are these shifts and changes in infrastructure impacting this simultaneous need for access and criticality? Let's start with you, John.

Maxwell: Well, the biggest change from an infrastructure standpoint has been the impact of virtualization. This year, well over 50 percent of all the server images in the world are virtualized images, which is just phenomenal.

Quest has really been in the forefront of this shift in infrastructure. We have been, for example, backing up virtual machines (VMs) for seven years with our Quest vRanger product. We've seen that evolve from when VMs or virtual infrastructure were used more for test and dev. Today, I've seen studies that show that the shops that are virtualized are running SQL Server, Microsoft Exchange, very mission-critical apps.

We have some customers at Quest that are 100 percent virtualized. These are large organizations, not just some mom and pop company. That shift to virtualization has really made companies assess how they manage it, what tools they use, and their approaches. Virtualization has a large impact on storage and how you backup, protect, and restore data.

Gardner: John, it sounds like you're saying that it's an issue of complexity, but from a lot of the folks I speak to, when they get through it at the end of their journey through virtualization, they find that there are a lot of virtuous benefits to be extended across the data lifecycle. Is it the case that this is not all bad news, when it comes to virtualization?

Maxwell: No. Once you implement and have the proper tools in place, your virtual life is going to be a lot easier than your physical one from an IT infrastructure perspective. A lot of people initially moved to virtualization as a cost savings, because they had under-utilization of hardware. But one of the benefits of virtualization is the freedom, the dynamics. You can create a new VM in seconds. But then, of course, that creates things like VM sprawl, the amount of data continues to grow, and the like.

At Quest we've adapted and exploited a lot of the features that exist in virtual environments, but don't exist in physical environments. It’s actually easier to protect and recover virtual environments than it is physical, if you have tools that are exploiting the APIs and the infrastructure that exists in that virtual environment.

Significant benefits

Gardner: Jerome, do you concur that, when you are through the journey, when you are doing this correctly, that a virtualized environment gives you significant benefits when it comes to managing data from a lifecycle perspective?

Wendt: Yes, I do. One of the things I've clearly seen is that it really makes it more of a business enabler. We talk a lot these days about having different silos of data. One application creates data that stays over here. Then, it's backed up separately. Then, another application or another group creates data back over here.

Virtualization not only means consolidation and cost savings, but it also facilitates a more holistic view into the environment and how data is managed. Organizations are finally able to get their arms around the data that they have.

Get the free data protection and recovery white paper from IDC.

Before, it was so distributed that they didn't really have a good sense of where it resided or how to even make sense of it. With virtualization, there are initial cost benefits that help bring it altogether, but once it's altogether, they're able to go to the next stage, and it becomes the business enabler at that point.

Gardner: I suppose the key now is to be able to manage, automate, and bring the comprehensive control and governance to this equation, not just the virtualized workloads, but also of course the data that they're creating and bringing back into business processes.

So what about that? What’s this other trend afoot? How do we move from sprawl to control and make this flip from being a complexity issue to a virtuous adoption and benefits issue? Let's start with you, John.

Maxwell: Over the years, people had very manual processes. For example, when you brought a new application online or added hardware, server, and that type of thing, you asked, "Oops, did we back it up? Are we backing that up?"

One thing that’s interesting in a virtual environment is that the backup software we have at Quest will automatically see when a new VM is created and start backing it up. So it doesn't matter if you have 20 or 200 or 2,000 VMs. We're going to make sure they're protected.
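
A minimal, generic sketch of that auto-protection behavior might look like the following: each inventory sweep compares the hypervisor's VM list against the backup policy and enrolls anything new. The in-memory sets stand in for whatever management API a real product such as vRanger would actually call; none of this is Quest code.

```python
# A hypothetical sketch of automatically protecting newly created VMs: a
# sweep compares the hypervisor's inventory against the backup policy and
# enrolls anything that isn't protected yet. A real product would call the
# hypervisor's management API (e.g., vSphere) rather than in-memory sets.
def sweep(hypervisor_inventory: set[str], protected_vms: set[str]) -> list[str]:
    """Return newly discovered VMs and mark them as protected."""
    new_vms = sorted(hypervisor_inventory - protected_vms)
    for vm in new_vms:
        protected_vms.add(vm)  # enroll in the backup policy
        print(f"Scheduling first backup for new VM: {vm}")
    return new_vms


if __name__ == "__main__":
    protected: set[str] = {"web-01", "db-01"}
    # Someone just cloned two new VMs; the next sweep picks them up automatically.
    inventory = {"web-01", "db-01", "web-02", "exchange-01"}
    sweep(inventory, protected)
```

Whether you have 20 or 2,000 VMs, the point is that nobody has to remember to ask, "Are we backing that up?"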

Where it really gets interesting is that you can protect the data a lot smarter than you can in a physical environment. I'll give you an example.

In a VMware environment, there are services that we can use to do a snapshot backup of a VM. In essence, it's an immediate backup of all the data associated with that machine or those machines. It could be on any generic kind of hardware. You don't need proprietary hardware or the more expensive software features of high-end disk arrays. It's a capability built into the hypervisor itself that we can exploit.

Image backup


Even the way that we move data is much more efficient, because we have a process that we pioneered at Quest called "backup once, restore many," where we create what's called an image backup. From that image backup I can restore an entire system, an individual file, or an application. But I've done that from that one path, that one very effective snapshot-based backup.

If you look at physical environments, there is the concept of doing physical machine backups, file-level backups, and specific application backups, and for some systems, you even have to employ hardware-based snapshots or actually bring the applications down.

So from that perspective, we've gotten much more sophisticated in virtual environments. Again, we're moving data by not impacting the applications themselves and not impacting the VMs. The way we move data is very fast and is very effective.
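
Here is a minimal sketch of the "backup once, restore many" idea: one cataloged image backup from which a whole system, a single file, or an application object can all be restored. The structure and helpers are illustrative assumptions, not an actual backup format.

```python
# A hypothetical sketch of "backup once, restore many": one image-level
# backup is cataloged so that a whole system, a single file, or an
# application object can all be restored from the same artifact.
from dataclasses import dataclass, field


@dataclass
class ImageBackup:
    vm_name: str
    blocks: bytes                                        # the snapshot-based image
    file_catalog: dict = field(default_factory=dict)     # path -> metadata
    app_catalog: dict = field(default_factory=dict)      # app object -> metadata


def restore_system(image: ImageBackup) -> str:
    return f"Restored entire VM {image.vm_name} from one image"


def restore_file(image: ImageBackup, path: str) -> str:
    if path not in image.file_catalog:
        raise KeyError(f"{path} not in catalog")
    return f"Restored file {path} from the same image"


def restore_app_object(image: ImageBackup, obj: str) -> str:
    if obj not in image.app_catalog:
        raise KeyError(f"{obj} not in catalog")
    return f"Restored application object {obj} from the same image"


if __name__ == "__main__":
    img = ImageBackup(
        vm_name="exchange-01",
        blocks=b"...snapshot...",
        file_catalog={"/etc/hosts": {"offset": 4096}},
        app_catalog={"mailbox:jsmith": {"store": "EDB01"}},
    )
    print(restore_system(img))
    print(restore_file(img, "/etc/hosts"))
    print(restore_app_object(img, "mailbox:jsmith"))
```

The contrast with the physical world is that there is only one backup pass; the granularity is decided at restore time, not at backup time.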

Gardner: Jerome, when we start to do these sorts of activities, whether we are backing up at very granular level or even thinking about mirroring entire data centers, how does governance, management, and automation come to play here? Is this something that couldn’t have been done in the physical domain?

Wendt: I don’t think it could have been done on the physical domain, at least not very easily. We do these buyer’s guides on a regular basis. So we have a chance to take in-depth looks at all these different backup software products on the market and how they're evolving.

One of the things we are really seeing, also to your point, is just a lot more intelligence going into this backup software. They're moving well beyond just “doing backups” any more. There's much more awareness of what data is included in these data repositories and how they're searched.

And also, with more integration with platforms like VMware vCenter, administrators can centrally manage backups, monitor backup jobs, and do recoveries. One person can do so much more than they could even a few years ago.

And really, the expectations of organizations are evolving; they don't necessarily want separate backup admins and system admins anymore. They want one team that manages their virtual infrastructure. That all rolls up to your point that it makes it easier to govern, manage, and execute on corporate objectives.

Gardner: I think it’s important to try to frame how this works in terms of total cost. If you're adding, as you say, more intelligence to the process, if you don’t have separate administrators for each function, and if you're able to provide a workflow approach to your data lifecycle, then you have fewer duplications, you're using less total storage, and you're able to support the requirements of the applications, and so on. Is this really a case, John Maxwell, where we're getting more and paying less?

Maxwell: Absolutely. Just as the cost per gigabyte has gone down over the past decade, the effectiveness of the software and what it can do is way beyond what we had 10 years ago.

Simplified process

Today, in a virtual environment, we can provide a solution that simplifies the process, where one person can ensure that hundreds of VMs are protected. They can literally right-click and restore a VM, a file, a directory, or an application.

One of the focuses we have had at Quest, as I alluded to earlier, is that there are a lot of mission-critical apps running on these machines. Jerome talked about email. A lot of people consider email one of their most mission-critical applications. And the person responsible for protecting the environment that Microsoft Exchange is running on may not be an Exchange administrator, but maybe they're tasked with being able to recover Exchange.

That’s why we've developed technologies that allow you to go out there and, from that one image backup, restore an email conversation or an attachment from someone’s mailbox. That person doesn’t have to be a guru with Exchange. Our job is to figure out, behind the scenes, how to do this and make it available via a couple of mouse clicks.

Gardner: So we're moving administration up a level, rather than going app by app, server by server. We're really looking at it as a function of what you want to do with that data. That strikes me as a big deal. Is that a whole new thing that we're doing with data, Jerome?

Wendt: Yes, it is. As John was speaking, I was going to comment. I spoke to a Quest customer just a few weeks ago. He clearly had some very specific technical skills, but he's responsible for a lot of things, a lot of different functions -- server admin, storage admin, backup admin.

I think a lot of individuals can relate to this guy. I know I certainly did, because that was my role for many years, when I was an administrator in the police department. You have to try to juggle everything, while you're trying to do your job, with backup just being one of those tasks.

In his particular case, he was called upon to do a recovery, and, to John’s point, it was an Exchange recovery. He never had any special training in Exchange recovery, but it just happened that he had Quest Software in place. He was able to use its FastRecover product to recover his Microsoft Exchange Server and had it back up and going in a few hours.

What was really amazing in this particular case is that he was traveling at the time it happened. So he had to talk his manager through the process, and they were able to get it up and going. Once the system was up, he was able to log on and get it going fairly quickly.

That just illustrates how much the world has changed and how much backup software and these products have evolved to the point where you need to understand your environment, probably more than you need to understand the product, and just find the right product for your environment. In this case, this individual clearly accomplished that.

Gardner: It sounds like you're moving more to be an architect than a carpenter, right?

Wendt: Exactly.

Gardner: So we understand that management is great and that oversight at that higher abstraction is going to get us a lot of benefits. But we mentioned earlier that some folks are at 20 percent virtualization, while others are at 90 percent. Some data is mission-critical, while other data doesn't require the same diligence, and that's going to vary from company to company.

Hybrid model

So my question to you, John Maxwell, is how do organizations approach being in a hybrid sort of model, between physical and virtual, recognizing that different apps have different criticality for their data, and that might change? How do we manage the change? How do we get from the old way of doing this to these newer benefits?

Maxwell: Well, there are two points. One, we can't have a bunch of niche tools, one for virtual, one for physical, and the like. That's why, with our vRanger product, which has been the market leader in virtual data protection for the past seven years, we're coming out with physical support in that product in the fall of 2012. Those customers are saying, "I want one product that handles that non-virtualized data."

The second part gets down to what percentage of your data is mission-critical and how complex it is, meaning is it email, or a database, or just a flat file, and then asking if these different types of data have specific service-level agreements (SLAs), and if you have products that can deliver on those SLAs.

That's why at Quest, we're really promoting a holistic approach to data protection that spans replication, continuous data protection, and more traditional backup, but backup mainly based on snapshots.

Then, that can map to the service level, to your business requirements. I just saw some data from an industry analyst showing that the replication software market is basically the same size now as the backup software market. That shows the desire people have for that kind of real-time failover for some applications, and you get that with replication.
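
A minimal, hypothetical sketch of that mapping might look like the following: the tighter the recovery point and recovery time objectives (RPO/RTO), the closer the workload moves from traditional snapshot backup toward CDP and replication. The thresholds are illustrative assumptions, not published guidance.

```python
# A hypothetical sketch of mapping data to protection tiers by SLA: tighter
# recovery point/time objectives push a workload from periodic backup toward
# snapshots, CDP, and replication. Thresholds are illustrative only.
def choose_protection(rpo_minutes: float, rto_minutes: float) -> str:
    """Pick a protection tier from recovery point/time objectives (minutes)."""
    if rpo_minutes < 1 and rto_minutes <= 5:
        return "synchronous replication with failover"
    if rpo_minutes <= 15:
        return "continuous data protection (CDP)"
    if rpo_minutes <= 24 * 60:
        return "snapshot-based image backup"
    return "periodic backup to low-cost retention"


if __name__ == "__main__":
    workloads = {
        "Exchange (mission critical)": (0.5, 5),
        "SQL Server reporting DB": (15, 60),
        "File shares": (12 * 60, 8 * 60),
    }
    for name, (rpo, rto) in workloads.items():
        print(f"{name}: {choose_protection(rpo, rto)}")
```

The design point is simply that the SLA drives the protection method, rather than one method being applied uniformly to all data.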

We can't have a bunch of niche tools, one for virtual, one for physical, and the like.



When it comes to the example that Jerome gave with that customer, the Quest product that we're using is NetVault FastRecover, which is a continuous data protection product. It backs up everything in real-time. So you can go back to any point in time.

It’s almost like a time machine, when it comes to putting back that mailbox, the SQL database, or Oracle database. Yet, it's masking a lot of the complexity. So the person restoring it may not be a DBA. They're going to be that jack of all trades who's responsible for the storage and maybe backup overall.
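
Here is a minimal sketch of that continuous-data-protection "time machine" idea: every change is journaled with a timestamp, so state can be rebuilt as of any point in time. The journal-and-replay model is a simplified illustration, not how NetVault FastRecover is actually implemented.

```python
# A simplified sketch of continuous data protection: every write is journaled
# with a timestamp, so state can be rebuilt as of any point in time. This is
# an illustration of the concept, not a real CDP engine.
from bisect import bisect_right


class ChangeJournal:
    def __init__(self):
        self._entries = []  # list of (timestamp, key, value), appended in order

    def record(self, ts: float, key: str, value: str) -> None:
        self._entries.append((ts, key, value))

    def restore_as_of(self, ts: float) -> dict:
        """Replay all changes up to (and including) the requested time."""
        idx = bisect_right([e[0] for e in self._entries], ts)
        state: dict = {}
        for _, key, value in self._entries[:idx]:
            state[key] = value
        return state


if __name__ == "__main__":
    journal = ChangeJournal()
    journal.record(100.0, "mailbox:jsmith", "v1")
    journal.record(200.0, "mailbox:jsmith", "v2 (corrupted)")
    # Roll back to just before the corruption hit at t=200.
    print(journal.restore_as_of(199.9))  # {'mailbox:jsmith': 'v1'}
```

The person doing the restore picks a point in time, not a backup tape; the complexity of replaying the journal stays hidden behind the tool.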

Gardner: Jerome, what are you seeing in the field? Are there folks who are saying, "Okay, the value here is so compelling, and we have such a mess, that we're going to bite the bullet and just go totally virtual in three to six months. And, at least for our mission-critical apps, we're going to move them over into this data-lifecycle approach for recovery, backup, and DR?"

Or are you seeing companies that are saying, "Well, this is a five-year plan; we're going to do this first, and we're going to kind of string it out"? Which of these seems to be in vogue at the moment? What works: biting the bullet, all or nothing, or the slow crawl-walk-run approach?

Wendt: It really depends on the size of the organization you're talking about. When I talk to small and medium-size businesses (SMBs), 500-1,000 employees or fewer, they may have 100 terabytes of storage and 200 servers. I see them just biting the bullet. They're doing the three- to six-month approach: make the conversion, do the complete switchover, and go virtual as much as possible.

Few legacy systems

Almost all of them have a few legacy systems. They may be running some application on Windows 2000 Server or some old version of AIX. Who knows what a lot of companies have running in the background? They can't just virtualize everything, but where they can, they get to a 98 percent virtualized environment.

When you start getting to enterprises, I see it a little bit different. It's more of a staged approach, because it just takes more coordination across the enterprise to make it all happen. There are a lot more logistics and planning going on.

I haven’t talked to too many that have taken five years to do it. It's mostly two to maybe four years at the outside range. But the move is to virtualize as much as possible, except for those legacy apps, which for some reason they can't tackle.

Gardner: John Maxwell, for those two classes of user, what does Quest suggest? Is there a path that you have for those who want to do it as rapidly as possible? And then is that metered approach also there in terms of how you support the journey?

Maxwell: It's funny that you mention the difference between SMB and the enterprise. I'm a firm believer that one size doesn’t fit all, which is why we have solutions for specific markets. We have solutions for the SMB along with enterprise solutions, but we do have a lot of commonality between the products. We're even developing for our SMB product a seamless upgrade path to our enterprise-class product.

Again, they're different markets, just as Jerome said. We found exactly what he just mentioned, which is that the smaller accounts tend to be more homogeneous and they tend to virtualize a lot more, whereas enterprises are more heterogeneous and may have a bigger mix of physical and virtual.

And they may have really more complex systems. That’s where you run into big data and more complex challenges, when it comes to how you can back data up and how you can recover it. And there are also different price points.

So our approach is to have solutions specific to the SMB and to the enterprise. There's a lot of cross-functionality in the products, but we're very crisp in our positioning, our go-to-market strategy, the price points, and the features, because one of the things you don't want to do with SMB customers is overwhelm them.

Get the free data protection and recovery IDC white paper.

I meet hundreds of customers a year, and one of our top customers has an exabyte of data. Jerome, I don't know if you talk to many customers that have an exabyte, but I don't really run into a lot of customers that have an exabyte of data. Their requirements are completely different from our average vRanger customer, who has around five terabytes of data.

We have products that are specific to the market segments, to the needs of that user, and to the price point. Yet it's one vendor, one throat to choke, and there are paths to upgrade if you need to.

Gardner: John, in talking with Quest folks, I've heard them refer to a next-generation platform or approach, or a whole greater than the sum of the parts. How do you define next generation when it comes to data recovery in your view of the world?

New benefits

Maxwell: Well, without hyperbole, for us, our next generation is a new platform that we call NetVault Extended Architecture (XA), and this is a way to provide several benefits to our customers.

One is that with NetVault Extended Architecture we now are delivering a single user experience across products. So this gets into SMB-versus-enterprise for a customer that’s using maybe one of our point solutions for application or database recovery, providing that consistent look and feel, consistent approach. We have some customers that use multiple products. So with this, they now have a single pane of glass.

Also, it's important to offer a consistent means for administering and managing the backup and recovery process, because, as we've been discussing, why should a person have to have multiple skill sets? If you have one view, one console into data protection, that's going to make your life a lot easier than having to learn a bunch of other types of solutions.

That’s the immediate benefit that I think people see. What NetVault Extended Architecture encompasses under the covers, though, is a really different approach in the industry, which is modularization of a lot of the components to backup and recovery and making them plug and play.

Let me give you an example. With the increase in virtualization a lot of people just equate virtualization with VMware. Well, we've got Hyper-V. We have initiatives from Red Hat. We have Xen, Oracle, and others. Jerome, I'm kind of curious about your views, but just as we saw in the 90s and in the 00s, with people having multiple platforms, whether it's Windows and Linux or Windows and Linux and, as you said, AIX, I believe we are going to start seeing multiple hypervisors.

So one of the approaches that NetVault Extended Architecture is going to bring us is a capability to offer a consistent approach to multiple hypervisors, meaning it could be a combination of VMware and Microsoft Hyper-V and maybe even KVM from Red Hat.

But, again, the administrator, the person who is managing the backup and recovery, doesn't have to know any one of those platforms. That's all hidden from them. In fact, if they want to restore data from one of those hypervisors, say restore a VMware VMDK, which is their volume format in VMware-speak, into what's called a VHD in Hyper-V, they could do that.
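
As a rough illustration of what a cross-hypervisor restore has to accomplish under the covers, the sketch below converts a VMware VMDK disk image into a Hyper-V VHD using the open-source qemu-img utility. This is only a stand-in for the general concept, not how NetVault Extended Architecture does it, and the paths shown are hypothetical:

```python
# Conceptual sketch only: cross-hypervisor restore ultimately means translating one
# hypervisor's disk format into another's. qemu-img is a generic open-source tool
# that can do a VMDK-to-VHD conversion; it is not Quest's mechanism.
import subprocess

def convert_vmdk_to_vhd(vmdk_path: str, vhd_path: str) -> None:
    """Convert a VMware VMDK image into a Hyper-V VHD (the "vpc" format in qemu-img)."""
    subprocess.run(
        ["qemu-img", "convert", "-f", "vmdk", "-O", "vpc", vmdk_path, vhd_path],
        check=True,
    )

# Hypothetical usage:
# convert_vmdk_to_vhd("/restores/app01.vmdk", "/restores/app01.vhd")
```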

That, to me, is really exciting, because this is exploiting these new platforms and environments and providing tools that simplify the process. But that's going to be just one of the many benefits of our new NetVault Extended Architecture next generation, where we can provide that singular experience for our customer base, get new solutions to market faster, and deliver them in a modular approach.

Customers can choose what they need, whether they're an SMB customer, or one of the largest customers that we have with hundreds of petabytes or exabytes of data.

Wendt: I'd like to elaborate on what John just said. I'm really glad to hear that's where Quest is going, John. I haven't had a chance to discuss this with you guys, but DCIG has a lot of conversations with managed-service providers, and you'd be surprised: there are actually very few that are VMware shops. I find the vast majority are actually either Microsoft Hyper-V or Red Hat Linux shops, because they're looking for a cost-effective way to deliver virtualization in their environments.

We've seen this huge growth in replication, and people want to implement disaster recovery plans or business continuity planning. I think this ability to recover across different hypervisors is going to become absolutely critical, maybe not today or tomorrow, but I would say in the next few years. People are going to say, "Okay, now that we've got our environment virtualized, we can recover locally, but how about recovering into the cloud or with a cloud service provider? What options do we have there?"

More choice

If they're using VMware and their provider isn't, they're almost forced to find a provider that is, whereas your platform gives them much more choice among managed service providers that are using platforms other than VMware. It sounds like Quest will really give them the ability to back up VMware hypervisors and then potentially recover into Red Hat or Microsoft Hyper-V at MSPs. That could be a really exciting development for Quest in that area.

Gardner: So being able to support the complexity and the heterogeneity, whether it's at the application level, the platform level, or the VM and hypervisor level, all of that is part and parcel of abstracting data recovery to the managed and architected level.

Do we have any examples, John, of companies that are already doing that? Are you familiar with organizations -- maybe you can name them -- that are doing just that: managing heterogeneity and coming up with some metrics of success for their data recovery, data management, and lifecycle approach as a result?

Maxwell: I'd like to give you an example of one customer, one of our European customers called CMC Markets. They use our entire NetVault family of products, both the core NetVault Backup product and the NetVault FastRecover product that Jerome mentioned.

They are a company where data is their lifeblood. They're an options trading company. They process tens of thousands of transactions a day. They have a distributed environment. They have their main data center in London, and that’s where their network operations center is. Yet, they have eight offices around the world.

One of the challenges of having remote data and/or big data is whether you can really use traditional backup to do it. And the answer is no. With big data, there's no way that you will have enough time in a day to make that happen. With remote data, you don't want to put something that's manual out in that remote office, where you're not going to have IT people.

CMC Markets has come to this approach of moving data smarter, not harder. They've implemented our NetVault FastRecover product, so data is backed up to disk at their remote sites. Then, the product automatically replicates those backups to the home office in London.

Then, for some of their more mission-critical data in the London data center, databases such as SQL Server and Oracle, they do real-time backup. So they're able to recover the data at any point in time, literally within seconds. We have 17 patents on this product, most of them around a feature we call Flash Restore, which allows you to get an application up and running in less than 30 seconds.
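
Conceptually, continuous data protection of this kind journals every change with a timestamp, so a restore can replay the journal up to any chosen moment. The toy sketch below illustrates only that general idea; the class and its structures are hypothetical and are not the FastRecover or Flash Restore implementation:

```python
# Toy model of continuous data protection (CDP): every write is journaled with a
# timestamp, so a restore replays the journal up to any chosen point in time.
# Hypothetical structures for illustration; not the FastRecover design.
import bisect
from typing import Dict, List, Tuple

class ChangeJournal:
    def __init__(self) -> None:
        # (timestamp, block_id, data) entries, appended in time order
        self._entries: List[Tuple[float, int, bytes]] = []

    def record(self, timestamp: float, block_id: int, data: bytes) -> None:
        self._entries.append((timestamp, block_id, data))

    def restore_as_of(self, timestamp: float) -> Dict[int, bytes]:
        """Rebuild the block map exactly as it looked at the given point in time."""
        cutoff = bisect.bisect_right([t for t, _, _ in self._entries], timestamp)
        image: Dict[int, bytes] = {}
        for _, block_id, data in self._entries[:cutoff]:
            image[block_id] = data  # later writes overwrite earlier ones
        return image

# Hypothetical usage: journal two writes, then restore to a moment between them.
journal = ChangeJournal()
journal.record(100.0, block_id=1, data=b"before")
journal.record(200.0, block_id=1, data=b"after")
print(journal.restore_as_of(150.0))  # {1: b'before'}
```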

But the real-life example is pretty interesting, in that one of their remote offices is in Tokyo. If you go back to March 11, 2011, when the magnitude-9 earthquake and tsunami happened, they lost power. They had damage to some of their server racks.

Since those backups were done locally in Tokyo and replicated to London, they actually got their employees up and running using Terminal Server, which enabled the Tokyo employees to connect to the applications that had been recovered in London from copies of those backups. So there was no disruption to their business.

Second problem


And, as luck would have it, two weeks later they had a problem at one of the other remote offices, where a server crashed, and they were able to bring up data remotely. Then they had another instance where they simply had to recover data. Because it was so quick, end users didn't even know that the disk drive had crashed.

So I think that's a really neat example of a customer who is exploiting today's technology. This gets back to the discussion we had earlier about service levels, managing those service levels in the business, and making sure there's no disruption of the business. If you're doing real-time trades in a stock-exchange type of environment, you can't suffer any outages, because there are not only the monetary problems, but you also don't want to be on the front page of BBC.com.

Gardner: And of course there are regulation and compliance issues to consider.

Maxwell: Absolutely.

Gardner: We're getting towards the end of our time. Jerome, quickly, do you have any use cases or examples that you're familiar with that illustrate this concept of next-generation and lifecycle approach to data recovery that we have been discussing?

Wendt: Well, it's not an example, just a general trend I'm seeing in products, because most of DCIG's focus is on analyzing the products themselves, comparing and contrasting them, and identifying broader trends across those products.

There are two things we're seeing. One, we're struggling to keep calling backup software "backup software," because it does so much more than that. You mentioned earlier how much more intelligence there is in these products. We call it backup software, because that's the context in which everyone understands it, but going forward, the industry is probably going to have to find a better way to refer to these products. Quest's software is a whole lot more than just something that runs a backup.

And second, as people view backup and how they manage their infrastructure, they really have to move away from the reactive posture of, "Okay, today I'm going to have to troubleshoot the 15 backup jobs that failed overnight." Those days are over. And if they're not over, you need to be looking for new products that will get you over that hump, because you should no longer be troubleshooting failed backup jobs.

You should really be looking more toward how you can make sure your whole environment is protected and recoverable, and toward moving to the next phase of disaster recovery and business continuity planning. The products are there. They're mature, and people should be moving down that path.
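
One way to picture that shift from reactive troubleshooting to proactive assurance is a simple policy check: rather than chasing failed jobs, verify that every system's newest recovery point still satisfies its recovery point objective. The sketch below, with a made-up inventory and RPO values, illustrates the idea:

```python
# Sketch of the proactive posture described above: instead of triaging failed jobs,
# check whether each system's newest recovery point still satisfies its RPO.
# The inventory and RPO values below are made up for illustration.
from datetime import datetime, timedelta

# Hypothetical inventory: system name -> (last successful recovery point, RPO)
INVENTORY = {
    "exchange01": (datetime(2012, 9, 1, 2, 0), timedelta(hours=1)),
    "sql-prod":   (datetime(2012, 9, 1, 5, 30), timedelta(minutes=15)),
    "fileserver": (datetime(2012, 8, 31, 23, 0), timedelta(hours=24)),
}

def out_of_policy(now: datetime) -> list:
    """Return the systems whose newest recovery point is older than their RPO allows."""
    return [
        name
        for name, (last_point, rpo) in INVENTORY.items()
        if now - last_point > rpo
    ]

if __name__ == "__main__":
    # exchange01 and sql-prod would be flagged; fileserver is still within policy.
    print(out_of_policy(datetime(2012, 9, 1, 6, 0)))
```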

Gardner: Jerome, we mentioned at the outset, mobile and the desire to deliver more data and applications to edge devices, and of course cloud was mentioned. People are going to be looking to take advantage of cloud efficiencies internally, but then also look to mixed-sourcing opportunities, hybrid-computing opportunities, different apps from different places, and the data lifecycle and backup that needs to be part and parcel with that.

We also mentioned the fact that big data is more important and that the timeframe of getting mission-critical data to the right people is shortening all the time. This all pulls together, for me, this notion that in the future you're not going to be able to do this any other way. This is not a luxury, but a necessity. Is that fair, Jerome?

Wendt: Yes, it is. That’s a fair assessment.

Crystal ball

Gardner: John, the same question to you, basically. When we look into the crystal ball, even not that far out, it seems that in order to manage what you need to do as a business, getting good control over your data and being able to ensure that it's going to be available anytime, anywhere, regardless of the circumstances, is, again, not a luxury or a nice-to-have. It's really what supports the viability of the business.

Maxwell: Absolutely. And what’s going to make it even more complex is going to be the cloud, because what's your control, as a business, over data that is hosted some place else?

I know that at Quest we use seven SaaS-based applications from various vendors, but what's our guarantee that our data is protected there? I can tell you that a lot of these SaaS-based companies or hosting companies may offer an environment that says, "We're always up," or "We have a higher level of availability," but most recovery needs actually stem from logical corruption of data.

As I said, with some of these smaller vendors, you wonder what would happen if they went out of business. I have heard stories of small service providers closing their doors, and you say, "But my data is there."

So the cloud is really exciting, in that we're looking at how we're going to protect assets that may be off-premise to your environment and how we can ensure that you can recover that data, in case that provider is not available.

Then there's something that Jerome touched upon, which is that the cloud is going to offer so many opportunities. The one that I'm most excited about is using the cloud for failover. That's really getting beyond recovery into business continuity.

And something that has only been afforded by the largest enterprises, Global 1000-type customers, is the ability to have a standby center, from a SunGard or someone like that, which is very costly and not within reach of most customers. But with virtualization and with the cloud, there's a concept that I think we're going to see become very mainstream over the next five years, which is failover recovery to the cloud. That's something that's going to be within reach of even SMB customers, and that's really more of a business continuity message.

So now we're stepping up even more. We're now saying, "Not only can we recover your data within seconds, but we can get your business back up and running, from an IT perspective, faster than you probably ever presumed that you could."

Gardner: That sounds like a good topic for another day. I am afraid we are going to have to leave it there.

You've been listening to a sponsored BriefingsDirect podcast discussion on the value around next-generation, integrated and simplified approaches to fast backup and recovery. We have seen how a comprehensive approach to data recovery bridges legacy and new data, scales to handle big data, and provides automation and governance across the essential functions of backup, protection, and disaster recovery.

I'd like to thank our guests. We've been joined by John Maxwell, the Vice President of Product Management for Data Protection at Quest Software. Thanks so much, John.

Maxwell: Thank you.

Gardner: We've also been joined by Jerome Wendt, President and Lead Analyst at DCIG, an independent storage analyst and consulting firm. Thanks so much, Jerome.

Wendt: Thank you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again to you, our audience, for listening, and come back next time.

Get the free data protection and recovery white paper.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Quest Software.

Transcript of a sponsored BriefingsDirect podcast on how data-recovery products can provide quicker access to data and analysis. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.

You may also be interested in: