Wednesday, June 06, 2012

Data Explosion and Big Data Demand New Strategies for Data Management, Backup and Recovery, Say Experts

Transcript of a sponsored BriefingsDirect podcast on how data-recovery products can provide quicker access to data and analysis.

Get the free data protection and recovery white paper.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Quest Software.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on why businesses need a better approach to their data recovery capabilities. We'll examine how major trends like virtualization, big data, and calls for comprehensive and automated data management are driving the need for change.

The current landscape for data management, backup, and disaster recovery (DR) too often ignores the transition from physical to virtualized environments and sidesteps the heightened real-time role that data now plays in the enterprise.

What's needed are next-generation, integrated, and simplified approaches to fast backup and recovery that span all essential corporate data. The solution therefore means bridging legacy and new data, scaling to handle big data, implementing automation and governance, and integrating the functions of backup, protection, and DR.

The payoffs come in the form of quicker access to needed data and analytics, highly protected data across its lifecycle, ease in DR, and overall improved control and management of key assets, especially by non-specialized IT administrators.

To share insights into why data recovery needs a new approach and how that can be accomplished, we're joined by two experts, first John Maxwell, Vice President of Product Management for Data Protection at Quest Software. Welcome to the show, John. [Disclosure: Quest Software is a sponsor of BriefingsDirect podcasts.]

John Maxwell: Thank you. Glad to be here.

Gardner: We're also here with Jerome Wendt, President and Lead Analyst of DCIG, an independent storage analyst and consulting firm. Welcome, Jerome.

Jerome Wendt: Thank you, Dana. It's a pleasure to join the call.

Gardner: My first question is to you, Jerome. I'm sensing a major shift in how companies view and value their data assets. Is data really a different thing than, say, five years ago in terms of how companies view and value it?

Wendt: Absolutely. There's no doubt that companies are viewing it much more holistically. It used to be that all the focus was on data in structured databases, or even in semi-structured formats such as email. Clearly, in the last few years, we've seen a huge change, where unstructured data is now the fastest growing part of most enterprises and where even a lot of their intellectual property is stored. So I think there is a huge push to protect and mine that data.

But we're also just seeing more of a push to get to edge devices. We talk a lot about PCs and laptops, and there is more of a push to protect data in that area, but all you have to do is look around and see the growth.

When you go to any tech conference, you see iPads everywhere, and people are storing more data in the cloud. That's going to have an impact on how people and organizations manage their data and what they do with it going forward.

Gardner: John Maxwell, it seems that not that long ago, data was viewed as a byproduct of business. Now, for more and more companies, data is the business, or at least the analytics that they derive from it. Has this been a sea change, from your perspective?

Mission critical

Maxwell: It’s funny that you mention that, because I've been in the storage business for over 15 years. I remember just 10 years ago, when studies would ask people what percentage of their data was mission critical, it was maybe around 10 percent. That aligns with what you're talking about, the shift and the importance of data.

Recent surveys from multiple analyst groups have now shown that people categorize their mission-critical data at 50 percent. That's pretty profound, in that a company is saying half the data that we have, we can't live without, and if we did lose it, we need it back in less than an hour, or maybe in minutes or seconds.

Gardner: So we have a situation where more data is considered important, they need it faster, and they can't do without it. It’s as if our dependency on data has become heightened and ever-increasing. Is that a fair characterization, Jerome?

Wendt: Absolutely.

Gardner: So given the requirement of having access to data and it being more important all the time, we're also seeing a lot of shifting on the infrastructure side of things. There's much more movement toward virtualization, and whole new approaches to storage aimed at reducing overall cost, reducing duplication, and that sort of thing. How is this shift in infrastructure impacting the simultaneous need for access and criticality? Let's start with you, John.

Maxwell: Well, the biggest change from an infrastructure standpoint has been the impact of virtualization. This year, well over 50 percent of all the server images in the world are virtualized images, which is just phenomenal.

Quest has really been in the forefront of this shift in infrastructure. We have been, for example, backing up virtual machines (VMs) for seven years with our Quest vRanger product. We've seen that evolve from when VMs or virtual infrastructure were used more for test and dev. Today, I've seen studies that show that the shops that are virtualized are running SQL Server, Microsoft Exchange, very mission-critical apps.

We have some customers at Quest that are 100 percent virtualized. These are large organizations, not just some mom and pop company. That shift to virtualization has really made companies assess how they manage it, what tools they use, and their approaches. Virtualization has a large impact on storage and how you backup, protect, and restore data.

Gardner: John, it sounds like you're saying that it's an issue of complexity, but from a lot of the folks I speak to, when they get through it at the end of their journey through virtualization, they find that there are a lot of virtuous benefits to be extended across the data lifecycle. Is it the case that this is not all bad news, when it comes to virtualization?

Maxwell: No. Once you implement and have the proper tools in place, your virtual life is going to be a lot easier than your physical one from an IT infrastructure perspective. A lot of people initially moved to virtualization as a cost savings, because they had under-utilization of hardware. But one of the benefits of virtualization is the freedom, the dynamics. You can create a new VM in seconds. But then, of course, that creates things like VM sprawl, the amount of data continues to grow, and the like.

At Quest we've adapted and exploited a lot of the features that exist in virtual environments, but don't exist in physical environments. It’s actually easier to protect and recover virtual environments than it is physical, if you have tools that are exploiting the APIs and the infrastructure that exists in that virtual environment.

Significant benefits

Gardner: Jerome, do you concur that, when you are through the journey, when you are doing this correctly, that a virtualized environment gives you significant benefits when it comes to managing data from a lifecycle perspective?

Wendt: Yes, I do. One of the things I've clearly seen is that it really makes it more of a business enabler. We talk a lot these days about having different silos of data. One application creates data that stays over here. Then, it's backed up separately. Then, another application or another group creates data back over here.

Virtualization not only means consolidation and cost savings, but it also facilitates a more holistic view into the environment and how data is managed. Organizations are finally able to get their arms around the data that they have.

Before, it was so distributed that they didn't really have a good sense of where it resided or how to even make sense of it. With virtualization, there are initial cost benefits that help bring it altogether, but once it's altogether, they're able to go to the next stage, and it becomes the business enabler at that point.

Gardner: I suppose the key now is to be able to manage, automate, and bring the comprehensive control and governance to this equation, not just the virtualized workloads, but also of course the data that they're creating and bringing back into business processes.

So what about that? What’s this other trend afoot? How do we move from sprawl to control and make this flip from being a complexity issue to a virtuous adoption and benefits issue? Let's start with you, John.

Maxwell: Over the years, people had very manual processes. For example, when you brought a new application online or added hardware, servers, and that type of thing, you asked, "Oops, did we back it up? Are we backing that up?"

One thing that’s interesting in a virtual environment is that the backup software we have at Quest will automatically see when a new VM is created and start backing it up. So it doesn't matter if you have 20 or 200 or 2,000 VMs. We're going to make sure they're protected.
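The auto-discovery behavior described here (noticing newly created VMs and folding them into the protection schedule) can be sketched in a few lines of Python. This is a minimal illustration, not vRanger's actual API: the `list_vms()` and `backup()` functions are hypothetical stand-ins for a real hypervisor inventory call and backup job.

```python
# Sketch of auto-discovery for VM protection: compare the hypervisor's
# current inventory against the set of VMs already being protected, and
# enqueue a first backup for anything new. list_vms() and backup() are
# hypothetical placeholders for a real hypervisor API.

def list_vms():
    # Placeholder: a real implementation would query the hypervisor's
    # inventory service for all registered VMs.
    return {"web-01", "db-01", "app-07"}

def backup(vm_name):
    print(f"backing up {vm_name}")

def protect_new_vms(protected):
    """Discover VMs not yet under protection and back them up."""
    current = list_vms()
    new_vms = current - protected
    for vm in sorted(new_vms):
        backup(vm)           # first full (image) backup
        protected.add(vm)    # from now on, part of the schedule
    return new_vms

protected = {"web-01"}
print(sorted(protect_new_vms(protected)))  # → ['app-07', 'db-01']
```

Run on a schedule, a loop like this means it doesn't matter whether there are 20 or 2,000 VMs; anything newly created is picked up without an administrator having to remember it.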

Where it really gets interesting is that you can protect the data a lot smarter than you can in a physical environment. I'll give you an example.

In a VMware environment, there are services that we can use to do a snapshot backup of a VM. In essence, it’s an immediate backup of all the data associated with that machine or those machines. It could be on any generic kind of hardware. You don’t need to have proprietary hardware or more expensive software features of high-end disk arrays. That is a feature that we can exploit built within the hypervisor itself.

Image backup


Even the way that we move data is much more efficient, because we have a process that we pioneered at Quest called "backup once, restore many," where we create what's called an image backup. From that image backup I can restore an entire system, an individual file, or an application. But I've done that from that one pass, that one very effective snapshot-based backup.
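The "backup once, restore many" model can be illustrated with a toy example: one image-level backup serves as the single source for full-system, single-file, and application-level restores. The dictionary layout below is purely illustrative, not any vendor's real image format.

```python
# Toy model of "backup once, restore many": a single image backup holds
# every file on the VM, and different restore operations are just
# different views into that one image. Layout is illustrative only.

image_backup = {
    "/boot/vmlinuz": b"kernel...",
    "/etc/app/config.ini": b"[db]\nhost=db-01\n",
    "/var/lib/db/data.mdf": b"table pages...",
}

def restore_system(image):
    """Full-system restore: every file in the image."""
    return dict(image)

def restore_file(image, path):
    """Single-file restore from the same image."""
    return {path: image[path]}

def restore_app(image, prefix):
    """Application-level restore: only the app's files."""
    return {p: d for p, d in image.items() if p.startswith(prefix)}

assert len(restore_system(image_backup)) == 3
assert restore_file(image_backup, "/etc/app/config.ini")
assert list(restore_app(image_backup, "/var/lib/db")) == ["/var/lib/db/data.mdf"]
```

The point of the design is that only one backup pass touches the VM; every restore granularity is derived from that single image afterward.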

If you look at physical environments, there's the concept of doing physical machine backups, file-level backups, and specific application backups, and for some systems you even have to employ hardware-based snapshots or actually bring the applications down.

So from that perspective, we've gotten much more sophisticated in virtual environments. Again, we're moving data by not impacting the applications themselves and not impacting the VMs. The way we move data is very fast and is very effective.

Gardner: Jerome, when we start to do these sorts of activities, whether we are backing up at very granular level or even thinking about mirroring entire data centers, how does governance, management, and automation come to play here? Is this something that couldn’t have been done in the physical domain?

Wendt: I don’t think it could have been done on the physical domain, at least not very easily. We do these buyer’s guides on a regular basis. So we have a chance to take in-depth looks at all these different backup software products on the market and how they're evolving.

One of the things we're really seeing, also to your point, is just a lot more intelligence going into this backup software. They're moving well beyond just “doing backups” anymore. There's much more awareness of what data is included in these data repositories and how they're searched.

And with more integration with platforms like VMware vCenter, administrators can centrally manage backups, monitor backup jobs, and do recoveries. One person can do so much more than they could even a few years ago.

And really, the expectation of organizations is evolving; they don't necessarily want separate backup admins and system admins anymore. They want one team that manages their virtual infrastructure. That all rolls up to your point, where it makes it easier to govern, manage, and execute on corporate objectives.

Gardner: I think it's important to try to frame how this works in terms of total cost. If you're adding, as you say, more intelligence to the process, if you don't have separate administrators for each function, if you're able to provide a workflow approach to your data lifecycle, you have fewer duplications, you're using less total storage, and you're able to support the requirements of the applications and so on. Is this really a case, John Maxwell, where we're getting more and paying less?

Maxwell: Absolutely. Just as the cost per gigabyte has gone down over the past decade, the effectiveness of the software and what it can do is way beyond what we had 10 years ago.

Simplified process

Today, in a virtual environment, we can provide a solution that simplifies the process, where one person can ensure that hundreds of VMs are protected. They can literally right-click and restore a VM, a file, a directory, or an application.

One of the focuses we've had at Quest, as I alluded to earlier, is that there are a lot of mission-critical apps running on these machines. Jerome talked about email. A lot of people consider email one of their most mission-critical applications. And the person responsible for protecting the environment that Microsoft Exchange is running on may not be an Exchange administrator, but maybe they're tasked with being able to recover Exchange.

That’s why we've developed technologies that allow you to go out there and, from that one image backup, restore an email conversation or an email attachment from someone’s mailbox. That person doesn’t have to be a guru with Exchange. Our job is to figure out, behind the scenes, how to do this and make it available via a couple of mouse clicks.

Gardner: So we're moving the administration up a level of abstraction, rather than going app by app, server by server. We're really looking at it as a function of what you want to do with that data. That strikes me as a big deal. Is that a whole new thing that we're doing with data, Jerome?

Wendt: Yes, it is. As John was speaking, I was going to comment. I spoke to a Quest customer just a few weeks ago. He clearly had some very specific technical skills, but he's responsible for a lot of things, a lot of different functions -- server admin, storage admin, backup admin.

I think a lot of individuals can relate to this guy. I know I certainly did, because that was my role for many years, when I was an administrator in the police department. You have to try to juggle everything, while you're trying to do your job, with backup just being one of those tasks.

In his particular case, he was called upon to do a recovery, and, to John’s point, it was an Exchange recovery. He never had any special training in Exchange recovery, but it just happened that he had Quest Software in place. He was able to use its FastRecover product to recover his Microsoft Exchange Server and had it back up and going in a few hours.

What was really amazing in this particular case is that he was traveling at the time it happened. So he had to talk his manager through the process and was able to get it up and going. Once the system was up, he was able to log on and get going fairly quickly.

That just illustrates how much the world has changed and how much backup software and these products have evolved to the point where you need to understand your environment, probably more than you need to understand the product, and just find the right product for your environment. In this case, this individual clearly accomplished that.

Gardner: It sounds like you're moving more to be an architect than a carpenter, right?

Wendt: Exactly.

Gardner: So we understand that management is great and that oversight at that higher abstraction is going to get us a lot of benefits. But we mentioned earlier that some folks are at 20 percent virtualization, while others are at 90 percent. Some data is mission-critical, while other data doesn't require the same diligence, and that's going to vary from company to company.

Hybrid model

So my question to you, John Maxwell, is how do organizations approach being in a hybrid sort of model, between physical and virtual, while recognizing that different apps have different criticality for their data, and that might change? How do we manage the change? How do we get from the old way of doing this to these newer benefits?

Maxwell: Well, there are two points. One, we can't have a bunch of niche tools, one for virtual, one for physical, and the like. That's why, with our vRanger product, which has been the market leader in virtual data protection for the past seven years, we're coming out with physical support in that product in the fall of 2012. Those customers are saying, "I want one product that handles that non-virtualized data."

The second part gets down to what percentage of your data is mission-critical and how complex it is, meaning is it email, or a database, or just a flat file, and then asking if these different types of data have specific service-level agreements (SLAs), and if you have products that can deliver on those SLAs.

That's why at Quest, we're really promoting a holistic approach to data protection that spans replication, continuous data protection, and more traditional backup, but backup mainly based on snapshots.

Then, that can map to the service level, to your business requirements. I just saw some data from an industry analyst showing that the replication software market is basically the same size now as the backup software market. That shows the desire people have for real-time failover for some applications, and you get that with replication.

When it comes to the example that Jerome gave with that customer, the Quest product that we're using is NetVault FastRecover, which is a continuous data protection product. It backs up everything in real-time. So you can go back to any point in time.

It’s almost like a time machine when it comes to putting back that mailbox, SQL Server database, or Oracle database. Yet it's masking a lot of the complexity. So the person restoring it may not be a DBA. They're going to be that jack-of-all-trades who's responsible for the storage and maybe backup overall.
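The "time machine" behavior of continuous data protection can be sketched as a journal of timestamped writes that gets replayed up to the requested moment. This is only a conceptual illustration, not NetVault FastRecover's actual mechanism.

```python
# Minimal sketch of continuous data protection (CDP): every write is
# appended to a journal with a timestamp, and a restore replays the
# journal up to the requested point in time. Illustrative only.

journal = []  # list of (timestamp, key, value) entries

def record_write(ts, key, value):
    journal.append((ts, key, value))

def restore_to(point_in_time):
    """Rebuild state as of the given timestamp by replaying the journal."""
    state = {}
    for ts, key, value in journal:
        if ts > point_in_time:
            break  # journal is append-only, so entries are time-ordered
        state[key] = value
    return state

record_write(100, "mailbox/alice", "msg-1")
record_write(200, "mailbox/alice", "msg-1,msg-2")
record_write(300, "mailbox/alice", "msg-2")        # msg-1 deleted

# Restore to just before the deletion:
assert restore_to(250) == {"mailbox/alice": "msg-1,msg-2"}
```

Because every write is journaled, the recovery point is arbitrary: the administrator picks a moment, not a nightly backup, which is what lets a non-DBA roll back a mailbox or database without specialist knowledge.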

Gardner: Jerome, what are you seeing in the field? Are there folks that are saying, "Okay, the value here is so compelling and we have such a mess, we're going to bite the bullet and just go totally virtual in three to six months. And, at least for our mission-critical apps, we're going to move them over into this data lifecycle approach for our recovery, backup, and DR?"

Or are you seeing companies that are saying, "Well, this is a five-year plan; we're going to do this first and then kind of string it out?" Which of these seems to be in vogue at the moment? What works: the bite-the-bullet, all-or-nothing approach, or the slow crawl-walk-run approach?

Wendt: It really depends on the size of the organization you're talking about. When I talk to small and medium-sized businesses (SMBs), those with 500-1,000 employees or fewer, they may have 100 terabytes of storage and 200 servers. I see them just biting the bullet. They're doing the three- to six-month approach: make the conversion, do the complete switchover, and go virtual as much as possible.

Few legacy systems

Almost all of them have a few legacy systems. They may be running some application on Windows 2000 Server or some old version of AIX. Who knows what a lot of companies have running in the background? They can't just virtualize everything, but where they can, they get to a 98 percent virtualized environment.

When you start getting to enterprises, I see it a little bit different. It's more of a staged approach, because it just takes more coordination across the enterprise to make it all happen. There are a lot more logistics and planning going on.

I haven’t talked to too many that have taken five years to do it. It's mostly two to maybe four years at the outside range. But the move is to virtualize as much as possible, except for those legacy apps, which for some reason they can't tackle.

Gardner: John Maxwell, for those two classes of user, what does Quest suggest? Is there a path that you have for those who want to do it as rapidly as possible? And then is that metered approach also there in terms of how you support the journey?

Maxwell: It's funny that you mention the difference between SMB and the enterprise. I'm a firm believer that one size doesn’t fit all, which is why we have solutions for specific markets. We have solutions for the SMB along with enterprise solutions, but we do have a lot of commonality between the products. We're even developing for our SMB product a seamless upgrade path to our enterprise-class product.

Again, they're different markets, just as Jerome said. We found exactly what he just mentioned, which is the smaller accounts tend to be more homogenous and they tend to virtualize a lot more, whereas in the enterprise they're more heterogeneous and they may have a bigger mix of physical and virtual.

And they may have really more complex systems. That’s where you run into big data and more complex challenges, when it comes to how you can back data up and how you can recover it. And there are also different price points.

So our approach is to have solution specific to the SMB and specific to the enterprise. There is a lot of cross-functionality that exists in the products, but we're very crisp in our positioning, our go-to-market strategy, the price points, and the features, because one of the things you don’t want to do with SMB customers is overwhelm them.

I meet hundreds of customers a year, and one of our top customers has an exabyte of data. Jerome, I don’t know if you talk to many customers that have an exabyte, but I don’t really run into a lot of customers that do. Their requirements are completely different from those of our average vRanger customer, who has around five terabytes of data.

We have products that are specific to the market segments, to the specialization or non-specialization of the user, and at the right price point. Yet it's one vendor, one throat to choke, and there are upgrade paths if you need them.

Gardner: John, in talking with Quest folks, I've heard them refer to a next-generation platform or approach, or a whole greater than the sum of the parts. How do you define next generation when it comes to data recovery in your view of the world?

New benefits

Maxwell: Well, without hyperbole, for us, our next generation is a new platform that we call NetVault Extended Architecture (XA), and this is a way to provide several benefits to our customers.

One is that with NetVault Extended Architecture we now are delivering a single user experience across products. So this gets into SMB-versus-enterprise for a customer that’s using maybe one of our point solutions for application or database recovery, providing that consistent look and feel, consistent approach. We have some customers that use multiple products. So with this, they now have a single pane of glass.

Also, it's important to offer a consistent means for administering and managing the backup and recovery process, because, as we've been discussing, why should a person have to have multiple skill sets? If you have one view, one console into data protection, that’s going to make your life a lot easier than having to learn a bunch of other types of solutions.

That’s the immediate benefit that I think people see. What NetVault Extended Architecture encompasses under the covers, though, is a really different approach in the industry, which is modularization of a lot of the components to backup and recovery and making them plug and play.

Let me give you an example. With the increase in virtualization, a lot of people just equate virtualization with VMware. Well, we've got Hyper-V. We have initiatives from Red Hat. We have Xen, Oracle, and others. Jerome, I'm kind of curious about your views, but just as we saw in the '90s and '00s, with people having multiple platforms, whether it's Windows and Linux, or Windows, Linux, and, as you said, AIX, I believe we're going to start seeing multiple hypervisors.

So one of the approaches that NetVault Extended Architecture is going to bring us is a capability to offer a consistent approach to multiple hypervisors, meaning it could be a combination of VMware and Microsoft Hyper-V and maybe even KVM from Red Hat.

But, again, the administrator, the person who is managing the backup and recovery, doesn’t have to know any one of those platforms. That’s all hidden from them. In fact, if they want to restore data from one of those hypervisors, say restore a VMware VMDK, which is their virtual disk in VMware speak, into what's called a VHD in Hyper-V, they could do that.

That, to me, is really exciting, because this is exploiting these new platforms and environments and providing tools that simplify the process. But that’s going to be one of the many benefits of our new NetVault Extended Architecture next generation, where we can provide that singular experience for our customer base to have a faster go-to-market, faster time to market, with new solutions, and be able to deliver in a modular approach.

Customers can choose what they need, whether they're an SMB customer, or one of the largest customers that we have with hundreds of petabytes or exabytes of data.

Wendt: I'd like to elaborate on what John just said. I'm really glad to hear that’s where Quest is going, John, I haven’t had a chance to discuss this with you guys, but DCIG has a lot of conversations with managed-service providers, and you'd be surprised, but there are actually very few that are VMware shops. I find the vast majority are actually either Microsoft Hyper-V or using Red Hat Linux as their platform, because they're looking for a cost-effective way to deliver virtualization in their environments.

We've seen this huge growth in replication, and people want to implement disaster recovery plans or business continuity planning. I think this ability to recover across different hypervisors is going to become absolutely critical, maybe not today or tomorrow, but I would say in the next few years. People are going to say, "Okay, now that we've got our environment virtualized, we can recover locally, but how about recovering into the cloud or with a cloud service provider? What options do we have there?"

More choice

If they're using VMware and their provider isn’t, they're almost forced to find a provider that uses VMware, whereas your platform gives them much more choice among managed service providers that are using platforms other than VMware. It sounds like Quest will really give them the ability to back up VMware hypervisors and then potentially recover into Red Hat or Microsoft Hyper-V at MSPs. So that could be a really exciting development for Quest in that area.

Gardner: So being able to support the complexity and the heterogeneity, whether it's at the application level, the platform level, or the VM and hypervisor level, all of that is part and parcel of abstracting data recovery to the managed and architected level.

Do we have any examples, John, of companies that are already doing that? Are you familiar with organizations -- maybe you can name them -- that are managing a heterogeneity issue and coming up with some metrics of success for their data recovery, data management, and lifecycle approach as a result?

Maxwell: I'd like to give you an example of one customer, one of our European customers called CMC Markets. They use our entire NetVault family of products, both the core NetVault Backup product and the NetVault FastRecover product that Jerome mentioned.

They are a company where data is their lifeblood. They're an options trading company. They process tens of thousands of transactions a day. They have a distributed environment. They have their main data center in London, and that’s where their network operations center is. Yet, they have eight offices around the world.

One of the challenges of having remote data and/or big data is whether you can really use traditional backup to handle it. And the answer is no. With big data, there's no way you'll have enough time in a day to make that happen. With remote data, you don't want to put something that's manual out in that remote office, where you're not going to have IT people.

CMC Markets has come to this approach of moving data smarter, not harder. They've implemented our NetVault FastRecover product, where data is backed up to disk at their remote sites. Then, the product automatically replicates its backups to the home office in London.
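The remote-office pattern described above (back up to local disk first, then replicate the finished backup to the main data center) can be sketched roughly as follows. The paths and the copy step are illustrative assumptions, not the product's implementation.

```python
# Sketch of the remote-office pattern: back up to local disk at the
# branch site, then replicate the finished backup to the home office so
# a copy survives a site-wide outage. Paths are illustrative.

import shutil
import tempfile
from pathlib import Path

def backup_locally(site_dir, name, payload):
    """Write the backup to disk at the remote site first (fast, local)."""
    site_dir.mkdir(parents=True, exist_ok=True)
    target = site_dir / name
    target.write_bytes(payload)
    return target

def replicate(backup_file, home_dir):
    """Copy the completed backup to the central data center."""
    home_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(backup_file, home_dir))

base = Path(tempfile.mkdtemp())
local = backup_locally(base / "tokyo", "vm-app-07.img", b"image data")
remote = replicate(local, base / "london")
assert remote.read_bytes() == local.read_bytes()
```

Two properties fall out of this ordering: restores at the branch stay fast because the backup is local, and a copy at headquarters remains usable even if the whole remote site goes down, which is exactly what mattered in the Tokyo example that follows.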

Then, for some of their more mission-critical data in the London data center, databases such as SQL Server and Oracle, they do real-time backup. So they're able to recover the data to any point in time, literally within seconds. We have 17 patents on this product, most of them around a feature we call Flash Restore that allows you to get an application up and running in less than 30 seconds.

But the real-life example is pretty interesting, in that one of their remote offices is in Tokyo. If you go back to March 11, 2011, when the magnitude-9 earthquake and tsunami happened, they lost power. They had damage to some of their server racks.

Since they were replicating to London and those backups were done locally in Tokyo, they actually got their employees up and running using Terminal Server, which enabled the Tokyo employees to connect to the applications that had been recovered in London, because they had copies of those backups. So there was no disruption to their business.

Second problem


And, as luck would have it, two weeks later they had a problem at one of the other remote offices, where a server crashed, and they were able to bring up the data remotely. Then, they had another instance where they just had to recover data. Because it was so quick, end users didn’t even know the disk drive had crashed.

So I think that's a really neat example of a customer who's exploiting today’s technology. This gets back to the discussion we had earlier about service levels and managing service levels in the business, making sure there's no disruption to the business. If you're doing real-time trades in a stock-exchange type of environment, you can't suffer any outages, because there are not only the monetary problems, but you also don’t want to be on the front page of BBC.com.

Gardner: And of course there are regulation and compliance issues to consider.

Maxwell: Absolutely.

Gardner: We're getting towards the end of our time. Jerome, quickly, do you have any use cases or examples that you're familiar with that illustrate this concept of next-generation and lifecycle approach to data recovery that we have been discussing?

Wendt: Well, it's not an example, just a general trend I'm seeing in products, because most of DCIG's focus is on analyzing the products themselves, comparing and contrasting them, and identifying broader trends within those products.

There are two things we're seeing. One, we're struggling calling backup software backup software anymore, because it does so much more than that. You mentioned earlier about so much more intelligence in these products. We call it backup software, because that’s the context in which everyone understands it, but I think going forward, the industry is probably going to have to find a better way to refer to these products. Quest is a whole lot more than just running a backup.

And then second, as people view backup and how they manage their infrastructure, they really have to move beyond the reactive mode of "Okay, today I'm going to have to troubleshoot the 15 backup jobs that failed overnight." Those days are over. And if they're not over, you need to be looking for new products that will get you over that hump, because you should no longer be troubleshooting failed backup jobs.

You should really be looking more toward how you can make sure your whole environment is protected and recoverable, and toward moving to the next phase of disaster recovery and business-continuity planning. The products are there. They're mature, and people should be moving down that path.
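The shift Wendt describes, from troubleshooting individual failed jobs to verifying overall coverage, can be sketched as a simple report that asks "which systems have no recent successful backup?" The job-record format and the 24-hour window here are illustrative assumptions, not any specific product's API.

```python
from datetime import datetime, timedelta

def unprotected_systems(jobs, now, max_age_hours=24):
    """Return systems with no successful backup inside the window."""
    last_success = {}
    for job in jobs:
        if job["status"] == "success":
            prev = last_success.get(job["system"])
            if prev is None or job["finished"] > prev:
                last_success[job["system"]] = job["finished"]
    cutoff = now - timedelta(hours=max_age_hours)
    systems = {job["system"] for job in jobs}
    # A system is unprotected if its newest success predates the cutoff
    # (or it has never succeeded at all).
    return sorted(s for s in systems
                  if last_success.get(s, datetime.min) < cutoff)

now = datetime(2012, 6, 6, 8, 0)
jobs = [
    {"system": "sql-01", "status": "success", "finished": now - timedelta(hours=3)},
    {"system": "web-01", "status": "failed",  "finished": now - timedelta(hours=3)},
    {"system": "web-01", "status": "success", "finished": now - timedelta(hours=30)},
]
assert unprotected_systems(jobs, now) == ["web-01"]
```

The point of the design is that a failed job only matters if it leaves a system outside its protection window; one-off failures followed by a successful retry never surface.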

Gardner: Jerome, we mentioned at the outset, mobile and the desire to deliver more data and applications to edge devices, and of course cloud was mentioned. People are going to be looking to take advantage of cloud efficiencies internally, but then also look to mixed-sourcing opportunities, hybrid-computing opportunities, different apps from different places, and the data lifecycle and backup that needs to be part and parcel with that.

We also mentioned the fact that big data is more important and that the timeframe of getting mission-critical data to the right people is shortening all the time. This all pulls together, for me, this notion that in the future you're not going to be able to do this any other way. This is not a luxury, but a necessity. Is that fair, Jerome?

Wendt: Yes, it is. That’s a fair assessment.

Crystal ball

Gardner: John, the same question to you basically. When we look into the crystal ball, even not that far out, it just seems that in order to manage what you need to do as a business, getting good control over your data, being able to ensure that it’s going to be available anytime, anywhere, regardless of the circumstances is, again, not a luxury, it’s not a nice to have. It’s really just going to support the viability of the business.

Maxwell: Absolutely. And what's going to make it even more complex is the cloud, because what control do you have, as a business, over data that is hosted someplace else?

I know that at Quest we use seven SaaS-based applications from various vendors, but what's our guarantee that our data is protected there? I can tell you that a lot of these SaaS or hosting companies may offer an environment that says, "We're always up," or "We have a higher level of availability," but most recovery needs stem from logical corruption of data, which uptime alone doesn't protect against.

As I said, with some of these smaller vendors, you wonder what would happen if they went out of business. I've heard stories of small service providers closing their doors, and you say, "But my data is in there."

So the cloud is really exciting, in that we're looking at how we're going to protect assets that may be off-premise to your environment and how we can ensure that you can recover that data, in case that provider is not available.

Then there's something that Jerome touched upon, which is that the cloud is going to offer so many opportunities. The one that I'm most excited about is using the cloud for failover. That's really getting beyond recovery into business continuity.

And something that has only been afforded by the largest enterprises, Global 1000-type customers, is the ability to have a standby recovery center through a provider like SunGard, which is very costly and not within reach of most customers. But with virtualization and with the cloud, there's a concept that I think we're going to see become very mainstream over the next five years, which is failover recovery to the cloud. That's something that's going to be within reach of even SMB customers, and that's really more of a business-continuity message.

So now we're stepping up even more. We're now saying, "Not only can we recover your data within seconds, but we can get your business back up and running, from an IT perspective, faster than you probably ever presumed that you could."

Gardner: That sounds like a good topic for another day. I am afraid we are going to have to leave it there.

You've been listening to a sponsored BriefingsDirect podcast discussion on the value around next-generation, integrated and simplified approaches to fast backup and recovery. We have seen how a comprehensive approach to data recovery bridges legacy and new data, scales to handle big data, and provides automation and governance across the essential functions of backup, protection, and disaster recovery.

I'd like to thank our guests. We've been joined by John Maxwell, the Vice President of Product Management for Data Protection at Quest Software. Thanks so much, John.

Maxwell: Thank you.

Gardner: We've also been joined by Jerome Wendt, President and Lead Analyst at DCIG, an independent storage analyst and consulting firm. Thanks so much, Jerome.

Wendt: Thank you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again to you, our audience, for listening, and come back next time.

Get the free data protection and recovery white paper.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Quest Software.

Transcript of a sponsored BriefingsDirect podcast on how data-recovery products can provide quicker access to data and analysis. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Tuesday, June 05, 2012

Corporate Data, Supply Chains Remain Vulnerable to Cyber Crime Attacks, Says Open Group Conference Speaker

Transcript of a BriefingsDirect podcast in which cyber security expert Joel Brenner explains the risk to businesses from international electronic espionage.

Register for The Open Group Conference
July 16-18 in Washington, D.C.


Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hello, and welcome to a special BriefingsDirect thought leadership interview series coming to you in conjunction with the Open Group Conference this July in Washington, D.C. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout these discussions.

The conference will focus on how security impacts the enterprise architecture, enterprise transformation, and global supply chain activities in organizations, both large and small. Today, we're here on the security front with one of the main speakers at the July 16 conference, Joel Brenner, the author of "America the Vulnerable: Inside the New Threat Matrix of Digital Espionage, Crime, and Warfare."

Joel is a former Senior Counsel at the National Security Agency (NSA), where he advised on legal and policy issues relating to network security. He currently practices law in Washington at Cooley LLP, specializing in cyber security. [Registration remains open for The Open Group Conference in Washington, D.C., beginning July 16.]

Previously, he served as the National Counterintelligence Executive in the Office of the Director of National Intelligence, and as the NSA's Inspector General. He is a graduate of the University of Wisconsin–Madison, the London School of Economics, and Harvard Law School. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Joel, welcome to BriefingsDirect.

Joel Brenner: Thanks. I'm glad to be here.

Gardner: Your book came out last September and it affirmed this notion that the United States, or at least open Western cultures and societies, are particularly vulnerable to being infiltrated, if you will, from cybercrime, espionage, and dirty corporate tricks.

My question is why wouldn't these same countries be also very adept on the offense, being highly technical? Why are we particularly vulnerable, when we should be most adept at using cyber activities to our advantage?

Brenner: Let’s make a distinction here between the political-military espionage that's gone on since pre-biblical times and the economic espionage that’s going on now and, in many cases, has nothing at all to do with military, defense, or political issues.

The other stuff has been going on forever, but what we've seen in the last 15 or so years is a relentless espionage attack on private companies for reasons having nothing to do with political-military affairs or defense.

So the countries that are adept at cyber, but whose economies are relatively undeveloped compared to ours, are at a big advantage, because they're not very lucrative targets for this kind of thing, and we are. Russia, for example, is paradoxical. While it has one of the most educated populations in the world and is deeply cultured, it has never been able to produce a commercially viable computer chip.

Not entrepreneurial


We’re not going to Russia to steal advanced technology. We’re not going to China to steal advanced technology. They're good at engineering and they’re good at production, but so far, they have not been good at making themselves into an entrepreneurial culture.

That’s one just very cynical reason why we don't do economic espionage against the people who are mainly attacking us, which are China, Russia, and Iran. I say attack in the espionage sense.

The other reason is that you're stealing intellectual property when you’re doing economic espionage. It’s a bedrock proposition of American economics and political strategy around the world to defend the legal regime that protects intellectual property. So we don’t do that kind of espionage. Political-military stuff we're real good at.

Gardner: This raises the question for me. If we're hyper-capitalist, where we have aggressive business practices and we have these very valuable assets to protect, isn't there the opportunity to take the technology and thwart the advances from these other places? Wouldn’t our defense rise to the occasion? Why hasn't it?

Brenner: The answer has a lot to do with the nature of the Internet and its history. The Internet, as some of your listeners will know, was developed starting in the late '60s by the predecessor of the Defense Advanced Research Projects Agency (DARPA), a brilliant operation which produced a lot of cool science over the years.

It was developed for a very limited purpose, to allow the collaboration of geographically dispersed scientists who worked under contract in various universities with the Defense Department's own scientists. It was bringing dispersed brainpower to bear.

It was a brilliant idea, and the people who invented this, if you talk to them today, lament the fact that they didn't build a security layer into it. They thought about it. But it wasn't going to be used for anything else but this limited purpose in a trusted environment, so why go to the expense and aggravation of building a lot of security into it?

Until 1992, it was against the law to use the Internet for commercial purposes. Dana, this is just amazing to realize. That’s 20 years ago, a twinkling of an eye in the history of a country’s commerce. That means that 20 years ago, nobody was doing anything commercial on the Internet. Ten years ago, what were you doing on the Internet, Dana? Buying a book for the first time or something like that? That’s what I was doing, and a newspaper.

In the intervening decade, we've turned this sort of Swiss-cheese, cool network, which has brought us dramatic productivity and pleasure, into the backbone of virtually everything we do.

International finance, personal finance, command and control of the military, manufacturing controls, the controls in our critical infrastructure, all of our communications -- virtually all of our activities are either on the Internet or exposed to the Internet. And it's the same Internet that was Swiss cheese 20 years ago, and it's Swiss cheese now. It's easy to spoof identities on it.

So this gives a natural and profound advantage to attack on this network over defense. That’s why we’re in the predicament we're in.

Both directions


Gardner: So the Swiss cheese would work in both directions. U.S. corporations, if they were interested, could use the same techniques and approaches to go into companies in China or Russia or Iran, as you pointed out, but they don't have assets that we’re interested in. So we’re uniquely vulnerable in that regard.

Let’s also look at this notion of supply chain, because corporations aren’t just islands unto themselves. A business is really a compendium of other businesses, products, services, best practices, methodologies, and intellectual property that come together to create a value add of some kind. It's not just attacking the end point, where that value is extended into the market. It’s perhaps attacking anywhere along that value chain.

What are the implications for this notion of the ecosystem vulnerability versus the enterprise vulnerability?

Brenner: Well, the supply chain problem really is rather daunting for many businesses, because supply chains are global now, and finished products contain a tremendous number of elements. For example, this software -- where was it written? Maybe it was written in Russia, or maybe somewhere in Ohio or in Nevada, but by whom? We don't know.

There are two fundamentally different issues for the supply chain, depending on the company. One is counterfeiting. That's a bad problem. Somebody is trying to substitute shoddy goods under your name or the name of somebody you thought you could trust. That degrades performance and presents really serious liability problems as a result.

The other problem is the intentional hooking, or compromising, of software or chips to do things that they're not meant to do, such as allow backdoors and so on in systems, so that they can be attacked later. That’s a big problem for military and for the intelligence services all around the world.

The reason we have the problem is that nobody knows how to vet a computer chip or software to see that it won't do these squirrelly things. We can test that stuff to make sure it will do what it's supposed to do, but nobody knows how to test the computer chip or two million lines of software reliably to be sure that it won’t also do certain things we don't want it to do.

You can put it in a sandbox or a virtual environment and test it for a lot of things, but you can't test it for everything. It's just impossible. That, in both hardware and software, is the strategic supply-chain problem now, and that's why we have it.

Gardner: So as organizations ramp up their security, as they look towards making their own networks more impervious to attack, their data isolated, their applications isolated, they still have to worry about all of the other components and services that come into play, particularly software. [Registration remains open for The Open Group Conference in Washington, DC beginning July 16.]

Brenner: If you have a worldwide supply chain, you have to have a worldwide supply chain management system. This is hard, and it means getting very specific. It includes managing not only the production process, but also the shipment process. A lot of squirrelly things happen on loading docks, and you have to have a way -- not to bring perfect security to that, which is impossible -- but to make it much harder to attack your supply chain.

Notion of cost

Gardner: Well, Joel, it sounds like we also need to reevaluate the notion of cost. So many organizations today, given the economy and the lagging growth, have looked to lowest cost procedures, processes, suppliers, materials, and aren't factoring in the risk and the associated cost around these security issues. Do people need to reevaluate cost in the supply chain by factoring in what the true risks are that we’re discussing?

Brenner: Yes, but of course, when the CEO and the CFO get together and start to figure this stuff out, they look at the return on investment (ROI) of additional security. It's very hard to be quantitatively persuasive about that. That's one reason why you may see some kinds of production coming back into the United States. How one evaluates that risk depends on the business you're in and how much risk you can tolerate.

This is a problem not just for really sensitive hardware and software, special kinds of operations, or sensitive activities, but also for garden-variety things. If you’re making titanium screws for orthopedic operations, for example, and you’re making them in -- I don’t want to name any country, but let’s just say a country across the Pacific Ocean with a whole lot of people in it -- you could have significant counterfeit problems there.

Explaining to somebody that the screw you just put through his spine is really not what it’s supposed to be and you have to have another operation to take it out and put in another one is not a risk a lot of people want to run.

So even in things like that, which don't involve electronics, you have significant supply-chain management issues. It’s worldwide. I don’t want to suggest this is a problem just with China. That would be unfair.

Gardner: Right. We’ve seen other aspects of commerce in which we can't lock down the process. We can’t know all the information, but what we can do is offer deterrence, perhaps in the form of legal recourse, if something goes wrong, if in fact, decisions were made that countered the contracts or were against certain laws or trade practices. Is it practical to look at some of these issues under the business lens and say, "If we do that, will it deter people from doing it again?"

Brenner: For a couple of years now, I’ve struggled with the question why it is that liability hasn’t played a bigger role in bringing more cyber security to our environment, and there are a number of reasons.

We've created liability for the loss of personal information, so you can quantify that risk. You have a statute that says there's a minimum damage of $500 or $1,000 per person whose identifiable information you lose. You add up the number of files in the breach and how much the lawyers and the forensic guys cost and you come up with a calculation of what these things cost.
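Brenner's point is that the personal-data side of the risk is quantifiable precisely because the statute gives per-record figures. A back-of-the-envelope version of his calculation, using illustrative (not actual statutory) numbers, looks like this:

```python
def breach_cost(records_lost, per_record_damage, legal_fees, forensics_fees):
    """Statutory per-record damages plus legal and forensic costs."""
    return records_lost * per_record_damage + legal_fees + forensics_fees

# Hypothetical figures: 50,000 records at the $500 statutory minimum,
# plus $250k in legal fees and $150k in forensics.
cost = breach_cost(50_000, 500, 250_000, 150_000)
assert cost == 25_400_000
```

No comparable formula exists for stolen trade secrets, which is exactly the asymmetry Brenner describes next: the intellectual-property loss is a business risk you cannot plug into a statute.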

But when it comes to the loss of intellectual property by a company that depends on that intellectual property, you have a business risk, not a legal risk. You don't have much of a legal risk at this point.

You may have a shareholder suit issue, but there hasn’t been an awful lot of that kind of litigation so far. So I don't know. I'm not sure that’s quite the question you were asking me, Dana.

Gardner: My follow on to that was going to be where would you go to sue across borders anyway? Is there an über-regulatory or legal structure across borders to target things like supply chain, counterfeit, cyber espionage, or mistreatment of business practice?

Depends on the borders


Brenner: It depends on the borders you're talking about. The Europeans have a highly developed legal and liability system. You can bring actions in European courts. So it depends what borders you mean.

If you’re talking about the border of Russia, you have very different legal issues. China has different legal issues, different from Russia, as well from Iran. There are an increasing number of cases where actions are being brought in China successfully for breaches of intellectual property rights. But you wouldn't say that was the case in Nigeria. You wouldn't say that was the case in a number of other countries where we’ve had a lot of cybercrime originating from.

So there's no one solution here. You have to think in terms of all kinds of layered defenses. There are legal actions you can take sometimes, but the fundamental problem we’re dealing with is this inherently porous Swiss-cheesy system. In the long run, we're going to have to begin thinking about the gradual reengineering of the way the Internet works, or else this basic dynamic, in which lawbreakers have advantage over law-abiding people, is not going to go away.

Think about what’s happened in cyber defenses over the last 10 years and how little they've evolved -- even 20 years for that matter. They almost all require us to know the attack mode or the sequence of code in order to catch it. And we get better at that, but that’s a leapfrog business. That’s fundamentally the way we do it.

Whether we do it at the perimeter, inside, or even outside before the attack gets to the perimeter, that’s what we’re looking for -- stuff we've already seen. That’s a very poor strategy for doing security, but that's where we are. It hasn’t changed much in quite a long time and it's probably not going to.
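The signature-based model Brenner critiques can be reduced to a toy sketch: the defense flags only byte patterns it has already seen, so a genuinely novel payload passes by definition. The signatures and payloads below are made up for illustration and bear no relation to any real product's rule set.

```python
# "Stuff we've already seen": a list of known-bad byte patterns.
KNOWN_SIGNATURES = [b"\x90\x90\x90\x90", b"DROP TABLE", b"cmd.exe /c"]

def flags_payload(payload: bytes) -> bool:
    """True only if the payload contains an already-known signature."""
    return any(sig in payload for sig in KNOWN_SIGNATURES)

assert flags_payload(b"GET /?q='; DROP TABLE users--")        # known pattern: caught
assert not flags_payload(b"GET /?q=never-seen-before-attack")  # novel: missed
```

Whether the matching happens at the perimeter, inside, or upstream of it, the structural weakness is the same: the defender always moves second.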

Gardner: Why is that the case? Is this not a perfect opportunity for a business-government partnership to come together and re-architect the Internet at least for certain types of business activities, permit a two-tier approach, and add different levels of security into that? Why hasn’t it gone anywhere?

Brenner: What I think you’re saying is different tiers or segments. We’re talking about the Balkanization of the Internet. I think that's going to happen as more companies demand a higher level of protection, but this again is a cost-benefit analysis. You’re going to see even more Balkanization of the Internet as you see countries like Russia and China, with some success, imposing more controls over what can be said and done on the Internet. That’s not going to be acceptable to us.

Gardner: So it's a notion of public and private.

Brenner: You can say public and private. That doesn’t change the nature of the problem. It won’t happen all at once. We're not going to abandon the Internet. That would be crazy. Everything depends on it, and you can’t do that. It’d be a fairy tale to think of it. But it’s going to happen gradually, and there is research going on into that sort of thing right now. It’s also a big political issue.

Gardner: Let’s take a slightly different tack on this. We’ve seen a lot with cloud computing and more businesses starting to go to third-party cloud providers for their applications, services, data storage, even integration to other business services and so forth.

More secure

If there's a limited number, or at least a finite number, of cloud providers, and they can institute the proper security and take advantage of certain networks within networks, then wouldn't that hypothetically make a cloud approach more secure and better managed than the every-man-for-himself situation we have now in enterprises and small to medium-sized businesses (SMBs)?

Brenner: I think the short answer is yes. The SMBs will achieve greater security by basically contracting it out to what are called cloud providers. That's because managing the patching of vulnerabilities, encryption, and other aspects is beyond what most small businesses and many medium-sized businesses can do, are willing to do, or can do cost-effectively.

For big businesses in the cloud, it just depends on how good the big businesses’ own management of IT is as to whether it’s an improvement or not. But there are some problems with the cloud.

People talk about security, but there are different aspects of it. You and I have been talking just now about security meaning the ability to prevent somebody from stealing or corrupting your information. But availability is another aspect of security. By definition, putting everything in one remote place reduces robustness, because if you lose that connection, you lose everything.

Consequently, it seems to me that backup issues are really critical for people who are going to the cloud. Are you going to rely on your cloud provider to provide all of your backup? Are you going to go to a second cloud provider? Are you going to keep some information copied in-house?

What would happen if your information is good, but you can’t get to it? That means you can’t get to anything anymore. So that's another aspect of security people need to think through.
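Brenner's questions map loosely onto the common "3-2-1" backup heuristic: at least three copies, spread across at least two providers or media, with one copy you control independently of the primary provider. A minimal sketch of such a check, with a made-up record format, might look like this:

```python
def meets_3_2_1(copies):
    """Loose 3-2-1 check: >=3 copies, >=2 providers, one in-house copy."""
    providers = {c["provider"] for c in copies}
    in_house = any(c["location"] == "in-house" for c in copies)
    return len(copies) >= 3 and len(providers) >= 2 and in_house

copies = [
    {"provider": "cloud-a", "location": "remote"},
    {"provider": "cloud-b", "location": "remote"},
    {"provider": "self",    "location": "in-house"},
]
assert meets_3_2_1(copies)
assert not meets_3_2_1(copies[:2])  # two copies, no in-house: fails
```

The in-house requirement is what answers Brenner's availability worry: even if the connection to every remote provider is lost, one copy remains reachable.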

Gardner: We’re almost out of time, Joel, but I wanted to get into this sense of metrics, measurement of success or failure. How do you know you’re doing the right thing? How do you know that you're protecting? How do you know that you've gone far enough to ameliorate the risk?

Brenner: This is really hard. If somebody steals your car tonight, Dana, you go out to the curb or the garage in the morning, and you know it's not there. You know it’s been stolen.

When somebody steals your algorithms, your formulas, or your secret processes, you've still got them. You don’t know they’re gone, until three or four years later, when somebody in Central China or Siberia is opening a factory and selling stuff into your market that you thought you were going to be selling -- and that’s your stuff. Then maybe you go back and realize, "Oh, that incident three or four years ago, maybe that's when that happened, maybe that’s when I lost it."

What's going out

So you don't even know necessarily when things have been stolen. Most companies don't do a good job. They're so busy trying to find out what's coming into their network that they're not looking at what's going out.
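The "look at what's going out" idea reduces, in its simplest form, to flagging hosts whose outbound volume far exceeds their own historical baseline. The data model and the z-score threshold below are illustrative assumptions, not a production exfiltration detector.

```python
from statistics import mean, stdev

def egress_anomalies(history, today, z_threshold=3.0):
    """history: {host: [daily outbound MB, ...]}; today: {host: MB}.
    Flag hosts whose traffic today is a z_threshold outlier."""
    flagged = []
    for host, mb_today in today.items():
        samples = history.get(host, [])
        if len(samples) < 2:
            continue  # not enough baseline to judge
        mu, sigma = mean(samples), stdev(samples)
        if sigma == 0:
            sigma = 1e-9  # avoid division by zero for flat baselines
        if (mb_today - mu) / sigma > z_threshold:
            flagged.append(host)
    return flagged

history = {"r-and-d-01": [10, 12, 11, 9, 10], "web-01": [200, 210, 190, 205, 195]}
today = {"r-and-d-01": 900, "web-01": 205}
assert egress_anomalies(history, today) == ["r-and-d-01"]
```

A quiet research server suddenly pushing out hundreds of megabytes is exactly the kind of signal that perimeter-focused, inbound-only monitoring never sees.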

That's one reason this stuff is hard to measure. Another is that ROI is very tough to calculate. On the other hand, there are lots of areas where businesspeople have to make important judgments in the face of risks and opportunities they can't quantify, and we do it.

We're right to want data whenever we can get it, because data generally means we can make better decisions. But we make decisions about investment in R&D all the time without knowing what the ROI is going to be, and we certainly don't know what the return on a particular R&D expenditure will be. We make those investments because people are convinced that if they don't, they'll fall behind and they'll be selling yesterday's products tomorrow.

Why is it that we have a bias toward that kind of risk, when it comes to opportunity, but not when it comes to defense? I think we need to be candid about our own biases in that regard, but I don't have a satisfactory answer to your question, and nobody else does either. This is one where we can't quantify that answer.

Gardner: It sounds as if people need to have a healthy dose of paranoia to tide them over across these areas. Is that a fair assessment?

Brenner: Well, let’s say skepticism. People need to understand, without actually being paranoid, that life is not always what it seems. There are people who are trying to steal things from us all the time, and we need to protect ourselves.

In many companies, you don't see a willingness to do that, but that varies a great deal from company to company. Things are not always what they seem. That is not how we Americans approach life. We are trusting folks, which is why this is a great country to do business in and live in. But we're having our pockets picked and it's time we understood that.

Gardner: And, as we pointed out earlier, this picking of pockets is not just on our block, but could be any of our suppliers, partners, or other players in our ecosystem. If their pockets get picked, it ends up being our problem too.

Brenner: Yeah, I described this risk in my book, “America the Vulnerable,” at great length and in my practice, here at Cooley, I deal with this every day. I find myself, Dana, giving briefings to businesspeople that 5, 10, or 20 years ago, you wouldn’t have given to anybody who wasn't a diplomat or a military person going outside the country. Now this kind of cyber pilferage is an aspect of daily commercial life, I'm sorry to say.

Gardner: Very good. I'm afraid we'll have to leave it there. We’ve been talking with Joel Brenner, the author of “America the Vulnerable: Inside the New Threat Matrix of Digital Espionage, Crime, and Warfare.” And as a lead into his Open Group presentation on July 16 on cyber security, Joel and I have been exploring the current cybercrime landscape and what can be done to better understand the threat and work against it.

This special BriefingsDirect discussion comes to you in conjunction with the Open Group Conference from July 16 to 20 in Washington, D.C. We’ll hear more from Joel and others at the conference on ways that security and supply chain management can be improved. I want to thank you, Joel, for joining us. It’s been a fascinating discussion.

Brenner: Pleasure. Thanks for having me.

Gardner: I’d certainly look forward to your presentation in Washington. I encourage our readers and listeners to attend the conference to learn more. This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for these thought leadership interviews. Thanks again for listening and come back next time.

Register for The Open Group Conference
July 16-18 in Washington, D.C.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: The Open Group.

Transcript of a BriefingsDirect podcast in which cyber security expert Joel Brenner explains the risk to businesses from international electronic espionage. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Friday, May 11, 2012

Investing Well in IT With Emphasis on KPIs Separates Business Leaders from Business Laggards, Survey Results Show

Transcript of a sponsored BriefingsDirect podcast on the results of a survey that show that innovation focusing on information and KPIs drives substantial positive business results.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today we present a sponsored podcast discussion on some fascinating new findings from a recent survey on chief information officer (CIO)-level priorities. We'll uncover what distinguishes leaders from laggards among businesses, and identify which IT approaches and solutions are driving the most powerful business results these days.

To help dig into the survey, explain what it means, and learn how these results can lead to establishing winning new IT strategies, we're joined by Joel Dobbs, President and CEO of Compass Talent Management Group. He's also an Executive in Residence at the School of Business at the University of Alabama at Birmingham (UAB), and a lead blogger and member of the Enterprise CIO Forum.

What’s more, Joel is a retired CIO himself, coming from such organizations as GlaxoWellcome, Schering-Plough, and Eisai. Welcome to our discussion, Joel.

Joel Dobbs: Thank you very much.

Gardner: We're also here with Daniel Dorr, a Worldwide Solutions Manager for HP Enterprise Marketing. Welcome to you, Daniel. [Disclosure: HP is a sponsor of BriefingsDirect podcasts].

Daniel Dorr: Thank you, Dana.

Gardner: Let’s start with just an overview of what we wanted to accomplish. Daniel, what was the idea behind doing this survey at this time?

Dorr: Dana, a lot of companies talk about how important technology is, and we all represent our technology as the right answer to the problem. But if our job is to help our CIO clients better use technology to achieve business results -- and if our job is to help our CIOs work more effectively with their executive committees and CEOs -- the best way for us to help them is to determine which technologies actually correlate with in-market results.

In other words, if we look at revenue leaders in-market, which technology seems to be most closely associated with those who lead in-market performance? It's not technology for technology’s sake, or because it’s exciting or new -- but technology that actually seems to represent business results.

So our goal here was to help our clients do a better job of assessing which technologies lead to in-market business results and which technologies might not.

Gardner: Well, this has been a hot topic for decades, trying to establish the link between technology practices and business results.

Joel, you've been in the trenches as a CIO. You're now involved with the academic view of this at UAB and doing some IT talent work. When you reviewed these results, was there anything that jumped out, that was perhaps something new or interesting?

Not much separation


Dobbs: There were a couple of things that surprised me, one of which is how close statistically the leaders and laggards were in some areas. There was not as much separation in some of the areas that were looked at as I would have expected.

The other thing that surprised me is one of the areas that gets an awful lot of play in the press now -- this whole idea of bring-your-own-device (BYOD) policies for employees. For the most part, this seemed to have been a non-issue for most of these folks. This suggests that either this has not taken hold as much as we would be led to believe, or that companies have just basically decided they're not going to tackle this battle now and will save that for another day.

Gardner: Tell us a little bit, Daniel, about this survey. When did it happen? Who was targeted? HP was involved. Maybe you can tell us how, and then who conducted it?

Dorr: We wanted to understand the difference between market leaders, from a revenue perspective, and market laggards or followers, and see what their IT environments looked like. We surveyed 688 organizations. We spoke to IT decision makers, so we would call that "CIO minus one." We didn’t speak to the CIO directly. We spoke to the people that reported to him or her.

Everyone that we spoke to had to have significant knowledge about applications, information, data center operation, security, and cloud. The survey was conducted across nine geographies (the US, Brazil, Mexico, UK, Germany, France, Japan, China, and Australia) and covered a number of different industry groups.

This was not a public survey. In other words, the people responding didn't know the survey was coming from HP. It was a blind survey. We asked over 55 different questions around areas of application, security, information, cloud, etc. to understand which attributes were most strongly correlated with in-market or revenue performance, and those that weren't.

The questions we were trying to answer were: What do market leaders do versus followers? How do industry leaders differ from followers? Is there a difference depending on the region or the market or the industry? And where do IT decision makers focus on a day-to-day level, versus the CIO's more strategic, two-year forward-looking level?
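As a hedged illustration of the kind of analysis Daniel describes -- correlating a yes/no survey attribute with leader status -- the sketch below uses Pearson correlation over binary responses. The survey's actual statistical method is not stated in this discussion, and the sample data here is invented purely for demonstration.

```python
# Hypothetical sketch: correlate a binary survey attribute with
# revenue-leader status. The data and the choice of Pearson correlation
# are assumptions; HP's actual analysis method was not published.
from statistics import mean

def correlation(xs, ys):
    """Pearson correlation for two equal-length numeric lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# 1 = revenue leader, 0 = follower; 1 = attribute present, 0 = absent
is_leader = [1, 1, 1, 0, 0, 0]
has_automated_audit = [1, 1, 0, 1, 0, 0]

print(round(correlation(is_leader, has_automated_audit), 2))  # prints 0.33
```

An attribute whose presence lines up with leader status gets a correlation near 1.0; an attribute distributed evenly across leaders and followers (like the mainframe finding at the global level) lands near zero.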

Gardner: Just to be clear, the surveys were delivered and answered in the last few months of 2011. Is that right?

Dorr: Exactly right. The results came into us in December 2011. So this is pretty accurate and up-to-date data.

Gardner: How about some of the top findings? Were there some nuts and bolts issues here around workload, automation, server capacity, things like that, that we could look to and just get a sense of who these organizations are and where they are on their journey toward a better IT outcome?

In search of priorities

Dorr: We asked more than 50 questions to understand from organizations where their priorities were and what they were doing today and then we compared that to their in-market performance. And I would say the answers fell into three buckets: They were around infrastructure issues, information and information management, and people and processes.

On the infrastructure side of the equation, we asked a number of questions, but the ones that rose to the top in terms of driving in-market or correlation between revenue performance were probably three or four. A lot of it had to do with application modernization and security, when it came to the infrastructure side of the equation.

For example, market leaders tended to have fewer custom applications and fewer legacy applications. They tended to use their server capacity more efficiently than their peers. Those were some of the big ones on the infrastructure side of the equation.

With security, the market leaders tended to build security, not only into the boundary, but also into the applications themselves, versus the market followers who tended to focus on an us-versus-them mentality, or just boundary security.

… Companies that manage risk more effectively and with more automation definitely outperformed their peers. As a technology company, we're always looking at the infrastructure. We're always talking about how infrastructure can lead to competitive advantage, and we saw that. But a lot of times we forget the people and process side of the equation.

Companies that manage risk more effectively and with more automation definitely outperformed their peers.



One of the other areas that jumped out at me was the need for clarity and agreement of key performance indicators (KPIs). Market-leading companies who outperform in revenue over their peers had more clarity within IT about which KPIs were important and had agreement on those KPIs. Everyone is marching and working toward the same goals. That had a huge impact on me as well.

It’s not just about infrastructure. It’s not just about managing risk. It’s also the people/process side of the equation that is critical in market-leading companies.

Gardner: Joel, when you hear that those who are doing well seem to have fewer custom apps, fewer legacy apps, higher utilization rates on their servers, what does that tell you about these types of organizations?

Dobbs: It tells me a couple of things. We'll start with the second one, server utilization. What I think you're seeing there is the effect of people who have really done a good job with virtualization. You don't have a lot of equipment sitting around idle or running under capacity. So I suspect virtualization probably plays into that difference significantly for a number of people.

Custom and legacy applications were something I hadn't really thought about until I read this material. I suspect that what you're seeing is probably a result of modernization of the applications that I call commodity applications, things like human resources, some of the financial applications, a lot of things that are generic across businesses. You're probably seeing some of the leaders move to more software-as-a-service (SaaS)-type applications in order to free up their staff to work on things that are much more strategic to their business.

Unique value

So the things that they're working on are probably things that are adding unique value to their business, and they're not spending a lot of cycles doing things with generic applications that they can buy and let somebody else manage.

If you're just doing security on the boundaries, that's a cheap way to do security, if you think about it. You put a firewall in place, you configure the thing, and you do the boundary security stuff. But when you're building another layer of security into your applications, that tells me that there's a lot more focus on the realization of the value of what's in there, in terms of the data and the way that it’s used.

There's very much an intentional focus on protecting not only the perimeter of the institution, but making sure that there's added security and protection within the perimeter. I would expect that folks who are really serious about understanding the value of the information within those systems, and [understanding] the risk to their corporate reputation, should those be compromised, are being very intentional about mitigating those risks.

Gardner: So it's a strategic, comprehensive approach to security across the assets -- including the applications.

Daniel, before we move on, a question on the infrastructure. When I saw this, I said that sounds like service-oriented architecture (SOA) -- modernized apps, fewer monolithic stacks, higher utilization vis-à-vis virtualization. Was there anything else that would back up my hunch that services orientation or SOA was also prominent in the way they are doing infrastructure?

Virtualization, in and of itself, did not rise to the surface of market leaders versus followers.



Dorr: You're absolutely right, but the key component here is actually using it for the right purposes. Virtualization was one of the questions, but you'll notice virtualization, in and of itself, did not rise to the surface of market leaders versus followers.

It wasn't just that you're moving to a service-oriented view, but you're actually implementing it in a way that means something to the business. You're actually seeing a change in capacity usage. You're actually seeing a change in custom and legacy applications.

Again, not following that shiny object, but it's implementing it in a way that's strategic to the business, is what we are seeing here that leads to success. It's not just virtualization, but it's using virtualization to its full capacity.

Dobbs: I agree completely.

Gardner: So we have talked a little bit about infrastructure. What were some of the other major areas, Daniel?

Dorr: The second big area was around information. There was a huge difference around the area of audit and compliance. For example, we saw that more than half of the market leaders had automated their audit and compliance, about 52 percent. Market followers tended to be much less. Around 39 percent had automated their audit and compliance.

Information strategy

There was an information strategy in place in both market leaders and market followers. However, market leaders tended to have automated their information-management strategy, versus followers, who just had it documented.

Also, we see a big difference in the use of business intelligence (BI) to automate decision making. About 18 percent of market leaders are automating their decision making using BI tools, while only 7 percent of followers, less than half the leaders' rate, are doing that.

Now, there is still a huge amount of room for growth on both leaders and followers there, but to see only 18 percent rise to the surface already tells you the importance of automating BI decision making as a clear difference for market leadership.

Gardner: Let's go back to Joel on those two items. This gets to a point that I'm really interested in, a movement in business nowadays to much more of a data-driven and analysis-driven decision process. Perhaps the older way might be summed up by the highest paid person's opinion (HPPO) being the way that ultimately decisions were made.

But Joel, how do you react to some of these findings around information management and BI?

Dobbs: There are a couple of things here. One is that there's been an interesting evolution over the last 20 years in this field. We started out in IT automating various business processes. The focus was on making those processes faster or more efficient or something of that sort. As a result of that, we were generating information that had valuable use, but really wasn't being used that much.

What you're seeing with the leaders is that they not only understand it, but they're doing it.



It was during the reengineering revolution in the early '90s that people began to look at that. Along with the uptake of Six Sigma and Lean Sigma, people began looking at harvesting that data that was collected almost as a byproduct of automation and using it for continuous improvement and various other things.

This whole field has matured. Take the example of the retail industry and all the information that's collected as a result of point-of-sale processing and things like that. What we've learned is that that's a rich trove of information that can be mined and used for all kinds of things.

What you're seeing with the leaders is that they not only understand it, but they're doing it. That’s a big differentiator between those who understand it and have the insight and the capabilities to take this information and look at it in different ways. I suspect some of the automating of business, the BI automation, as we were talking about, is really a way of going back and using technology to create options for decision making, based on automated looks at data.

Let's talk about what I think you called, Daniel, the automation of their information strategy, versus documentation. What that tells me is one group is doing it and the other group is just writing it down, and that's a big difference. It's like the difference between what most people do with strategy. Most people develop a strategy, and then comes a nice book that sits on a shelf somewhere, and very little gets done about it.

The ones who are really leaders are the people who develop a strategy and then part of that strategy is a strategy to implement the strategy. That’s what this automation that you saw among the leaders really reflects -- not just talking about it, but actually doing it.

Single view

Gardner: It strikes me too that this gets back to that theme that we raised earlier about being controlled and being comprehensive as an IT organization. You can’t gain that single view of the customer and you can’t gain insight across an entire business process, purchasing, supply chain, the relationship between cost and outcomes, without that ability to gather all the different bits and pieces and then manage that in some full way.

So it strikes me, again, as an indicator of maturity and comprehensive control vis-à-vis IT and makes them therefore more powerful when it comes to this level of insight. Daniel, what other areas were part of the top findings, and where can we go now to the next stage, which would perhaps better define what distinguishes leaders?

Dorr: Just to close on that information discussion, I agree completely with Joel’s points. If you think about it, there were seven key attributes that rose to the surface for market leaders, revenue leaders, and revenue followers.

Three of those were around information. Automating your audit and compliance, having an automated information strategy. In other words, as Joel said, doing it, versus just writing it down, and really using BI for decision making. Three out of seven are around information. So clearly this is a key theme for in-market performance.

One of the things we do at HP is workshops for CIOs to help align business and IT and identify the impact that IT can have on the business. This comes up every single workshop we do.

I don't think we can overstate the importance of helping the business see what's happening and understand what's happening through automating audit and compliance.



We did it with a retailer recently. It took them days to process in-store information, in order to know what SKUs were selling and how well marketing programs were doing. By the time they had that information, it was too late for them to do anything.

They couldn’t change the SKUs on shelf. They couldn’t update, migrate, manage, or move the marketing program into new regions or what have you. As a result, their performance in-market clearly showed the difference. They were at a 20 percent disadvantage to the revenue leader in their category.

So I don't think we can overstate the importance of helping the business see what's happening and understand what's happening through automating audit and compliance, through actually implementing the information-management strategy, and through automating as much decision making as possible using BI.

Dobbs: I would echo that and add one thing. Daniel pointed out that there is increasingly a competitive advantage. The competitive advantage becomes not just doing it, but doing it faster than your competitors and being able to understand the meaning and the application of the data ahead of your competitor.

The retail example is a great one, where you're lagging days behind in your ability to harvest and use the information. Increasingly, the competitive advantage comes from being able to make adjustments and move much more quickly, whether it's deciding where to place inventory or how much inventory you need to keep on hand, and all those kinds of things. Time is money, and being able to move quickly can be a huge advantage.

What about cloud?

Gardner: We haven’t talked too much about cloud computing, and this did come up as one item that distinguishes leaders over laggards. Perhaps we could address that. Daniel, what is it about cloud that popped out in this survey?

Dorr: The focus of the survey was what capabilities clients have today and how that correlates to their revenue performance. We didn't see a lot of cloud attributes rising to the surface in people's current capabilities. We did, however, see it rising to the surface in the focus area, where we asked IT decision makers, the CIO minus one, what was important to them. We did see a pretty significant difference between what market leaders, revenue leaders, thought was important about cloud versus market followers.

In fact, almost half of revenue leaders see cloud as incredibly important to them, versus about half that proportion among market followers. So we're seeing a lot more priority and focus on cloud computing going forward.

We didn’t see it driving current revenue performance, which makes sense. Cloud is somewhat of a new technology. We haven’t seen it fully deployed in many cases in driving today’s revenue.

Gardner: For the benefit of our listeners, Daniel, maybe we could just go through the list at a prioritized basis, with descending priority, on what distinguished the leaders over the laggards. I think the top one is security as we mentioned, but let’s just go through it on a list basis, so they can get a sense of the importance.

Cloud is somewhat of a new technology. We haven’t seen it fully deployed in many cases in driving today’s revenue.



Dorr: Sure. Of the 50 attributes that we asked our CIO minus one IT decision makers and directors, what was happening within their IT environment, seven of those attributes rose to the surface, and they fell into three buckets, as we talked about briefly before. One was around the infrastructure side of the equation or the core computing environment, one was around information, and then the final one was around people and processes.

… With the survey, once we identified which specific attributes differentiated market leaders from market laggards or followers from a revenue perspective, we then built a maturity score and scored each organization on those key attributes. You can see a clear difference between those with a higher score, a higher maturity in their IT environment around those specific areas, and their in-market performance.

Specific areas

So from the infrastructure side, it was custom applications and legacy applications. Leaders had fewer custom applications -- 38 percent versus the followers at 45 percent.

Leaders had fewer legacy applications -- 25 percent versus followers at 32 percent.

Leaders used their server capacity more efficiently. They used about 80 percent of their server capacity at peak usage, versus followers using only 71 percent.

Leaders had security built into the applications as well as at the boundary, versus only a boundary-level security, inside/outside view of the world.

In the information area, leaders automated audit and compliance at an average of about 52 percent versus followers at 39 percent.

Leaders had automated their information strategy, versus followers only documenting their information strategy.

Leaders tended to use more BI and automated decision making versus followers. So 18 percent of leaders had automated business decision making using BI, versus followers at only 7 percent.

Then there is the people and processes side -- and this is an area that CIOs can start working on right now without spending a cent -- which was clarity and agreement on KPIs. We saw a big difference in market leaders. There was a high degree of clarity within their organizations about what the KPIs were and agreement on those KPIs, versus only a moderate level of agreement within market followers.

That’s an area where CIOs can take action today. They don’t even have to talk to a vendor or an analyst at all. They can walk right into the CEO’s office and start working on that problem today.
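To make the maturity-scoring idea concrete, here is a minimal sketch assuming a simple unweighted count of the seven leader attributes listed above. The attribute names and the equal weighting are assumptions for illustration; the survey's real maturity model and weights were not published.

```python
# Hypothetical sketch of the attribute-based maturity score Daniel describes.
# The seven attribute names and the unweighted sum are assumptions; the
# survey's actual scoring model was not published.

ATTRIBUTES = [
    "fewer_custom_apps",              # infrastructure
    "fewer_legacy_apps",
    "efficient_server_capacity",
    "security_in_apps_and_boundary",
    "automated_audit_compliance",     # information
    "automated_information_strategy",
    "bi_automated_decisions",
]

def maturity_score(org: dict) -> int:
    """Count how many of the seven leader attributes an organization has."""
    return sum(1 for attr in ATTRIBUTES if org.get(attr, False))

leader = {attr: True for attr in ATTRIBUTES}
follower = {"fewer_custom_apps": False, "automated_audit_compliance": True}

print(maturity_score(leader))    # prints 7
print(maturity_score(follower))  # prints 1
```

Scoring each surveyed organization this way and comparing scores against revenue performance is one plausible reading of the "maturity score" comparison described above.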

Gardner: Let’s move to a separate lens to view this through. One of the things you asked was a series of questions that led to some conclusions about what distinguishes those who do best, and what leaders were focused more on. You broke it out into five different areas and you got some indicators of why it’s important, leaders versus laggards. Perhaps you could run through those as well.

Leaders had security built into the applications as well as at the boundary, versus only a boundary-level security, inside/outside view of the world.



Dorr: At the end of the survey, we asked them areas of importance, and we gave them security, information and insight, infrastructure convergence, application transformation, and cloud computing. We asked them to rank which were the most important to them. And we asked them to rank their current capabilities.

This was different from the attributes. For example, most of our IT decision makers ranked security, defined as keeping the lights on, as the number one priority. When they ranked their current capability, they rated themselves quite high, saying they do that well today, although leaders tended to feel they were doing a better job of keeping the lights on than revenue followers.

Number two on the list was information and insight, in terms of driving what is important today from an IT organization. Again, the average of how important it is was not significantly different between leaders and followers. What was significantly different was how well they rated themselves.

We saw this in the individual attributes, but also when they ranked it at the end as well. Leaders tended to believe they were doing a better job managing information and insight than their followers, by almost twice as much.

No huge difference

There were no huge differences on converged infrastructure or applications between leaders and followers, but the area where we saw a big difference was in cloud computing. Leaders ranked it much higher in importance and believed their current capabilities are much higher than their industry peers.

Gardner: Joel, let's go back to something you mentioned earlier. You were a little bit surprised that the difference on some of these areas between the leaders and the laggards was smaller -- there wasn’t a great deal of difference. Which of those were you referring to and what does that tell us about a baseline of IT functionality that everyone has, but it doesn’t really distinguish anyone either?

Dobbs: Some of the things really surprised me, security actually. The magnitude of that difference was somewhat surprising. Things like the infrastructure convergence were actually fairly close. I expected a little bit more of a spread there.

Around information and insight, there's a pretty good difference. It's statistically significant because of the sample size. In many ways I would have expected that spread to be even larger, because so many laggard companies are really just operationally focused, keeping the lights on, etc.

I was even surprised that you had that large of a percentage that rated themselves very capable. I would have expected it to be lower than that. In some ways, I would have expected more of the leaders to have considered themselves very capable. So that was a little bit lower than I would have expected.

We didn’t see a huge difference in the regions, particularly the U.S., Latin America, or Asia Pacific and Japan.



For cloud computing, the capabilities are probably not that surprising but, again, the spread was a little less than I would have expected. Because it's a new technology, one would expect that the leading companies would have been much further out front, beginning to look at ways of exploring the capability there.

But you had only about 36 percent versus 20 percent. It's statistically significant, but was somewhat surprising to me that the gap was not even larger than that.

If you look back at how they ranked cloud computing by importance, it's the same sort of thing. I would have expected a higher percentage to have been looking, if that's something that's potentially very important, particularly at some of the capabilities that are available today in SaaS. It's a way of getting away from having to maintain rudimentary legacy systems that really don't add a lot of business value.

Gardner: Let's slice and dice this a different way. Daniel, what about regional differences, or similarities, but let's start with differences? What were some of the biggest differences by region that jumped out at you? Then, maybe we could ask Joel to tell us what he thinks that means in terms of the progression of these technologies and maturity models around the globe.

Dorr: We didn't see a huge difference in the regions, particularly the U.S., Latin America, or Asia Pacific and Japan. In those regions we saw a little difference in terms of platforms -- Windows versus Linux, virtualization, and so forth -- but no huge issues. It was more a matter of personal preference.

Mainframe in Europe


The one that did jump out at us though was Europe. The biggest difference in Europe is that there is actually a growing movement around the mainframe. At the global level, we saw the mainframe was irrelevant to market leaders versus market followers.

In other words, some market leaders were moving more towards the mainframe, some market followers were moving more towards the mainframe. Some market leaders were moving off of the mainframe and some market followers were doing the same. So there was no correlation between a mainframe strategy and in-market performance anywhere, except in Europe.

In Europe, we saw that market leaders were those that were moving and growing their mainframe strategy, versus market followers who were just maintaining their current mainframe strategy.

Gardner: What does that tell you, Joel? What is it about Europe that has them so interested in mainframes or perhaps that leads them to be successful?

Dobbs: That's a very good question. That's actually a surprising finding. Having worked in Europe for a number of years earlier in my career, there are two things that I suspect might be factors, and these are generalizations. So I don't know if they're applicable in all cases.

In Europe, we saw that market leaders were those that were moving and growing their mainframe strategy.



What you see a lot of times in European companies is a much more conservative approach to a lot of things in business, and IT is certainly one of those. So change doesn't always come as rapidly in some of those cultures as one would see in cultures with a higher risk tolerance, like you may see in U.S. and other areas. That may be one thing.

The other thing I would wonder about is the extent to which the economic environment there may have an impact on this. You're seeing growth in the mainframe sector, largely because companies may be avoiding expensive investments in other technologies and simply expanding on what they already have. There are a lot of implications, in terms of software licensing and a number of other things, in moving to another platform.

So I wonder if the economic environment there has been a factor as well. It's hard to say, but that's an interesting and somewhat perplexing finding.

Gardner: It makes sense that they are maintaining their current systems rather than growing and modernizing.

How about vertical industries, Daniel, anything that jumps out there in terms of which vertical industries that you examined and broke out in your survey seemed to be doing well, and for what reasons?

Vertical industries

Dorr: We looked at retail, communication service providers or telco, manufacturing, energy, healthcare, and banking. The results in each of the industries were comparable to what we saw at the global level.

We saw a couple of differences in each of the industries. In retail, for example, when it came to their information strategy, a new aspect was how they managed both structured and unstructured data, versus only structured information.

This makes sense, if you think about it from a retail perspective. There is a lot of qualitative information coming in for leaders to understand, not just that the inventories are up or down or sales are up or down, but to understand why. So that was a big one that’s different from a retail perspective.

In communication service providers, there were no big differences between what was happening at that industry level versus the global level.

In manufacturing, we saw a little difference in terms of external IT spend on new applications. In this case, leaders were spending less on new applications than followers.

In manufacturing, we saw a little difference in terms of external IT spend on new applications. In this case, leaders were spending less on new applications than followers.



In the energy sector, there were no significant differences there, or in healthcare. In banking, the one biggest one was cloud capabilities. We did see a lot more interest in cloud in banking than we saw in some of the other areas. Otherwise, it was very similar to what we saw at the global level.

Gardner: So we've got some interesting takeaways here about the role of modernizing, gaining visibility, measuring along the way, being comprehensive in how IT approaches these problems, being responsive to the business on the business terms rather than the technology terms, with an emphasis on culture as well and the people and the process. We've talked about this at a high level.

Daniel, for those folks who are intrigued and would like to get some of these statistics and findings themselves, do you have a place they can go to learn more to either perhaps see a slide deck, a white paper? What’s available for them?

Dorr: A couple of places. First of all, you can join us at the HP Discover 2012 event in Las Vegas in June. We'll be presenting these results and sharing them with attendees there. In addition, they will be posted on hp.com.

Gardner: Great. Joel, what takeaways do you have from this in terms of whether people should readjust their thinking or perhaps take a pause and ask what they can be doing different when they sort of tease out some of the findings here?

Impact of investments


Dobbs: There was an interesting study published by MIT just a month or so ago that looked at a number of companies. What they found is that, for some of the companies investing heavily in IT, those IT investments actually had a greater impact on profitability than the same amount of money invested in research and development or in advertising. That's a striking finding.

I think what happens, when you delve underneath these companies who get such great returns on IT, you find two or three different things that are embodied in what we saw in some of the leaders here.

One of them is really good governance around decision making. The second thing is probably ownership of IT by the entire executive team. And I think the third thing is that they're probably measuring their return using business metrics on the investments that they make.

That’s what differentiates the leaders from the laggards -- they're approaching IT holistically as a core part of their business strategy, instead of seeing it as a support function or a back-office function.

And things like this study that we've just been talking about today, as well as the MIT study, help add credence to the idea that money is well invested in IT, and I emphasize well-invested. It can have a tremendous payback, but only if you use it wisely.

Gardner: And that sort of runs counter to the perception of IT as a cost center, rather than as an enabler for growth and opportunity.

Dobbs: Precisely.

Gardner: Okay. Daniel, last word to you, are there takeaways or areas that we may not have covered that you think we should also uncover here?

Dorr: Joel said it very eloquently. There is a large body of research. Now, we have HP's own research. We have the MIT study, showing that there is a clear correlation between technology and in-market revenue results. As CIOs, we should feel confident to walk into the CEO’s office and talk to them about the strategic benefits that we can offer the organization.

The two biggest areas that we should be having conversations with our business counterparts today are clearly around information and KPIs. If we have agreement on those, we've covered more than half of the key attributes that we see between market leaders and market followers.

So there's a lot of opportunity for us in IT to start playing an even bigger leadership role in helping our companies innovate and drive in-market results. I look forward to seeing what the results look like two years from now, once we see cloud and other things deployed and driving even bigger benefits.

Gardner: As you point out, there's a lot more room for growth around those BI and analytics benefits. They're already sort of showing a great deal of worth even though we are still early into it.

You've been listening to a sponsored BriefingsDirect podcast discussion on new findings from a recent survey on priorities for IT organizations and what distinguishes leaders from laggards in the field, based on their business outcomes.

I'd like to thank our guests, Joel Dobbs, President and CEO of Compass Talent Management Group, as well as Executive in Residence at the School of Business at the University of Alabama at Birmingham. He is also a lead blogger and a member of the Enterprise CIO Forum.

And we've also been joined by Daniel Dorr, Worldwide Solutions Manager for HP Enterprise Marketing. Thanks to you both.

Dobbs: Thank you.

Dorr: Thank you.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks also to you for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

Transcript of a sponsored BriefingsDirect podcast on the results of a survey that show that innovation focusing on information and KPIs drives substantial positive business results. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.
