
Thursday, September 08, 2016

How Always-Available Data Forms the Digital Lifeblood for a University Medical Center

Transcript of a discussion on how adopting storage innovation protects a large hospital from data disruption and adds operational benefits.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer podcast series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on technology innovation -- and how it's making an impact on people's lives.

Our next digital business transformation case study examines how the Nebraska Medical Center in Omaha consolidated and unified its data-protection capacities. We'll learn how adopting storage innovation protects the state's largest hospital from data disruption and adds operational simplicity to complex data lifecycle management.

To describe how more than 150 terabytes of data remain safe and sound, we're joined by Jeff Bergholz, Manager of Technical Systems at The Nebraska Medical Center in Omaha. Welcome, Jeff.

Jeff Bergholz: Glad to be here.

Gardner: Tell us about the major drivers that led you to seek a new backup strategy as a way to keep your data sound and available no matter what.

Bergholz: At Nebraska Medicine, we consist of three hospitals with multiple data centers. We try to keep an active-active data center going. Epic is our electronic medical record (EMR) system, and with that, we have a challenge of making sure that we protect patient data as well as keeping it highly available and redundant.

We were on HPE storage for that, and with it, we were really only able to do a clone-type process between data centers and keep retention of that data, but it was a very traditional approach.

A couple of years ago, we did a beta program with HPE on the P6200 platform, creating a tertiary replica of our patient data. With that, this past year, we augmented our data protection suite. We went from license-based to capacity-based licensing, and we introduced some new D2D dedupe devices, and StoreOnce as well. What that affords us is the ability to easily replicate that data over to another StoreOnce appliance with minimal disruption.

Part of our goal is to keep backup available for potential recovery solutions. With all the cyber threats that are going on in today's world, we've recently increased our retention cycle from 7 weeks to 52 weeks. We saw and heard from the analysts that the average vulnerability sits in your system for 205 to 210 days. So, we had to come up with a plan for what it would take to provide recovery in case something were to happen.
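To make that retention arithmetic concrete, here is a minimal Python sketch; the 7-week and 52-week cycles and the 205-to-210-day dwell time come from the discussion, while the extra detection margin is an assumption.

```python
# Rough sketch of the retention-versus-dwell-time reasoning described above.
# The 7-week/52-week figures and the 205-210 day dwell time come from the
# discussion; the extra detection margin is an illustrative assumption.

OLD_RETENTION_DAYS = 7 * 7      # 7 weeks of backups kept
NEW_RETENTION_DAYS = 52 * 7     # 52 weeks of backups kept
AVG_DWELL_DAYS = 210            # how long a typical intrusion sits undetected

def covers_dwell(retention_days: int, dwell_days: int, margin_days: int = 30) -> bool:
    """True if retention still holds a clean copy from before the intrusion,
    allowing some extra time to detect and respond (margin_days is assumed)."""
    return retention_days >= dwell_days + margin_days

print("7-week retention covers 210-day dwell: ", covers_dwell(OLD_RETENTION_DAYS, AVG_DWELL_DAYS))
print("52-week retention covers 210-day dwell:", covers_dwell(NEW_RETENTION_DAYS, AVG_DWELL_DAYS))
```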

We came up with a long-term solution and we're enacting it now. Combining HPE 3PAR storage with the StoreOnce, we're able to more easily move data throughout our system. What's important there is that our backup windows have greatly been improved. What used to take us 24 hours now takes us 12 hours, and we're able to guarantee that we have multiple copies of the EMR in multiple locations.

We demonstrate it, because we're tested at least quarterly by Epic as to whether we can restore back to where we were before. Not only are we backing it up, we're also testing and ensuring that we're able to reproduce that data.

More intelligent approach

Gardner: So it sounds like a much more intelligent approach to backup and recovery with the dedupe, a lower cost in storage, and the ability to do more with that data now that it’s parsed in such a way that it’s available for the right reason at the right time.

Bergholz: Resource-wise, we always have to do more with less. With our main EMR, we're looking at potentially 150 terabytes of data that dedupe shrinks down greatly, and our overall storage footprint for all other systems is approaching 4 petabytes.

We've seen some 30:1 deduplication ratios, which really has allowed my staff and other engineers to be more efficient and has freed up some of their time to do other things, as opposed to having to manage the normal backup and retention of that data.
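For a rough sense of what a 30:1 reduction means at this scale, a back-of-the-envelope Python sketch follows; the 150-terabyte and 4-petabyte figures are from the conversation, and assuming the ratio applies uniformly is a simplification.

```python
# Back-of-the-envelope effect of a 30:1 data-reduction ratio on backup capacity.
# The 150 TB EMR and ~4 PB overall figures come from the discussion; assuming
# the 30:1 ratio applies uniformly is a simplification for illustration.

DEDUPE_RATIO = 30.0

def physical_tb(logical_tb: float, ratio: float = DEDUPE_RATIO) -> float:
    """Approximate physical capacity needed to hold logical_tb of backup data."""
    return logical_tb / ratio

emr_logical_tb = 150.0
all_systems_logical_tb = 4000.0   # ~4 PB expressed in TB

print(f"EMR:         {emr_logical_tb:7.1f} TB logical -> {physical_tb(emr_logical_tb):7.1f} TB on disk")
print(f"All systems: {all_systems_logical_tb:7.1f} TB logical -> {physical_tb(all_systems_logical_tb):7.1f} TB on disk")
```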

We're always challenged to do more and more. We grow 20 to 30 percent annually, but we're not going to get 20 to 30 percent more resources every year. So, we have to work smarter with less and leverage the technologies that we have.

Gardner: Many organizations these days are using hybrid media across their storage requirements. The old adage was that for backup and recovery, use the cheaper, slower media. Do you have a different approach to that and have you gone in a different direction?

Bergholz: We do, and backup data is as important to us as the data that exists out there. Time and time again, we've had to demonstrate the ability to restore in different scenarios within the accepted time to restore and bring service back. Clinicians aren't going to wait for that. When clinicians or caregivers are taking care of patients, they want that data as quickly as possible. While it may not be the EMR, it may be some ancillary documents that they need in order to provide better care.

We're able, upon request, to enact and restore in 5 to 10 minutes. In many cases, once we receive a ticket or a notification, we have full data restoration within 15 minutes.

Gardner: Is that to say that you're all flash, all SSD, or some combination? How did you accomplish that very impressive recovery rate?

Bergholz: We're pretty much all dedupe-type devices. It’s not necessarily SSD, but it's good spinning disk, and we have the technology in place to replicate that data and have it highly available on spinning disk, versus having to go to tape to do the restoration. We deal with bunches of restorations on a daily basis. It’s something we're accustomed to and our customers require quick restoration.

In a consolidated strategic approach, we put the technology behind it. We didn't do the cheapest thing; we did the best thing, and having an active-active data center and backing up across both data centers enables us to do it. So, we did spend money on the backup portion, because it's important to our organization.

Gardner: You mentioned capacity-based pricing. For those of our listeners and readers who might not be familiar with that, what is that and why was that a benefit to you?

Bit of a struggle

Bergholz: It was a little bit of a struggle for us. We were always traditionally client-based or application-based in the backup. If we needed to back up Microsoft Exchange mailboxes, we had to have an Exchange plug-in. If we had Oracle, we had to have an Oracle plug-in, or a SQL plug-in for SQL.

While that was great and enabled us to do a lot, we were always having to get another plug-in to do something new. When we saw the dedupe compression ratios we were getting, going to a capacity-based license allowed us to strategically and tactically plan for any increase in our environment. So now, we can buy in chunklets and keep ahead of the game, making sure that we're effective there.
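A capacity-based license turns that into a simple planning exercise. The sketch below projects how long a purchased capacity block lasts at the 20-to-30-percent growth cited; the starting capacity and chunklet size are illustrative assumptions, not actual licensing terms.

```python
# Sketch of capacity planning under a capacity-based backup license, assuming
# 20-30 percent annual data growth (figure from the discussion). The starting
# capacity and the size of each purchased increment ("chunklet") are
# illustrative assumptions, not actual licensing terms.

def years_of_headroom(licensed_tb: float, current_tb: float, growth_rate: float) -> int:
    """How many whole years of growth the licensed capacity covers."""
    years = 0
    while current_tb * (1 + growth_rate) <= licensed_tb:
        current_tb *= 1 + growth_rate
        years += 1
    return years

current_tb = 150.0          # assumed current protected front-end capacity
chunklet_tb = 50.0          # assumed size of one purchased capacity increment

for growth in (0.20, 0.30):
    licensed = current_tb + 2 * chunklet_tb   # e.g. buy two chunklets ahead
    print(f"{growth:.0%} growth: {licensed:.0f} TB licensed covers "
          f"{years_of_headroom(licensed, current_tb, growth)} year(s) of growth")
```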

We're in the throes of enacting an archive solution through a product called QStar, which I believe HPE is OEM-ing, and we're looking at that as a long-term archive process. That goes to a linear tape file system (LTFS), utilizing the management tools that product brings us to afford the long-term archive of patient information.

Our biggest challenge is that we never delete anything. It's always hard with any application. Because of the age of the patient, many cases are required to be kept for 21 years; some, 7 years; some, 9 years. And we're a teaching hospital and research is done on some of that data. So we delete almost nothing.

In the case of our radiology system, we're approaching 250 terabytes right now. Trying to back up and restore that amount of data with traditional tools is very ineffective, but we need to keep it forever.

By going to a tertiary-type copy, which this technology brings us, we have our source array, our replicated array, plus now a tertiary copy to take that to, which is our LTFS solution.
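As a minimal way to picture that three-copy layout, the sketch below models a source copy, a replicated copy, and a tertiary LTFS archive copy; the names and purposes are illustrative, not the actual configuration.

```python
# Minimal representation of the three-copy layout described above: a source
# array, a replicated array in the second data center, and a tertiary
# long-term copy on an LTFS-based archive tier. Names are illustrative.

from dataclasses import dataclass

@dataclass
class CopyTarget:
    name: str
    media: str
    purpose: str

copy_chain = [
    CopyTarget("dc1-source-array",  "primary disk array",    "production copy"),
    CopyTarget("dc2-replica-array", "replicated disk array", "fast restore copy"),
    CopyTarget("ltfs-archive-tier", "LTFS tape",             "long-term retention / archive"),
]

for tier, target in enumerate(copy_chain, start=1):
    print(f"copy {tier}: {target.name:<18} on {target.media:<22} -> {target.purpose}")
```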

Gardner: And with your backup and recovery infrastructure in place, and a sense of confidence that comes with that, has that translated back into how you approach the larger data lifecycle management equation? That is to say, are there benefits from the assurance of quality backups that allow people to do things they may not have done, or not worried about, and therefore get a better business transformation outcome for your patients and your clinicians?

Bergholz: From a leadership perspective, there's nothing real sexy about backup. It doesn't get oohs and ahs out of people, but when you need data to be restored, you get the oohs and ahs and the thank-yous and the praise for doing that. Being able to demonstrate solutions time and time again builds confidence with leadership throughout the organization, and it makes those people sleep safer at night.

Recently, we achieved HIMSS Stage 7. One of the remarks from that group was that a) we hadn't had any sort of production outage, and b) when they asked a physician on the floor what he does when things go down or when something is lost, he said the awesome part here is that we haven't gone down and, when we lose something, we're able to restore it in a very timely manner. That was noted on our award.

Gardner: Of course, many healthcare organizations have been using thin clients and keeping everything at the server level for a lot of reasons, an edge-to-core integration benefit. Would you feel more enabled to go into mobile and virtualization knowing that everything kept on the data-center side is secure and backed up, without worrying about the fact that you don't have any data on the client? Is that factored into any of your architectural decisions about how to handle the client side?

Desktop virtualization

Bergholz: We have been in the throes of desktop virtualization. We do a lot of Citrix XenApp presentation of applications, which keeps the data in the data center, and a lot of our desktop devices connect to that environment.

The next natural progression for us is desktop virtualization (VDI), ensuring that we're keeping that data safe in the data center, ensuring that we're backing it up, and protecting the patient information on it. It's an interesting thought and philosophy. We tried to sell it as an ROI-type initiative to start with, but by the time you start putting all the pieces of the puzzle together, the ROI really doesn't pan out. At least that's what we've seen in two different iterations.

Although it can be somewhat cheaper, it's not significant enough to make a huge launch in that route. But the main play there, and the main support we have organizationally, is from a data-security perspective. Also, it's the ease of managing the virtual desktop environment. It frees up our desktop engineers from being feet on the ground, so to speak, to being application engineers who can layer in the applications to be provisioned through the virtual desktop environment.

And one important thing in the healthcare industry is that when you have a workstation that has an issue and requires replacement or re-imaging, that’s an invasive step. If it’s in a patient room or in a clinical-care area, you actually have to go in, disrupt that flow, put a different system in, re-image, make sure you get everything you need. It can be anywhere from an hour to a three-hour process.

We do have a smattering of thin devices out there. When there are issues, it's merely a matter of redeploying a gold image to them. The great part about thin devices versus thick devices is that in a lot of cases, they're operating in a sterile environment. With traditional desktops, the fans are sucking air, which is an infection-control concern; there's noise; and perhaps they're blowing dust within a room if it's not entirely clean. Solid-state devices are a perfect play there. It's really a drop-off, unplug, and re-plug sort of technology.

We're excited about that and what it will bring to the overall experience. Our guiding principle is that you have the same experience no matter where you're working. Getting from Step A to Step Z is a journey. So, you do that a little bit at a time and you learn as you go along, but we're going to get there and we'll see the benefit of that.

Gardner: And ensuring the recovery and veracity of that data is a huge part of being able to make those other improvements.

Bergholz: Absolutely. What we've seen from time to time is that users, while they're fairly knowledgeable, save their documents wherever they choose to save them. Policy is to make sure you put them within the data center, but that may or may not always be adhered to. By going to desktop virtualization, they won't have any other choice.

A thin client takes that a step further and ensures that nothing gets saved back to a device, where that device could potentially disappear and cause a situation.

We do encrypt all of our stuff. Any device that's out there is covered by encryption, but still there's information on there. It’s well-protected, but this just takes away that potential.

Gardner: I'm afraid we'll have to leave it there. We've been learning how the Nebraska Medical Center in Omaha consolidated and unified its data protection capacities, and we've heard how adopting storage innovation protects the state's largest hospital from any data disruption and adds operational benefits along the transformational architecture for edge-to-core data protection.

So please join me in thanking our guest, Jeff Bergholz, Manager of Technical Systems at The Nebraska Medical Center in Omaha. Thank you, Jeff.

Bergholz: Thank you.

Gardner: And thanks as well to our audience for joining us for this Hewlett Packard Enterprise Voice of the Customer podcast. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored technology innovation discussions. Thanks again for listening, and do please come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how adopting storage innovation protects a large hospital from data disruption and adds operational benefits. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Tuesday, June 15, 2010

HP Data Protector, a Case Study on Scale and Completeness for Total Enterprise Data Backup and Recovery

Transcript of a BriefingsDirect podcast from the HP Software Universe Conference in Washington, DC on backing up a growing volume of enterprise data using HP Data Protector.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the HP Software Universe 2010 Conference in Washington, DC. We're here the week of June 14, 2010 to explore some major enterprise software and solutions trends and innovations making news across HP's ecosystem of customers, partners, and developers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout this series of HP-sponsored Software Universe Live Discussions.

Our topic for this conversation focuses on the challenges and progress in conducting massive and comprehensive backups of enterprise live data, applications, and systems. We'll take a look at how HP Data Protector is managing and safeguarding petabytes of storage per week across HP's next-generation data centers.

The case study sheds light on how enterprises can consolidate their storage and backup efforts to improve response and recovery times, while also reducing total costs.

To learn more about high-performance enterprise scale storage and reliable backup, please join me in welcoming Lowell Dale, a technical architect in HP's IT organization. Welcome to BriefingsDirect, Lowell.

Lowell Dale: Thank you, Dana.

Gardner: Lowell, tell me a little bit about the challenges that we're now facing. It seems that we have ever more storage and requirements around compliance and regulations, as well as the need to cut costs. Maybe you could just paint a picture for me of the environment that your storage and backup efforts are involved with.

Dale: One of the things that everyone is dealing with these days is pretty common and that's the growth of data. Although we have a lot of technologies out there that are evolving -- virtualization and the globalization effect with running business and commerce across the globe -- what we're dealing with on the backup and recovery side is an aggregate amount of data that's just growing year after year.

Some of the things that we're running into are the effects of consolidation. For example, we end up trying to backup databases that are getting larger and larger. Some of the applications and servers that consolidate will end up being more of a challenge for some of the services such as backup and recovery. It's pretty common across the industry.

In our environment, we're running about 93,000-95,000 backups per week, with an aggregate data volume of about 4 petabytes of backup data and 53,000 run-time hours. That's about 17,000 servers' worth of backup across 14 petabytes of storage.
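For scale, a few simple averages can be derived from those weekly figures; the sketch below is just arithmetic on the numbers quoted, not operational metrics from HP IT.

```python
# Quick averages derived from the weekly figures quoted above; these are
# simple divisions for a sense of scale, not reported operational metrics.

backups_per_week = 94_000          # midpoint of the 93,000-95,000 range quoted
aggregate_pb_per_week = 4.0        # ~4 PB of backup data per week
runtime_hours_per_week = 53_000
servers = 17_000

print(f"Average data per backup job : {aggregate_pb_per_week * 1024 * 1024 / backups_per_week:,.0f} GB")
print(f"Average job duration        : {runtime_hours_per_week / backups_per_week * 60:,.1f} minutes")
print(f"Average jobs per server/week: {backups_per_week / servers:,.1f}")
```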

Gardner: Tell me a bit about applications. Is this a comprehensive portfolio? Do you do triage and take some apps and not others? How do you manage what to do with them and when?

Slew of applications

Dale: It's pretty much every application that HP's business is run upon. It doesn’t matter if it's enterprise warehousing or data warehousing or if it's internal things like payroll or web-facing front-ends like hp.com. It's the whole slew of applications that we have to manage.

Gardner: Tell me what the majority of these applications consist of.

Dale: Some of the larger data warehouses we have are built upon SAP and Oracle. You've got SQL databases and Microsoft Exchange. There are all kinds of web front-ends, whether it's Microsoft IIS or any type of Apache. There are things like SharePoint Portal Services, of course, that have database back-ends that we back up as well. Those are just a few that come to mind.

Gardner: What are the major storage technologies that you are focusing on that you are directing at this fairly massive and distributed problem?

Dale: The storage technologies are managed across two different teams. We have a storage-focused team that manages the storage technologies. They're currently using HP Surestore XP Disk Array and EVA as well. We have our Fibre Channel networks in front of those. In the team that I work on, we're responsible for the backup and recovery of the data on that storage infrastructure.

We're using the Virtual Library Systems that HP manufactures as well as the Enterprise System Libraries (ESL). Those are two predominant storage technologies for getting data to the data protection pool.

Gardner: One of the other trends, I suppose, nowadays is that backup and recovery cycles are happening more frequently. Do you have a policy or a certain frequency that you are focused on, and is that changing?

Dale: That's an interesting question, because oftentimes you'll see some induced behavior. For example, we back up archive logs for databases, and often we'll see a large increase in those. As the volume and transactional growth goes up, you'll see the transactional log volume and the archive log volume backups increase, because there's only so much disk space that they can house those logs in.

You can say the same thing about any transactional type of application, whether it's messaging, which is Exchange with the database, with transactional logs, SQL, or Oracle.

So, we see an increase in backup frequency around logs to not only mitigate disk space constraints but to also mitigate our RTO, or RPO I should say, and how much data they can afford to lose if something should occur like logical corruption or something akin to that.
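The trade-off Dale describes, log backups frequent enough to satisfy both disk headroom and the recovery point objective, can be sketched in a few lines; all input figures below are illustrative assumptions.

```python
# Sketch of why archive-log backup frequency rises with transaction volume:
# the backup interval must satisfy both the log disk's headroom and the
# desired RPO. All input figures here are illustrative assumptions.

def max_backup_interval_hours(log_disk_gb: float,
                              log_rate_gb_per_hour: float,
                              rpo_hours: float,
                              safety_fraction: float = 0.8) -> float:
    """Longest interval between log backups that neither fills the log disk
    (beyond safety_fraction of its capacity) nor violates the RPO."""
    disk_limit = (log_disk_gb * safety_fraction) / log_rate_gb_per_hour
    return min(disk_limit, rpo_hours)

# Example: 500 GB log volume, logs growing at 60 GB/hour, 4-hour RPO target.
print(f"Back up logs at least every {max_backup_interval_hours(500, 60, 4):.1f} hours")
# If transaction volume doubles, the interval has to shrink accordingly.
print(f"At double the log rate:     {max_backup_interval_hours(500, 120, 4):.1f} hours")
```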

Gardner: Let's take a step back and focus on the historical lead-up to this current situation. It's clear that HP has had a lot of mergers and acquisitions over the past 10 years or so. That must have involved a lot of different systems and a lot of distribution of redundancy. How did you start working through that to get to a more comprehensive approach that you are now using?

Dale: Well, if I understand your question, are you talking about the effect of us taking on additional IT in consolidating, or are you talking about it from a product standpoint as well?

Gardner: No, mostly on your internal efforts. I know there's been a lot of product activities as well, but let's focus on how you manage your own systems first.

Simplify and reduce

Dale: One of the things that we have to do, at the scope and size that we manage, is simplify and reduce the amount of infrastructure, and really the number of choices and configurations that are going on in our environment. Obviously, you won't find the complete set or suite of HP products in the portfolio that we manage internally. We have to minimize how many different products we have.

One of the first things we had to do was simplify, so that we could scale to the size and scope that we have to manage. You have to simplify configuration and architecture as much as possible, so that you can continue to grow out at scale.

Gardner: Lowell, what were some of the major challenges that you faced with those older backup systems? Tell me a bit more about this consolidation journey?

Dale: That's a good question as well. Virtual tape libraries were one of the new technologies we had to figure out. What was the use-case scenario for virtual tape? It's not easy to switch from old technology to something new and go 100 percent at it. So we had to take a step-wise approach to how we adopted the virtual tape library and what we used it for.

We first started with a minimal number of use cases and, little by little, we started learning what it was really good for. We've evolved the use case even more, and that will carry forward into our next-generation design. That's just one example.

Gardner: And that virtual tape is to replace physical tape. Is that right?

Dale: Yes, really to supplement physical tape. We're still using physical tape for certain scenarios where we need the data mobility to move applications or enable the migration of applications and/or data between disparate geographies. We'll facilitate that in some cases.

Gardner: You mentioned a little earlier on the whole issue of virtualization. You're servicing quite a bit more of that across the board, not just with applications, but storage and networks even.

Tell me a bit more about the issues of virtualization and how that provided a challenge to you, as you moved to these more consolidated and comprehensive storage and backup approaches?

Dale: One of the things with virtualization is that we saw the same effect we did with utility storage. We made it much cheaper than before and easy to bring up, and it had the "If you build it, they will come" effect. So, one of the things that we may end up seeing is an increase in the number of operating systems (OSs) or virtual machines (VMs) out there. That's the opposite of the consolidation effect, where you have, say, 10 one-terabyte databases consolidated into one to reduce the overhead.

Scheduling overhead

With VMs increasing and the use case for virtualization increasing, one of the challenges is trying to work with scheduling overhead tasks. It could be anywhere from a backup to indexing to virus scanning and whatnot, and trying to find out what the limitations and the bottlenecks are across the entire ecosystem to find out when to run certain overhead and not impact production.

That’s one of the things that’s evolving. We are not there yet, but obviously we have to figure out how to get the data to the data protection pool. With virtualization, it just makes it a little bit more interesting.

Gardner: Lowell, given that your target is moving -- as you say, you're a fast-growing company and the data is exploding -- how do you roll out something that is comprehensive and consolidating, when at the same time your target is a moving object in terms of scale and growth?

Dale: I talked previously about how we have to standardize and simplify the architecture and the configuration, so that when it comes time to build that out, we can do it in mass.

For example, quite a few years ago, it used to take us quite a while to bring up a backup infrastructure that would facilitate that service need. Nowadays, we can bring up a fairly large-scope environment, like an entire data center, within a matter of months, if not weeks. The process from there moves toward how we facilitate setting up backup policies and schedules, and even that's evolving.

Right now, we're looking at ideas and ways to automate that, so that when a server plugs in, it basically configures itself. We're not there yet, but we are looking at that. Some of the things that we've improved upon are how we build out quickly and then turn around and set up the configurations, as that business demand is turned around and converted into backup demand, storage demand, and network demand. We've improved quite a bit on that front.
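A generic sketch of that kind of self-configuration is below: when a new server registers, a backup policy is derived from a few attributes rather than configured by hand. The attributes, schedules, and retention values are illustrative and are not Data Protector's actual automation or API.

```python
# Illustrative sketch of policy self-configuration: when a new server
# registers, derive a backup policy from a few attributes instead of
# configuring it by hand. Generic example only; not Data Protector's API.

from dataclasses import dataclass

@dataclass
class Server:
    hostname: str
    role: str          # e.g. "database", "web", "file"
    size_gb: int

def derive_policy(server: Server) -> dict:
    """Map simple server attributes onto a backup schedule and retention."""
    if server.role == "database":
        return {"schedule": "daily full + hourly logs", "retention_weeks": 52}
    if server.size_gb > 1000:
        return {"schedule": "weekly full + daily incremental", "retention_weeks": 12}
    return {"schedule": "daily incremental, weekly full", "retention_weeks": 8}

new_server = Server("ora-prod-07", "database", 2048)   # hypothetical host
print(new_server.hostname, "->", derive_policy(new_server))
```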

Gardner: And what version of Data Protector are you using now, and what are some of the more interesting or impactful features that are part of this latest release?

Dale: Data Protector 6.11 is the current release that we are running and deploying in our next generation. Some of the features with that release that are very helpful to us have to do with checkpoint recoveries.

For example, if a backup of a resource should fail, we have the ability with automation to go out and have it pick up where it left off. This has helped us in multiple ways. If you have a bunch of data that you need to get backed up, you don't want to start over, because it's going to impact the next minute or the next hour of demand.

Not only that, but it's also helped us keep our backup success rates up and our tickets down. Instead of bringing a ticket to light for somebody to go look at, it will attempt a checkpoint recovery a few times. After so many attempts, we'll bring light to the issue so that someone has to look at it.
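That retry-then-escalate workflow can be modeled generically, as in the sketch below; it illustrates the behavior described, not Data Protector's internal checkpoint logic.

```python
# Generic sketch of the checkpoint-restart behavior described above: retry a
# failed backup from its last checkpoint a few times before raising a ticket.
# This models the workflow only; it is not Data Protector's internal logic.

import random

MAX_RETRIES = 3

def run_backup(resume_from: int) -> tuple[bool, int]:
    """Pretend backup that processes objects resume_from..99 and may fail
    partway through, returning (success, next object to process)."""
    for obj in range(resume_from, 100):
        if random.random() < 0.02:          # simulated transient failure
            return False, obj
    return True, 100

checkpoint = 0
for attempt in range(1, MAX_RETRIES + 1):
    ok, checkpoint = run_backup(checkpoint)
    if ok:
        print(f"backup completed on attempt {attempt}")
        break
else:
    print(f"backup still failing after {MAX_RETRIES} attempts -> open a ticket")
```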

Gardner: With this emphasis on automation over the manual, tell us about the impact that’s had on your labor issues, and if you’ve been able to take people off of these manual processes and move them into some, perhaps more productive efforts.

Raising service level

Dale: What it's enabled us to do is really bring our service level up. Not only that, but we're able to focus on other things that we weren't able to focus on before. One of those things is backup success.

Being able to bring that backup success rate up is key. Some of the things that we've done with architecture and the product -- just the different ways of doing process -- have helped with that backup success rate.

The other thing that it's helped us do is that we’ve got a team now, which we didn’t have before, that’s just focused on analytics, looking at events before they become incidents.

I'll use an analogy of a car that's about to break down, and the check-engine light comes on. We're able to go and look at that prior to the car breaking down. So, we're getting a little bit further ahead. We're going further upstream to detect issues before they actually impact our backup success rate or SLAs. Those are just a couple of examples there.

Gardner: How many people does it take to run these petabytes of recovery and backup through your next-generation data center? Just give us a sense of the manpower.

Dale: On the backup and recovery and media-management side, we've got about 25 people total, spread between engineering and operational activities. Their focus is basically the backup and recovery and media-management service.

Gardner: Let’s look at some examples. Can you describe a time when you’ve needed to do very quick or even precise recovery, and how did this overall architectural approach and consolidation efforts help you on that?

Dale: We've had several cases where we had to recover data and go back to the data protection pool. That happens monthly, in fact. We have a certain rate of restores that we do per month. Some of those are to mitigate data loss from logical corruption or accidental deletion.

But, we also find the service being used to do database refreshes. So, we’ll have these large databases that they need to make a copy of from production. They end up getting copied over to development or test.

This current technology we're using, the current configuration with the virtual tape libraries and the archive logs, has really enabled us to get the data backed up quickly and restored quickly. That's been exemplified several times with either database copying or database recoveries, when those types of events do occur.

Gardner: I should think these are some very big deals, when you can deliver the recovered data back to your constituents, to your users. That probably makes their day.

Dale: Oh yes, it does save the bacon at the end of the day.

Gardner: Perhaps you could outline, in your thinking, the top handful of important challenges that Data Protector addresses for you at HP IT. What are the really important paybacks that you're getting?

Object copy

Dale: I've mentioned checkpoint recovery. There are also some things that we've been able to do with object copy that have allowed us to balance capacity between our virtual tape libraries and our physical tape libraries. In our first-generation design, we had enough capacity on the virtual libraries to hold only a subset of the total data.

Data Protector has a very powerful feature called object copy. That allowed us to maintain our retention of data across two different products or technologies. So, object copy was another one that was very powerful.
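A minimal model of that idea is sketched below: the same backup object is fanned out to a short-retention copy on the virtual library and a long-retention copy on physical tape. The retention values and target names are assumptions for illustration.

```python
# Illustrative model of the object-copy idea: the same backup object gets a
# short-lived copy on the virtual tape library for fast restore and a
# longer-lived copy on physical tape. Retention values here are assumptions.

from dataclasses import dataclass

@dataclass
class ObjectCopy:
    backup_id: str
    target: str
    retention_days: int

def make_copies(backup_id: str) -> list[ObjectCopy]:
    """Fan one backup object out to both media tiers with different retention."""
    return [
        ObjectCopy(backup_id, "virtual-tape-library", retention_days=35),
        ObjectCopy(backup_id, "physical-tape (ESL)",  retention_days=365),
    ]

for copy in make_copies("sap-dwh-2010-06-14-full"):   # hypothetical backup ID
    print(f"{copy.backup_id}: keep on {copy.target} for {copy.retention_days} days")
```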

There are also a couple of things around the ability to do the integration backups. In the past, we were using some technology that was very expensive in terms of disk space on our XPs, using split-mirror backups. Now, we're using the online integrations for Oracle or SQL, and we're also getting ready to add SharePoint and Microsoft Exchange.

Now, we're able to do online backups of these databases. Some of them are upwards of 23 terabytes. We're able to do that without any additional disk space and we're able to back that up without taking down the environment or having any downtime. That’s another thing that’s been very helpful with Data Protector.

Gardner: Lowell, before we wrap up, let's take a look into the future. Where do you see the trends pushing this now? I think we could safely say that there's going to still be more data coming down the pike. Are there any trends around cloud computing, mobile business intelligence, warehousing efforts, or real-time analysis that will have an impact on some of these products and processes?

Dale: With some of the evolving technologies and some of the things around cloud computing, at the end of the day, we'll still need to mitigate downtime, data loss, logical corruption, or anything that would jeopardize that business asset.

With cloud computing, if we're using the current technology today with peak base backup, we still have to get the data copied over to a data protection pool. There would still be the same approach of trying to get that data. To keep up with these emerging technologies, maybe we approach data protection a little bit differently and spread the load out, so that it's somewhat transparent.

Some of the things we need to see, and may start seeing in the industry, are load management and how loads from different types of technologies talk to each other. I mentioned virtualization earlier. Some of the tools with content-awareness and indexing have overhead associated with them.

I think you're going to start seeing these portfolio products talking to each other. They can schedule when to run their overhead functions, so that they stay out of the way of production. Those are just a couple of challenges for us.
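A toy scheduler along those lines is sketched below; the off-peak window and task durations are assumed, and the point is only that overhead tasks coordinate around production hours.

```python
# Toy scheduler illustrating the coordination described above: overhead tasks
# (backup, indexing, virus scanning) declare their duration and are laid out
# back-to-back in the overnight window so they stay out of production hours.
# The window and task durations are illustrative assumptions.

WINDOW_START = 19   # assume production quiets down at 19:00
WINDOW_HOURS = 12   # off-peak window runs until 07:00 the next morning

def schedule(tasks: dict[str, int]) -> dict[str, int]:
    """Assign each task a start hour, back-to-back, within the off-peak window."""
    assignments, offset = {}, 0
    for name, duration in tasks.items():
        if offset + duration > WINDOW_HOURS:
            raise RuntimeError(f"not enough off-peak time left for {name}")
        assignments[name] = (WINDOW_START + offset) % 24
        offset += duration
    return assignments

overhead = {"backup": 6, "indexing": 2, "virus-scan": 3}
for task, start in schedule(overhead).items():
    print(f"{task:<10} starts at {start:02d}:00")
```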

We're looking at new configurations and designs that consolidate our environment. We're looking at reducing our environment by 50 to 75 percent just by redesigning our architecture and making available more resources that were tied up before. That's one goal that we're working on right now. We're deploying that design today.

And then, there's configuration and capacity management. This stuff is still evolving, so that we can manage the service level that we have today, keep that service level up, bring the capital down, and keep the people required to manage it down as well.

Gardner: Great. I'm afraid we're out of time. We've been focusing on the challenges and progress of conducting massive and comprehensive backups of enterprise-wide data and applications and systems. We've been joined by Lowell Dale, a technical architect in HP's IT organization. Thanks so much, Lowell.

Dale: Thank you, Dana.

Gardner: And, thanks to our audience for joining us for this special BriefingsDirect podcast coming to you from the HP Software Universe 2010 Conference in Washington DC. Look for other podcasts from this HP event on the hp.com website under HP Software Universe Live podcast, as well as through the BriefingsDirect Network.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this series of HP-sponsored Software Universe live discussions. Thanks again for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast from the HP Software Universe Conference in Washington, DC on backing up a growing volume of enterprise data using HP Data Protector. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.
