Tuesday, April 19, 2011

Tag-Team of HP Workshops Provides Essential Path to IT Maturity Assessment and a Data Center Transformation Journey

Transcript of a sponsored podcast discussion on two HP workshops that help businesses determine actual IT needs and provide a roadmap for improving data center operations and efficiency.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the fast-moving trends driving the need for data center transformation (DCT). We'll also identify some proven ways to do DCT effectively.

The pace of change, degrees of complexity, and explosion around the uses of new devices and increased data sources are placing new requirements and new strain on older data centers. Research shows that a majority of enterprises are either planning for or are in the midst of data center improvements and expansions.

Deciding how best to improve your data center, however, is not an easy equation. Those building new data centers need to contend with architectural shifts to cloud and hybrid infrastructure models, as well as the need to cut total cost and reduce energy consumption for the long term.

An added requirement for new data centers is to satisfy the needs of both short-and long-term goals, by effectively jibing the need for agility now with facility decisions that may well impact the company for 20 years or more.

We are going to examine two ongoing HP workshops as a means for better understanding DCT and for accurately assessing a company’s maturity in order to know how to begin a DCT journey and where it should end up.

We're here with three HP experts on the Data Center Transformation Experience Workshop and the Converged Infrastructure Maturity Model Workshop. Please join me now in welcoming Helen Tang, Solutions Lead for Data Center Transformation and Converged Infrastructure Solutions for HP Enterprise Business. Welcome, Helen.

Helen Tang: Thanks, Dana.

Gardner: We're also here with Mark Edelmann, Senior Program Manager at HP’s Enterprise Storage, Servers, and Network Business Unit. Welcome, Mark.

Mark Edelmann: Thank you, Dana. Good to be here.

Gardner: And also Mark Grindle, Business Consultant for Data Center Infrastructure Services and Technology Services in HP Enterprise Business. Welcome, Mark.

Mark Grindle: Hi, Dana. Thanks a lot.

Gardner: Helen, as I mentioned, this is a very difficult situation for organizations. Lots of conflicting data is coming in, and many changes, many different trends are impacting this. Why don’t we try to set the stage a little bit for why DCT is so important, but also why it's no easy task.

Exciting times

Tang: Absolutely, Dana. As you said, there are a lot of difficulties for technology, but also if you look at the big picture, we live in extremely exciting times. We have rapidly changing and evolving business models, new technology advances like cloud, and a rapidly changing workforce.

What the world is demanding is essentially instant gratification. You can call it sort of an instant-on world, a world where everything is mobile, everybody is connected, interactive, and things just move very immediately and fluidly. All your customers and constituents want their needs satisfied today, in an instant, as opposed to days or weeks. So, it takes a special kind of enterprise to do just that and compete in this world.

You need to be able to serve all of these customers, employees, partners, and citizens -- if you happen to be a government organization -- with whatever they want or need instantly, at any point, any time, through any channel. This is what HP is calling the Instant-On Enterprise, and we think it's the new imperative.

Gardner: When you say instant-on, it means that companies have to respond to their customers at almost lightning speed, but we're talking about infrastructures that can take years to build out. How do you jibe the two -- the need to be instant in how you respond -- with the recognition that this is a very difficult, complex, and time-consuming process?

Tang: Therein lies the challenge. Your organization is demanding ever more from IT -- more innovation, faster time to market, more services -- but at the same time, you're being constrained by older architectures, inflexible siloed infrastructure that you may have inherited over the years. How do you deliver this new level of agility and be able to meet those needs?

You have to take a transformational approach and look at things like converged infrastructure as a foundation for moving your current data center to a future state that’s able to support all of this growth, with virtualized resource pools, integrated automated processes across the data center, with an energy-efficient future-proofed physical data center design, that’s able to flex and meet these needs.

Gardner: Of course, one of the larger trends too is that technology is just more important to more companies in more ways. This is not something you do just to support your employees. It really is core to most companies in how they actually conduct business, and is probably one of the chief determinants of their success.

So doing DCT is really part and parcel with how well you actually run your business -- or am I overstating it?

Tang: That’s absolutely true. We talked earlier about how being an Instant-On Enterprise is an imperative. Why do we call it that? Well, because these vast changes are coming, and you don’t have a choice.

If you look at just a few examples of some of these changes in the world of IT, number one is devices. I think you mentioned this earlier. There’s an explosion of devices being used: smartphones, laptops, TouchPads, PDAs. According to the Gartner Group, by 2014, that’s less than three years, 90 percent of organizations will need to support their corporate applications on personal devices. Is IT ready for that? Not by a long shot today.

Architecture shifts

Another trend that we see is some of these architecture shifts. Cloud obviously is very hot today, but two or three years ago a lot of CIOs pooh-poohed the idea and said, "Oh, that’s not real. That’s just hype." Well, the trend is really upon us.

Another Gartner stat: in the next four years, 43 percent of CIOs will have the majority of their IT infrastructure and applications running in the cloud or in some sort of software-as-a-service (SaaS) technology. Most organizations aren’t equipped to deal with that.

Last but not least, look at your workforce. In less than 10 years, about half of the workforce will be millennials, defined as people born between 1981 and 2000 -- the first generation to come of age in the new millennium. This is a Forrester statistic.

This younger generation grew up with the Internet. They work and communicate very differently from the workforce of today and they will be a main constituency for IT in less than 10 years. That’s going to force all of us to adjust to different types of support expectations, different user experiences, and governance.

Gardner: So, as we recognize that the workloads and requirements placed on IT are shifting, the data center needs to respond to that as well. I guess it's important to know where you are -- how well you've adjusted to what you've been serving up over the last several years -- in order to know what you need to do to provide for these new requirements we're describing.

Let’s start talking about one of these first workshops. It’s about the Maturity Model, a better understanding of where you are. I guess there is an order to these workshops. This one seems to be in the right order. You have to know where you are before you can decide where to go.

So let’s move to Mark Edelmann. Tell me a little bit about the Converged Infrastructure Maturity Model and why it’s important, as I said, to know where you are before you start charting the course in any detail to the future.

Edelmann: Before we dive into the maturity model though, I recently bumped into a definition on Wikipedia about maturity and I thought it might be useful to consider your IT environment as you listen to this definition that I picked up.
"Maturity is a psychological term used to indicate how a person responds to the circumstances or environment in an appropriate and adaptive manner. The response is generally learned rather than instinctive and is not determined by one’s age. Maturity also encompasses being aware of the correct time and place to behave and knowing when to act appropriately according to the situation."
Now, that probably sounds a little bit like what you might want your infrastructure to behave like and to actually achieve a level of maturity, and that’s exactly what the Maturity Model Workshop is all about.

Overall assessment

The Maturity Model consists of an overall assessment, and it’s a very objective assessment. It’s based on roughly 60 questions that we go through to specifically address the various dimensions, or as we call them domains, of the maturity of an IT infrastructure.

We apply these questions in a consultative, interactive way with our customers, because some of the discussions can get very, very detailed. For many of the customers that have participated in these workshops, being asked these questions has been a new experience. We're going to ask our customers things that they probably never thought about before, or have thought of only in a very brief sort of way, but it's important to get to the bottom of some of these issues.

As a result of examining the infrastructure’s maturity along these lines, we're able to establish a baseline of the maturity of the infrastructure today. And, in the course of interviewing and discussing this with our customers, we also identify where they would like to be in terms of their maturity in the future. From that, we can put together a plan of how to get from here to there.

Gardner: When you say a workshop, are these set up so that people physically go there and you have them in different places, or is there a virtual version where people can participate regardless of where they are? How does that work?

Edelmann: We've found it’s much more valuable to sit down face to face with the customer and go through this, and it actually requires an investment of time. There’s a lot of background information that has to be gathered and so forth, and it seems best if we're face to face as we go through this and have the discussion that’s necessary to really tease out all the details.

Gardner: I'd like to understand a little bit more, Mark, why you break out maturity versus installed base. Help me understand what it takes in order to succeed and what you typically find with these companies? Do they find that they are further ahead than they thought or further behind when we look at this through that distinct lens of maturity?

Edelmann: Most of our customers find out that they are a lot further behind than they thought they were. It's not necessarily due to any fault on their part. It may be a result of aging infrastructure, because of the economic situation we've been in, or of disparate, siloed infrastructure built out as application-focused stacks, which was kind of the way we approached IT historically.

Also, the impact of mergers and acquisitions has kind of forced some customers to put together different technologies, different platforms, using different vendors and so forth. Rationalizing all that can leave them in kind of a disparate sort of a state. So, they usually find that they are a lot further behind than they thought.

Gardner: And, because you've been doing this for quite some time and you've been doing it around the world, you have a pretty good set of data. You have some good historical trend lines to examine, so you have certain domains and certain stages of maturity that you have been able to identify.

Maybe you could help us understand what those are and then relate how folks can then place themselves on those lines, not only to know where they are, but have a sense of how far it is they need to go to get to that higher level of maturity they're seeking.

Edelmann: Sure. We can talk through that level of detail and you can familiarize yourself, at least verbally, with how this model is set up and so forth.

4x5 matrix

Picture, if you will, a 4x5 matrix. We examine the customer’s infrastructure in four, what we call, domains. These domains consist of technology and architecture, management tools and processes, the culture and IT staff, and the demand, supply, and IT governance aspects of the infrastructure and the data center operations. Those are the four domains in which we ask these questions and make our assessment.

From that, as we go through this, through some very detailed analysis that we have done over the years, we're able to position the customer’s infrastructure in one of five stages:
  • The first stage, which is where most people start, is Stage 1, which we call Compartmentalized and Legacy; it's essentially the least-mature stage.
  • From there we move to Stage 2, which we call Standardized.
  • Stage 3 then is Optimized.
  • Stage 4 gets us into Automated and a Service-Oriented Architecture (SOA), and,
  • Stage 5 is more or less the IT utopia necessary to become the Instant-On Enterprise that Helen just talked about. We call that Adaptively Sourced Infrastructure.
We evaluate each domain under several conditions against those five stages and we essentially wind up with a baseline of where the customer stands.
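
To make the assessment mechanics concrete, here is a minimal, purely illustrative sketch -- not HP's actual tool -- of how answers to the workshop questions could be rolled up into a per-domain baseline on that 4x5 matrix. The domain and stage names come from the discussion above; the scoring logic and sample answers are assumptions for illustration:

# Illustrative sketch of scoring a maturity assessment across the four domains
# and five stages described above. The rollup logic and sample answers are
# assumptions, not HP's actual methodology.

DOMAINS = [
    "Technology and architecture",
    "Management tools and processes",
    "Culture and IT staff",
    "Demand, supply, and IT governance",
]

STAGES = {
    1: "Compartmentalized and Legacy",
    2: "Standardized",
    3: "Optimized",
    4: "Automated / Service-Oriented",
    5: "Adaptively Sourced Infrastructure",
}

def baseline(answers):
    """Average the 1-5 stage ratings recorded for each domain."""
    return {domain: sum(scores) / len(scores) for domain, scores in answers.items()}

if __name__ == "__main__":
    # Hypothetical ratings for a handful of the roughly 60 workshop questions.
    sample = {
        DOMAINS[0]: [2, 3, 2],
        DOMAINS[1]: [1, 2, 2],
        DOMAINS[2]: [2, 2, 3],
        DOMAINS[3]: [1, 1, 2],
    }
    for domain, score in baseline(sample).items():
        print(f"{domain}: {score:.1f} (closest stage: {STAGES[round(score)]})")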

We've been doing this for a while and we've done a lot of examinations across the world and across various industries. We have a database of roughly 1,400 customers that we then compare the customer’s maturity to. So, the customer can determine where they stand with regards to the overall norms of IT infrastructures.

We can also illustrate to the customer what the best-in-class behavior is, because right now, there aren’t a whole lot of infrastructures that are up at Stage 5. It's a difficult and a long journey to get to that level, but there are ways to get there, and that’s what we're here for.

Gardner: I want to make sure I've got this straight in terms of the order of these workshops and how they play off of one another. Maybe, Helen, you could come back in and help us understand which one you see people doing first and which order makes the most sense.

Tang: Both workshops are great. It's not really an either/or. I would start with the Data Center Transformation Experience Workshop, because that sets the scene and the background for how to start approaching this problem. What do I think about? What are the key areas of consideration? And, it maps out a strategy on a grander scale.

The CI Maturity Model Assessment gets specific when you think about implementation. Let's dive in and really drill deep into your current state versus future state across the domains and stages that Mark just described.

Gardner: Let's go now to the Data Center Transformation Experience Workshop with Mark Grindle. First, do you share Helen's perspective on the order, and what would people gain by entering into the Data Center Transformation Experience Workshop first? Then, you can fill us in a little bit on what it's about.

Interesting workshop

Grindle: Thanks, Dana. I agree with what Helen said. It really is more structured if you do the Data Center Transformation Experience Workshop first and then follow that up with the Maturity Model. It's a very interesting workshop, because it's very different from any other workshop, at least any that I have ever participated in. It's not theoretical and it's also extremely interactive.

It was originally designed and set up based on HP IT’s internal transformation. So, it's based on exactly what we went through to accomplish all the great things that we did, and we've continued to refine and improve it based on our customer experiences too. So, it's a great representation of our internal experiences as well as what customers and other businesses and other industries are going through.

During the process, we walk the customer through everything that we've learned, a lot of best practices, a lot of our experiences, and it's extremely interactive.

Then, as we go through each one of our dimensions, or each one of the panels, we probe with the customer to discuss what resonates well with them, where they think they are in certain areas, and it's a very interactive dialog of what we've learned and know and what they've learned and know and what they want to achieve.

The outcome is typically a very robust document and conversation around how the customer should proceed with their own transformation, how they should sequence it, what their priorities are, and true deliverables -- here are the tasks you need to take on and accomplish -- either with our help or on their own.

It’s a great way of developing a roadmap, a strategy, and an initial plan on how to go forward with their own transformational efforts.

Gardner: And the same question to you, Mark Grindle, about location. Is this something you prefer to do face to face as Mark Edelmann mentioned, or is this something that people can gather virtually or through road shows? How does it actually come to the market?

Grindle: It absolutely has to be face-to-face. We use a very large conference room and we set up these panels around the room. Each one of these panels is nearly floor-to-ceiling in height -- about 4 feet wide by 5 or 5.5 feet high -- and we walk through a series of 10 panels, each of which approaches one of the dimensions of transformation, as we look at it.

So having all the people in the room and being able to be interactive face to face, as well as reference panels that you might have gone through or that you are about to go through as different points in the conversation come up, is critical to having a successful workshop.

Designed around strategy

It's definitely designed around strategy. Most people, when they look at transformation, think about their data centers, their servers, and somewhat their storage, but really the goal of our workshop is to help them understand, in a much more holistic view, that it's not just about that typical infrastructure. It has to do with program management, governance, the dramatic organizational change that goes on if you go through transformation.

Applications, the data, the business outcomes -- all of this has to be tied in to ensure that, at the end of the day, you've implemented a very cost-effective solution that meets the needs of the business. That really is a game-changing type of move by your organization.

Gardner: And, as part of some of the trends we mentioned, building these for the long-term means that you're building for operational efficiency. The total cost, of course, over time is going to be that ongoing operational penalty or, if you do it right, perhaps payback. How do you help people appreciate the economics of the data center, and how important is that to people in these workshops?

Grindle: The financials are absolutely critical. There are very few businesses today that aren’t extremely focused on their bottom line and how they can reduce the operational cost.

Certainly, from the HP IT experience, we can show that, although it's not a trivial investment to make this all happen, the returns are not only normally a lot larger than your investment, but they are year-over-year savings. That's money that typically can be redeployed to areas that really impact the business, whether it's manufacturing, marketing, or sales. This is money that can be reinvested in the business and used to help grow the areas that will really have a future impact on the growth of the business, while reducing the cost of your data centers and your operations.

Interestingly enough, what we find is that, even though you're driving down the cost of your IT organization, you're not giving up quality and you're not giving up technology. You actually have to implement new, robust technologies to help bring your costs down. Things like automation, operational efficiency, and ITIL processes all help you drive the savings while allowing you to upgrade your systems and environments to current and new technologies.

And, while we're on the topic of cost savings, a lot of times when we're talking to customers about transformation, it's normally being driven by some critical IT imperative, like they're out of space in their data center and they're about to look at building out a new data center or perhaps obtaining a colocation site. A lot of times we find that when we sit down and talk with them about how they can modernize their applications, tier their storage, go with higher-density equipment, and virtualize their servers, they can actually free up space and avoid that major investment in a new data center.

Gardner: That gets back to the definition of maturity, where it might not necessarily mean bringing in trucks and pouring cement. It could very well mean transforming in a way that ekes out more productivity from your existing facilities before you rush into something new. Is that typically the case? How often does that really happen where you can wring out enough efficiency to postpone the actual new facility?

Grindle: It happens time and time again. I'm working with a company right now that was looking at going to eight data centers. By implementing a lot of these new technologies -- higher virtualization rates, improvements to their applications, and better management of the data on their storage -- we're trying to get them down to two data centers. So right there is a substantial change. And that's just an example of the things I've seen time and time again as we've done these workshops.

A big part of this is working through what the customer really needs and what their business drivers really are. In some cases, we're finding out that brick and mortar isn't really the right solution for their data centers. They should look at colocation or even at more creative solutions like the HP Data Center POD, where you can stand up a container filled with high-density, very modern equipment and meet all of their needs without doing anything to the existing data center.

It's all about walking through the problems and the issues that are at hand and figuring out what the right answers are to meet their needs, while trying to control the expense.
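
To illustrate the kind of back-of-the-envelope math behind that space-avoidance argument, here is a small sketch. The workload counts, consolidation ratios, and space factor are illustrative assumptions, not figures from the engagement described above:

# Back-of-the-envelope consolidation math showing why a higher virtualization
# ratio can defer or shrink a data-center build-out. All numbers are assumed.

workloads = 2000                 # workloads / virtual machines to host (assumed)
vms_per_host_now = 8             # current consolidation ratio (assumed)
vms_per_host_target = 25         # target ratio on denser, newer hosts (assumed)
racks_per_100_hosts = 5          # rough floor-space factor (assumed)

hosts_now = workloads / vms_per_host_now          # 250 hosts
hosts_target = workloads / vms_per_host_target    # 80 hosts

print(f"Hosts needed today:           {hosts_now:.0f} "
      f"(~{hosts_now / 100 * racks_per_100_hosts:.0f} racks)")
print(f"Hosts after re-virtualizing:  {hosts_target:.0f} "
      f"(~{hosts_target / 100 * racks_per_100_hosts:.0f} racks)")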

What's next?

Gardner: Okay, I am starting to get it now. I see why these two workshops play off of one another, because you are laying out all the things that have happened at HP, what to expect, and what some of the alternatives are. That way you've got in your mind a set of alternative directions. Then, by doing the Maturity Model, you get a sense of where you are and where you can go, and putting the two together can start you on that path.

Let's look at that future path a little bit. Folks have taken these workshops and gotten a better sense of the full, holistic equation. What usually happens next? What's the process from research, understanding, and knowledge to actually starting to hammer out a definition of what you, in your particular situation as an organization, should do?

Let me fire that first off at you, Helen.

Tang: As often happens, it depends. It’s based on your organization’s business needs. Where are you trying to go in the next year, two years, or five years? It’s also based on the level of constraint that you face right now in the data center.

We see one of two paths. In the more transformational approach, whereby you have the highest level of buy-in, all the way up to the CIO and sometimes CFO and CEO, you lay out an actual 12-18 month plan. HP can help with that, and you start executing towards that. You say, "Okay, what would be the first step?" A lot of times, it makes sense to standardize, consolidate. Then, what is the next step? Sometimes that’s modernizing applications, and so on. That’s one approach we have seen.

A lot of organizations don’t have the luxury of going top-down and doing the big bang transformation. Then, we take a more project-based approach. It still helps them a lot going through these two workshops. They get to see the big picture and all the things that are possible, but they start picking low-hanging fruit that would yield the highest ROI and solve their current pain points.

Often, in these past few years, it has been virtualization. What is my current virtualization level? How do I take it up to maximum efficiency? And then, look to adjacent projects. So, the next step might be consolidation, or automation, and so on.

Gardner: Mark Edelmann, same to you. Are there some typical scenarios that you've seen that folks when they have digested the implications from these workshops then have a vision or a direction, and what typically would that be?

Edelmann: Helen did a great job of outlining it, because different customers start at different places and they are headed for different places. Often, the journey is a little bit different from one customer to the other.

The Maturity Model Workshop you might think of as being at a little lower level than the Data Center Transformation Workshop. As a result of the Maturity Model Workshops, we produce a report for the customer to understand -- A is where I'm at, and B is where I'm headed. Those gaps that are identified during the course of the assessment help lead a customer to project definitions.

In some cases, there may be some obvious things that can be done in the short term and capture some of that low-hanging fruit -- perhaps just implement a blade system or something like that -- that will give them immediate results on the path to higher maturity in their transformation journey.

Multiple starting points

There are multiple starting points and consequently multiple exit points from the Maturity Model Workshop as well.

Gardner: Mark Grindle, same kind of question. How do people take what they've gathered here to use it? Any stories or anecdotes about what you have seen people do with this that has helped them?

Grindle: Mark and Helen were both right in their comments. The result of the workshop is really a sequenced series of events that the customer should follow up on next. Those can be very specific items, like gathering your physical server inventories so they can be analyzed, or other items such as running a Maturity Model Workshop, so that you can understand where you are in each of the areas and what the gaps are, based on where you really want to be.

It’s always interesting when we do these workshops, because we pull together a group of senior executives covering all the domains that I've talked about -- program management, governance -- their infrastructure people, their technology people, their applications people, and their operational people, and it’s always funny, the different results we see.

I had one customer tell me that the deliverable we gave them out of the workshop was almost anticlimactic versus what they learned in the workshop. What they had learned during this one was that many people had different views of where the organization was and where it wanted to go.

Each was correct from their particular discipline, but from an overarching view of what are we trying to do for the business, they weren’t all together on all of that. It’s funny how we see those lights go on as people are talking and you get these interesting dialogs of people saying, "Well, this is how that is." And someone else going, "No, it’s not. It’s really like this."

It’s amazing the collaboration that goes on just among the customer representatives above and beyond the customer with HP. It’s a great learning collaborative event that brings together a lot of the thoughts on where they want to head. It ends up motivating people to start taking those next actions and figuring out how they can move their data centers and their IT environment in a much more logical, and in most cases, aggressive fashion than they were originally thinking.

Gardner: It sounds like a very powerful exercise for a lot of different reasons. For those folks interested, how could they learn more about these workshops? Are there some resources out there whereby they go to find them? Let me start with you, Helen.

Tang: The place to go would be hp.com/go/dct.

Gardner: That's pretty straightforward. Any other thoughts, Mark and Mark, about where you could go to pursue information if you're starting to get interested in these workshops?

Edelmann: Well, it’s probably not a big surprise, but to learn more about the CI Maturity Model, you can go to hp.com/go/cimm.

Gardner: And Mark Grindle?

Grindle: I agree with both of those. Obviously your HP account rep can help you. We have an HP IT Forum coming up soon. For people who are attending, we do mini workshops during this event. We set up a day that individual customers can come in for an hour and we walk them through each one of the panels very quickly and give them a flavor for what the full workshop would look like. There are a lot of options here for people to get a better understanding of the workshop and how it can help them.

Gardner: So, you can get the appetizer before the entrée?

Grindle: Absolutely.

Gardner: Well, thank you. You have been listening to a sponsored podcast discussion on the need for DCT and some proven ways to do it effectively.

I would like to thank our guests. We have been joined by Helen Tang, Solutions Lead for Data Center Transformation and Converged Infrastructure Solutions for HP Enterprise Business. Thanks again, Helen.

Tang: Thanks, Dana. Always a pleasure.

Gardner: And Mark Edelmann, Senior Program Manager, HP’s Enterprise Storage, Servers, and Networking Business Unit. Thanks to you, Mark.

Edelmann: Thank you, Dana.

Gardner: And lastly, Mark Grindle, Business Consultant for Data Center Infrastructure Services and Technology Services in HP Enterprise Business. Thanks to you, Mark.

Grindle: Thank you, Dana. It was great being here.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: HP.

Transcript of a sponsored podcast discussion on two HP workshops that help businesses determine actual IT needs and provide a roadmap for improving data center operations and efficiency.

Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.

Tuesday, August 31, 2010

Explore the Myths and Means of Scaling Out Virtualization Via Automation Across Data Centers

Transcript of a podcast discussion on how automation and best practices allows for far greater degrees of virtualization and efficiency across enterprise data centers.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the improved and increased use of virtualization in data centers. We'll delve into how automation and policy-driven processes and best practices are offering a slew of opportunities for optimizing virtualization. Server, storage, and network virtualization use are all rapidly moving from points of progress into more holistic levels of adoption.

The goals are data center transformation, performance and workload agility, and cost and energy efficiency. But the trap of unchecked virtualization complexity can have a stifling effect on the advantageous spread of virtualization. Indeed, many enterprises may think they have already exhausted their virtualization paybacks, when in fact, they have only scratched the surface of the potential long-term benefits.

In some cases, levels of virtualization are stalling at 30 percent adoption, yet other data centers are leveraging automation and best practices and moving to 70 percent and even 80 percent adoption rates. By taking such a strategic outlook on virtualization, we'll see how automation sets up companies to better exploit cloud computing and IT transformation benefits at the pace of their choosing, not based on artificial limits imposed by dated or manual management practices.

Here now to discuss how automation can help you achieve strategic levels of virtualization adoption are our guests. Please join me in welcoming Erik Frieberg, Vice President of Solutions Marketing at HP Software. Welcome to BriefingsDirect, Erik.

Erik Frieberg: Great. Good to be here.

Gardner: And, we're here with Erik Vogel, Practice Principal and Americas Lead for Cloud Resources at HP. Welcome, Erik Vogel.

Erik Vogel: Well, thank you.

Gardner: Let's start the discussion with you, Erik Frieberg. Tell me why there's a misconception out there about acceptable adoption levels of virtualization.

Frieberg: When I talk to people about automation, they consistently talk about what I call "element automation." Provisioning a server, a database, or a network device is a good first step, and we see gaining market adoption of automating these physical things. What we're also seeing is the idea of moving beyond the individual element automation to full process automation.

IT is in the process of serving the business, and the business is asking for whole application service provisioning. So it's not just these individual elements, but tying them all together along with middleware, databases, objects and doing this whole stack provisioning.

When you look at the adoption, you have to look at where people are going, as far as the individual elements, versus the ultimate goal of automating the provisioning and rolling out a complete business service or application.

Gardner: Is there something in general that folks don't appreciate around this level of acceptable use of virtualization, or is there a need for education?

Perceptible timing

Frieberg: It comes down to what I call the difference in perceptible timing. Often, when businesses are asking for new applications or services, the response is three, four, or five weeks to roll something out. This is because you're automating individual pieces but it's still left to IT to glue all the individual element automation together to deliver that business service.

As companies expand their use of automation to automate the full services, they're able to reduce that time from months down to days or weeks. This is what some people are starting to call cloud provisioning or self-service business application provisioning. This is really the ultimate goal -- provisioning these full applications and services versus what is often IT’s goal -- automating the building blocks of a full business service.
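
As a rough illustration of the difference between element automation and whole-service provisioning, here is a minimal sketch. Every function name in it is a hypothetical stand-in; it is not HP Cloud Service Automation or any other vendor API:

# Sketch contrasting element automation (individual steps IT still glues
# together by hand) with provisioning a full business service as one ordered
# workflow. All functions are hypothetical placeholders.

def provision_server(svc):    print(f"server    : compute provisioned for {svc}")
def configure_network(svc):   print(f"network   : VLANs and firewall rules set for {svc}")
def allocate_storage(svc):    print(f"storage   : volumes attached for {svc}")
def deploy_middleware(svc):   print(f"middleware: app server installed for {svc}")
def deploy_database(svc):     print(f"database  : schema created for {svc}")
def deploy_application(svc):  print(f"app       : code deployed and started for {svc}")

# Element automation stops here; someone still has to sequence the rest.
ELEMENT_STEPS = [provision_server, configure_network, allocate_storage]

# Full-service automation runs the entire stack as a single workflow.
SERVICE_WORKFLOW = ELEMENT_STEPS + [deploy_middleware, deploy_database, deploy_application]

def provision_service(service_name, workflow=SERVICE_WORKFLOW):
    """Run every step of the workflow, in order, for one business service."""
    for step in workflow:
        step(service_name)

if __name__ == "__main__":
    provision_service("order-entry-prod")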

Gardner: I see. So we're really moving from a tactical approach to a strategic approach?

Frieberg: Exactly.

Gardner: What about HP? Is there something about the way that you have either used virtualization yourselves or have worked with a variety of customers that leads you to believe that there is a lot more uptake? Are we really only in the first inning or two of virtualization from HP's perspective?

Frieberg: We're maybe in the second inning, but we're certainly early in the life cycle. We're seeing companies moving beyond the traditional automation, and their first goal, which is often around freeing up labor for common tasks.

Companies will look at things like how do they baseline what they have, how they patch and provision new services today, moving on to what is called deployment automation, and the ability to move applications from the development environment into the production environment.

You're starting to see the movement beyond those initial goals of eliminating people to ensuring compliance. They're asking how do I establish and enforce compliance policies across my organization, and beyond that, really capturing or using best practices within the organization.

So we're maturing and moving to further "innings" by automating the process more and also getting further benefits around compliance and best practices for use through our automation efforts.

Gardner: When you can move in that direction, at that level, you start to really move into what we call data center transformation, rather than spot server improvements or rack-by-rack improvements.

Frieberg: Exactly. This is where you're starting to see what some people call the "lights out" data center. It has the same amount or even less physical infrastructure using less power, but you see the absence of people. These large data centers just have very few people working in them, but at the same time, are delivering applications and services to people at a highly increased rate rather than as traditionally provided by IT.

Gardner: Erik Vogel, are there other misconceptions that you’ve perceived in the marketplace in terms of where virtualization adoption can go?

Biggest misconception

Vogel: Probably the biggest misconception that I see with clients is the assumption that they're fully virtualized, when they're probably only 30 or 40 percent virtualized. They've gone out and done the virtualization of IT, for example, and they haven't even started to look at Tier 1 applications.

The misconception is that we can't virtualize Tier 1 apps. In reality, we see clients doing it every day. The broadest misconception is what virtualization can do and how far it can get you. Thirty percent is the low-end threshold today. We're seeing clients who are 75-80 percent virtualized in Tier 1 applications.

Gardner: Erik Frieberg, back to you. Perhaps there is a laundry list of misconceptions that we can go through and then discount them. If we're going to go from that 30 percent into that strategic level, what are some specific things that are holding people back?

Frieberg: When I talk to customers about their use of virtualization, you're right. They virtualize the easy stuff.

The three misconceptions I see a lot are, one, automation and virtualization are just about reducing head count. The second is that automation doesn't have as much impact on compliance. The third is if automation is really at the element level, they just don't understand how they would do this for these Tier 1 workloads.

Gardner: Let's now get into what we mean by automation. How do you go about automating in such a way that you don't fall into these traps and you can enjoy the things that you've been describing in terms of better compliance, better process, and repeatability?

Frieberg: What we're seeing in companies is that they're realizing that their business applications and services are becoming too complex for humans to manage quickly and reliably.

The demands of provisioning, managing, and moving in this new agile development environment and this environment of hybrid IT, where you're consuming more business services, is really moving beyond what a lot of people can manage. The idea is that they are looking at automation to make their life easier, to operate IT in a compliant way, and also deliver on the overall business goals of a more agile IT.

Companies are almost going through three phases of maturity when they do this. The first aspect is that a lot of automation revolves around "run book automation" (RBA), which is this physical book that has all these scripts and processes that IT is supposed to look at.

But, what you find is that their processes are not very standardized. They might have five different ways of configuring your device, resetting the server, and checking why an application isn’t working.

So, as we look at maturity, you’ve got to standardize on a set of ways. You have to do things consistently. When you standardize methods, you then find out you're able to do the second level of maturity, which is consolidate. We don’t need to provision a PC 16 different ways. We actually can do it one way with three variations. When you do that, you now move up to the ability to automate that process. Then, you use that individual process automation or element automation in the larger process, and tie it all together.

That’s how we see companies or organizations moving up this maturity curve within automation.
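
Here is a minimal sketch of that standardize-and-consolidate step -- one provisioning routine with a few named variations instead of 16 ad-hoc procedures. The variation names and settings are illustrative assumptions:

# One standardized provisioning path, where a "variation" only changes
# parameters rather than introducing a separate procedure. Settings assumed.

STANDARD_BUILDS = {
    "office":    {"image": "corp-base",  "apps": ["office-suite"],        "encrypt_disk": True},
    "developer": {"image": "corp-dev",   "apps": ["office-suite", "ide"], "encrypt_disk": True},
    "kiosk":     {"image": "corp-kiosk", "apps": [],                      "encrypt_disk": False},
}

def provision_pc(hostname, variation="office"):
    """Single provisioning routine; unknown variations fail fast with a KeyError."""
    build = STANDARD_BUILDS[variation]
    # Stand-in for the real automation call (imaging, app install, policy, etc.).
    return {"hostname": hostname, **build}

if __name__ == "__main__":
    print(provision_pc("fin-laptop-042", "developer"))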

Gardner: I was intrigued by that RBA example you gave. There are occasions where folks think they're automated, but are not. Is there a way to have a litmus test as to whether automation is where you need to go, not actually where you’ve been?

The easy aspects

Frieberg: Automation is similar to the statistics you gave in virtualization, where people are exploring automation and they're automating the easy aspects, but they're hitting roadblocks in understanding how they can drive automation further in their organization.

Something I have used as a litmus test is that run book. How thick is it now and how thick was it a month ago or a year ago, when you started automation? How have you consolidated it through your automation processes?

We see companies not yet standardizing, consolidating, or making the tough choices that will enable them to push automation further. A lot of it is just a firmly held belief about what can be automated in IT versus what can't. It's very analogous to how they approached virtualization -- I can do these types of workloads, but not these others. A lot of these beliefs are rooted in old facts, not in what the technology or new software solutions can do today.

Gardner: So, perhaps an indication of where they are actually doing automation is that the run book is getting smaller?

Frieberg: Exactly. The other thing I look at, as companies start to roll out applications, is not just the automation, but the consistency. You read different facts within the industry. Fifty percent of the time, when you make a change into your environment, you cause an unforeseen downstream effect. You change something, but something else breaks further down.

When you automate processes, we tend to see that drop dramatically. Some estimates have put the unforeseen impact as low as five percent. So, you can also measure your unforeseen downstream effects and ask, "Should I automate these processes that seem to be tedious, time-consuming, and non-compliant for people to do, and can I automate them to eliminate these downstream effects, which I am trying to not have occur in my organization?"
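
One concrete litmus test implied here is to measure, from change records, what fraction of changes cause an unforeseen downstream incident. A minimal sketch follows; the record format and sample data are assumptions chosen to reproduce the 50 percent and 5 percent figures quoted above:

# Sketch of measuring the "unforeseen downstream effect" rate before and after
# automation. Field names and sample records are illustrative assumptions.

def downstream_impact_rate(changes):
    """Fraction of changes flagged as having caused an unplanned incident."""
    if not changes:
        return 0.0
    impacted = sum(1 for change in changes if change["caused_incident"])
    return impacted / len(changes)

manual_changes    = [{"id": i, "caused_incident": i % 2 == 0} for i in range(20)]   # ~50%
automated_changes = [{"id": i, "caused_incident": i % 20 == 0} for i in range(20)]  # ~5%

print(f"manual changes   : {downstream_impact_rate(manual_changes):.0%} caused downstream incidents")
print(f"automated changes: {downstream_impact_rate(automated_changes):.0%} caused downstream incidents")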

Gardner: Erik Vogel, when these folks recognize that they need to be more aggressive with automation in order to do virtualization better, enjoy their cost performance improvements, and ultimately get towards their data center transformation, what is it that they need to be thinking of? Are performance and efficiency the goals? How do we move toward this higher level of virtualization?

Vogel: One of the challenges that our clients face is how to build the business case for moving from 30 percent to 60 or 70 percent virtualized. This is an ongoing debate within a number of clients today, because they look at that initial upfront cost and see that the investment is probably higher than what they were anticipating. I think in a lot of cases that is holding our clients back from really achieving these higher levels of virtualization.

In order to really make that jump, the business case has to be made beyond just reduction in headcount or less work effort. We see clients having to look at things like improving availability, being able to do migrations, streamlined backup capabilities, and improved fault-tolerance. When you start looking across the broader picture of the benefits, it becomes easier to make a business case to start moving to a higher percentage of virtualization.

One of the impediments, unfortunately, is that there is kind of an economic hold. The way we're creating these business cases today doesn't show the true value and benefit of enhanced virtualization automation. We need to rethink the way we put these business cases together to really incorporate a lot of the bigger benefits that we're seeing with clients who have moved to a higher percentage of virtualization.

Gardner: In order to attain that business benefit to make the investment a clear winner and demonstrate the return what is it that needs to happen? Is this a best-of-breed equation, where we need to pull together the right parts? Is it the people equation about the operations, or all of the above? And how does HP approach that stew of different elements within this?

All of the above

Vogel: It's really all of the above. One of the things we saw early on with virtualization is that just moving to a virtual environment does not necessarily reduce a lot of the maintenance and management that we have, because we haven’t really done anything to reduce the number of OS instances that have to be managed.

If we're just looking at virtualizing and just moving from physical to virtual devices, we may be reducing our asset footprint and gaining the benefits of just managing fewer physical assets. From a logical standpoint, we still have the same number of servers and the same number of OS instances. So, we still have the same amount of complexity in managing the environment.

The benefits are relatively constrained, if we look at it from just a physical footprint reduction. In some cases, it might be significant if a client is running out of data-center space, power, or cooling capacity within the data center. Then, virtualization makes a lot of sense because of the reduction in asset footprint.

But, when we start looking at coupling virtualization with improved process and improved governance, thereby reducing the number of OS instances, application rationalization, and those kinds of broader process type issues, then we start to see the big benefits come into play.

Now, we're not talking just about reducing the asset footprint. We're also talking about reducing the number of OS instances. Hence, the management complexity of that environment will decrease. In reality, the big benefits are on the logical side and not so much on the physical side.

Gardner: It sounds like we're moving beyond that tactical benefit of virtualization, but thinking more about an operational fabric through which to support a variety of workloads -- and that's quite a leap.

Vogel: Absolutely. In fact, when we start talking about moving to a cloud-type environment, specifically within public cloud and private cloud, we're looking at having to do that process work and governance work. It becomes more than just talking about the hardware or the virtualization, but rather a broader question of how IT operates and procures services. We have to start changing the way we are thinking when we're going to stand up a number of virtual images.

When we start moving to a cloud environment, we talk about how we share a resource pool. Virtualization is obviously key and an underlying technology to enable that sharing of a virtual resource pool.

But it becomes very important to start talking about how we govern that, how we control who has access, how we can provision, what gets provisioned and when. And then, how do we de-provision, when we're done with a particular environment; and how do we enable that environment to scale up and scale down, based on the demands of the workloads that are being run on that environment.

So, it's a much bigger problem and a more complicated problem as we start going to higher levels of virtualization and automation and create environments that start to look like a private cloud infrastructure.

Gardner: And yet, it's at that higher level of adoption that the really big paybacks kick in. Are there some misconceptions or some education issues that are perhaps holding companies back from moving toward that larger adoption, which will get them, in fact, those larger economic productivity and transformative benefits?

Lifecycle view

Vogel: The biggest challenge where education needs to occur is that we need to be looking at IT through a lifecycle view. A lot of times we get tied up just looking at an initial investment or what the upfront cost would be to deploy one of these environments. We're not truly looking at the cost to provide that service over a three-, four- or five-year period, because if we start to look carefully at what that lifecycle cost is, we can see that these shared environments, these virtualized environments with automation, are a fraction of the cost of a dedicated environment.

Now, there will need to be an upfront investment. That, I think, is causing a lot of concern for our clients because they look at it only in the short-term. If we look at it over a life-cycle approach and we educate clients to start seeing the cost to provide that service, that's when we start to see that it's easy to make a business case for moving to one of these environments.
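
A simple way to see the lifecycle argument is to compare total cost over the evaluation period rather than just the year-one outlay. The sketch below uses purely illustrative figures, not HP or client data:

# Lifecycle (multi-year) cost comparison: a shared, virtualized environment can
# look more expensive in year one yet cheaper over the life of the service.
# All figures are illustrative assumptions.

def lifecycle_cost(upfront, annual_run_cost, years):
    """Total cost of ownership over the evaluation period."""
    return upfront + annual_run_cost * years

years = 5
dedicated = lifecycle_cost(upfront=200_000, annual_run_cost=300_000, years=years)
shared    = lifecycle_cost(upfront=450_000, annual_run_cost=120_000, years=years)

print(f"Dedicated environment, {years}-year cost:          ${dedicated:,.0f}")
print(f"Shared virtualized environment, {years}-year cost: ${shared:,.0f}")
# Year one favors the dedicated build; the five-year view favors the shared one.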

It's a change in the way a lot of our clients think about developing business cases. It's a new model and a new way of looking at it, but it's something that's occurring across the industry today, and will continue to occur.

Gardner: I'm curious about the relationship that you're finding as the adoption levels increase from that 30 percent to 60 or 70 percent. Are the benefits coming in on a linear basis as a fairly constant improvement? Or is there some sort of a hockey-stick effect, whereby there is an accelerating level of business benefits as the adoption increases?

Vogel: It really depends on the client situation, the type of applications, and their specific environment. Generally, we're still seeing increasing returns in almost a linear fashion, as we move into 60-70 percent virtualized.

As we move beyond that, it is client-specific and client-dependent. There are a lot of variables and a lot of factors in play, such as the type of applications that are running on it and the type of workloads and demands that are being placed on that environment. Depending on the client, they can still see benefits when they're 80-85 percent virtualized. Other clients will hit that economic threshold in the 60-65 percent virtualized range.

We do know that we're continuing to see benefits beyond that 30 percent, beyond the easy stuff, as they move into Tier 1 applications. Right now, we're looking at that 60-70 percent as the rule of thumb, where we're still seeing good returns for the investment. As applications continue to modernize and are better able to use virtual technologies, we'll see that threshold continue to increase into the 80-85 percent range.

Gardner: How about the type of payoff that might come as companies move into different computing models? If you have your sights set on cloud computing, private cloud, or hybrid cloud at some point, will you get a benefit or dividends from whatever strategic virtualization, governance and policy, and automation practices you put in place now?

Vogel: I don’t think anybody will question that there are continued significant benefits, as we start looking at different cloud computing models. If we look at what public cloud providers today are charging for infrastructure, versus what it costs a client today to stand up an equivalent server in their environment, the economics are very, very compelling to move to a cloud-type of model.

Now, with that said, we've also seen instances where costs have actually increased as a result of cloud implementation, and that's generally because the governance that was required was not in place. If you move to a virtual environment that's highly automated and you make it very easy for a user to provision in a cloud-type model and you don’t have correct governance in place, we have actually seen virtual server sprawl occur.

Everything pops up

All of a sudden, everybody starts provisioning environments, because it's so easy, and everything in this cloud environment begins to pop up, which results in increased software licensing costs. Plus, we still need to manage those environments.

Without the proper governance in place, we can actually see cost increase, but when we have the right governance and processes in place for this cloud environment, we've seen very compelling economics, and it's probably the most compelling change in IT from an economic perspective within the last 10 years.
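
To make the governance point concrete, here is a minimal sketch of a pre-provisioning policy check -- quota, ownership, and a mandatory expiry date -- of the kind that keeps easy self-service from turning into sprawl. The policy values and function are illustrative assumptions, not any particular product's API:

# Sketch of a governance gate in front of self-service provisioning. The quota
# and rules are illustrative assumptions.

MAX_VMS_PER_TEAM = 20  # illustrative quota

def approve_request(team_vm_count, has_expiry_date, owner):
    """Return (approved, reason) for a self-service VM request."""
    if not owner:
        return False, "rejected: every VM must have a named owner"
    if not has_expiry_date:
        return False, "rejected: an expiry/de-provision date is required"
    if team_vm_count >= MAX_VMS_PER_TEAM:
        return False, "rejected: team quota reached; reclaim idle VMs first"
    return True, "approved"

print(approve_request(team_vm_count=19, has_expiry_date=True,  owner="web-team"))
print(approve_request(team_vm_count=20, has_expiry_date=True,  owner="web-team"))
print(approve_request(team_vm_count=5,  has_expiry_date=False, owner="web-team"))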

Gardner: So, there is a relationship between governance and automation. You really wouldn’t advise having them separate or even consecutive? They really need to go hand in hand?

Vogel: Absolutely. We've found in many, many client instances, where they've just gone out, procured hardware, and dropped it on the floor, that they did not realize the benefits they had expected from that cloud-type hardware. In order to function as a cloud, it needs to be managed as a cloud environment. That, as a prerequisite, requires strong governance, strong process, security controls, etc. So, you have to look at them together, if you're really going to operationalize a cloud environment, and by that I mean really be able to achieve those business benefits.

Gardner: Erik Frieberg, tying this back to data-center transformation, is there a relationship now that's needed between the architectural level and the virtualization level, and have they so far been distinct?

I guess I'm asking you the typical cultural question. Are the people who are in charge of virtualization and the people who are in charge of data center transformation the same people talking the same language? What do they need to do to make this more seamless?

Frieberg: I’ll echo something Erik said. We hear clients talk about how it's not about virtualizing the server; it's about virtualizing the service. Virtualizing a single server and putting it into production by cloning it is relatively straightforward. But, when you talk about an entire service and all the elements that make up that service, you're now talking about a whole host of people.

You get server people involved around provisioning. You’ve got network people. You’ve got storage people. Now, you're just talking about the infrastructure level. If you want to put app servers or database servers on top of this, you have those constituents involved, DBAs and other people. If you start to put production-level applications on there, you get application specialists.

You're now talking about almost a dozen people involved in what it takes to put a service in production, and if you're virtualizing that service, you have admins and others involved. So, you really have this environment of all these people who now have to work together.

A lot of automation today is done by automating specific tasks. But, if you want to automate and virtualize this entire service, you've got to get those 12 people together to agree on the standard way to roll out that environment, and how to do it in today's governed, compliant infrastructure.

The coordination required, to use a term used earlier, isn't just linear. It sometimes becomes exponential. So, there are challenges, but the rewards are also exponential. This is why it takes weeks to put these services into production. It isn't the individual pieces; it's getting all these people working together and coordinated. That is what companies find challenging.
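[Editor's note: As a rough illustration of "virtualize the service, not the server," the sketch below declares a service as ordered layers and walks through them in sequence. The layer names and steps are illustrative, not any particular HP product; the point is that each step is owned by a different team, which is where the coordination cost comes from.]

```python
# Minimal sketch: a service declared as ordered layers, rolled out in sequence.
# Role names and steps are illustrative only.

SERVICE_BLUEPRINT = [
    ("network",    "allocate VLAN and firewall rules"),
    ("storage",    "carve LUNs and present them to hosts"),
    ("compute",    "clone VM templates for the app and DB tiers"),
    ("database",   "restore the schema and configure backups"),
    ("app",        "deploy the application packages"),
    ("monitoring", "register the service with monitoring and the CMDB"),
]

def roll_out(service_name: str) -> None:
    """Run every layer in order; in practice each step is owned by a
    different team, which is where the coordination cost comes from."""
    for role, step in SERVICE_BLUEPRINT:
        print(f"[{service_name}] {role:<10} -> {step}")

roll_out("order-entry")
```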

Gardner: Erik Vogel, it sounds as if this allows for a maturity benefit, or a sense of maturity around these virtualization benefits. This isn't a one-off. This is a repeatable, almost core, competency. Is that how you're seeing this develop now? A company should recognize that it needs to do virtualization strategically, but also needs to bake it in. It's something that's not going to go away?

Capability standpoint

Vogel: That's absolutely correct. I always tend to shy away from saying maturity. Instead, I like to look at it from a capability standpoint. When we look at just maturity, we see organizations that are very mature today, yet not capable of really embracing and leveraging virtualization as a strategic tool for IT.

So, we've developed a capability matrix across six broad domains to look at how a client needs to start to operationalize virtualization as opposed to just virtualizing a physical server.

We definitely understand and recognize that it has to be part of the IT strategy. It is not just a tactical decision to move a server from physical machine to a virtual machine, but rather it becomes part of an IT organization’s DNA that everything is going to move to this new environment.

We're really going to start looking at everything as a service, as opposed to as a server, as a network component, as a storage device, how those things come together, and how we virtualize the service itself as opposed to all of those unique components. It really becomes baked into an IT organization’s DNA, and we need to look very closely at their capability -- how capable an organization is from a cultural standpoint, a governance standpoint, and a process standpoint to really operationalize that concept.

Gardner: Erik Frieberg, moving toward this category of being a capability rather than a one-off, how do you get started? Are there some resources, some tried and true examples of how other companies have done this?

Frieberg: At HP Software, we have a number of assets to help companies get started. Most companies start around the area of automation. They move up in the same six-level model -- "What are the basic capabilities I need to standardize, consolidate, and automate my infrastructure?"

As you move further up, you start to move into this idea of private-cloud architectures. Last May, we introduced the Cloud Service Automation architecture, which enables companies to come in and ask, "What is my path from where I am today to where I want to get tomorrow? How can I map that to HP's reference architecture, and what do I need to put in place?"

The key goal here is that we work with clients who realize that you don’t want a two-year payback. You want to show payback in three or four months. Get that payback and then address the next challenge and the next challenge and the next challenge. It's not a big bang approach. It's this idea of continuous payback and improvement within your organization to move to the end goal of this private cloud or hybrid IT infrastructure.

Gardner: Erik Vogel, how about future trends? Are there any developments coming down the pike that you can look in your crystal ball and say, "Here are even more reasons why that capability, maturity, and strategic view of virtualization, looking toward some of the automation benefits, will pay dividends?"

The big trend

Vogel: I think the big trend -- and I'm sure everybody agrees -- is the move to cloud and cloud infrastructures. We're seeing the virtualization providers coming out with new versions of their software that enable very flexible cloud infrastructures.

This includes the ability to create hybrid cloud infrastructures, where part of the environment is a private cloud that sits within your own site and can burst seamlessly to a public cloud for excess capacity, as well as the ability to transfer workloads between that private cloud and a public cloud provider as needed.

We're seeing a shift toward IT becoming more of a service broker, where services are sourced and not just provided internally, as was traditionally done. Now, they're sourced from a public cloud provider or a public-service provider, or provided internally on a private cloud or on a dedicated piece of hardware. IT now has more choices than ever in how it goes about procuring that service.

A major shift that we're seeing in IT is being facilitated by this notion of cloud. IT now has a lot of options in how they procure and source services, and they are now becoming that broker for these services. That’s probably the biggest trend and a lot of it is being driven by this transformation to more cloud-type architectures.
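[Editor's note: A minimal sketch of that service-broker idea follows, assuming hypothetical providers, prices, and a simple data-sensitivity rule: the broker places each workload with the cheapest provider whose policy allows it.]

```python
# Minimal sketch of IT as a service broker: pick where to place a workload
# based on simple cost and data-sensitivity rules. Providers, prices, and
# policies are hypothetical placeholders.

PROVIDERS = {
    "private-cloud": {"cost_per_hour": 0.12, "allows_sensitive": True},
    "public-cloud":  {"cost_per_hour": 0.08, "allows_sensitive": False},
}

def place(workload: str, sensitive: bool) -> str:
    """Return the cheapest provider whose policy permits the workload."""
    eligible = {name: p for name, p in PROVIDERS.items()
                if p["allows_sensitive"] or not sensitive}
    if not eligible:
        raise ValueError(f"no eligible provider for {workload}")
    return min(eligible, key=lambda name: eligible[name]["cost_per_hour"])

print(place("reporting-batch", sensitive=False))   # -> public-cloud
print(place("customer-records", sensitive=True))   # -> private-cloud
```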

Gardner: Okay, last word to you Erik Frieberg. What trends do you expect will be more of an enticement or setup for automation and virtualization capabilities?

Frieberg: I'd just echo what Erik said and then add one more aspect. Most people, when they look at their virtualization infrastructure, aren't going with a single provider. They're looking at having different virtualization stacks, whether provided by hardware or software vendors, as well as incorporating other infrastructures.

The ability to be flexible and move different types of workloads to different virtualized infrastructures is key, because having that choice makes you more agile in the way you can do things. It will absolutely lower your costs and provide the infrastructure that leads to the higher quality of service that IT is trying to deliver to end users.

Gardner: It also opens up the marketplace for services. If you can do virtualization and automation, then you can pick and choose providers. Therefore, you get the most bang for your buck and create a competitive environment. So that’s probably good news for everybody.

Frieberg: Exactly.

Gardner: We've been discussing how automation, governance, and capabilities around virtualization can take the sting out of moving toward a strategic level of virtualization adoption. I want to thank our guests. We've had a really interesting discussion with Erik Frieberg, Vice President of Solutions Marketing at HP Software. Thank you, Erik.

Frieberg: Thank you, very much.

Gardner: And also Erik Vogel, Practice Principal and Americas Lead for Cloud Resources at HP. Thanks to you also, Erik.

Vogel: Thank you.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a podcast discussion on how automation and best practices allow for far greater degrees of virtualization and efficiency across enterprise data centers. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


Tuesday, June 15, 2010

HP Data Protector, a Case Study on Scale and Completeness for Total Enterprise Data Backup and Recovery

Transcript of a BriefingsDirect podcast from the HP Software Universe Conference in Washington, DC on backing up a growing volume of enterprise data using HP Data Protector.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the HP Software Universe 2010 Conference in Washington, DC. We're here the week of June 14, 2010 to explore some major enterprise software and solutions trends and innovations making news across HP's ecosystem of customers, partners, and developers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout this series of HP-sponsored Software Universe Live Discussions.

Our topic for this conversation focuses on the challenges and progress in conducting massive and comprehensive backups of enterprise live data, applications, and systems. We'll take a look at how HP Data Protector is managing and safeguarding petabytes of storage per week across HP's next-generation data centers.

The case study sheds light on how enterprises can consolidate their storage and backup efforts to improve response and recovery times, while also reducing total costs.

To learn more about high-performance enterprise scale storage and reliable backup, please join me in welcoming Lowell Dale, a technical architect in HP's IT organization. Welcome to BriefingsDirect, Lowell.

Lowell Dale: Thank you, Dana.

Gardner: Lowell, tell me a little bit about the challenges that we're now facing. It seems that we have ever more storage and requirements around compliance and regulations, as well as the need to cut costs. Maybe you could just paint a picture for me of the environment that your storage and backup efforts are involved with.

Dale: One of the things that everyone is dealing with these days is pretty common, and that's the growth of data. With a lot of technologies out there that are evolving -- virtualization, and the globalization effect of running business and commerce across the globe -- what we're dealing with on the backup and recovery side is an aggregate amount of data that's just growing year after year.

Some of the things that we're running into are the effects of consolidation. For example, we end up trying to back up databases that are getting larger and larger. Some of the applications and servers that consolidate end up being more of a challenge for services such as backup and recovery. It's pretty common across the industry.

In our environment, we're running about 93,000-95,000 backups per week, with an aggregate data volume of about 4 petabytes of backup data and 53,000 run-time hours. That's about 17,000 servers' worth of backup across 14 petabytes of storage.

Gardner: Tell me a bit about applications. Is this a comprehensive portfolio? Do you do triage and take some apps and not others? How do you manage what to do with them and when?

Slew of applications

Dale: It's pretty much every application that HP's business is run upon. It doesn’t matter if it's enterprise warehousing or data warehousing or if it's internal things like payroll or web-facing front-ends like hp.com. It's the whole slew of applications that we have to manage.

Gardner: Tell me what the majority of these applications consist of.

Dale: Some of the larger data warehouses we have are built upon SAP and Oracle. You've got SQL databases and Microsoft Exchange. There are all kinds of web front-ends, whether it's Microsoft IIS or any type of Apache. There are things like SharePoint Portal Services, of course, that have database back-ends that we back up as well. Those are just a few that come to mind.

Gardner: What are the major storage technologies that you are focusing on that you are directing at this fairly massive and distributed problem?

Dale: The storage technologies are managed across two different teams. We have a storage-focused team that manages the storage technologies. They're currently using HP Surestore XP Disk Array and EVA as well. We have our Fibre Channel networks in front of those. In the team that I work on, we're responsible for the backup and recovery of the data on that storage infrastructure.

We're using the Virtual Library Systems that HP manufactures as well as the Enterprise System Libraries (ESL). Those are two predominant storage technologies for getting data to the data protection pool.

Gardner: One of the other trends, I suppose, nowadays is that backup and recovery cycles are happening more frequently. Do you have a policy or a certain frequency that you are focused on, and is that changing?

Dale: That's an interesting question, because oftentimes you'll see some induced behavior. For example, we back up archive logs for databases, and often we'll see a large increase in those. As the volume and transactional growth goes up, you'll see the transaction log and archive log backups increase, because there's only so much disk space to house those logs in.

You can say the same thing about any transactional type of application, whether it's messaging -- Exchange, with its database and transaction logs -- SQL, or Oracle.

So, we see an increase in backup frequency around logs, not only to mitigate disk space constraints, but also to protect our RTO -- or RPO, I should say -- and how much data the business can afford to lose if something like logical corruption should occur.
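[Editor's note: A quick back-of-the-envelope sketch shows why log backups become more frequent as transaction volume grows: the safe interval is bounded both by the log disk and by the recovery point objective. The numbers below are purely illustrative.]

```python
# Illustrative sketch: the longest safe interval between archive-log backups
# is limited by the log disk filling up and by the RPO. Numbers are made up.

def log_backup_interval_minutes(log_gb_per_hour: float,
                                log_disk_gb: float,
                                rpo_minutes: float) -> float:
    """Longest safe interval between archive-log backups."""
    # Interval at which the log area would fill up completely.
    fill_minutes = (log_disk_gb / log_gb_per_hour) * 60
    # Never exceed the RPO, and keep some headroom on the log disk.
    return min(rpo_minutes, 0.8 * fill_minutes)

print(log_backup_interval_minutes(20, 40, 60))  # 60.0 -- the RPO is the limit
print(log_backup_interval_minutes(40, 40, 60))  # 48.0 -- more log volume forces more frequent backups
```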

Gardner: Let's take a step back and focus on the historical lead-up to this current situation. It's clear that HP has had a lot of mergers and acquisitions over the past 10 years or so. That must have involved a lot of different systems and a lot of distributed, redundant infrastructure. How did you start working through that to get to the more comprehensive approach that you are now using?

Dale: Well, if I understand your question, you're talking about the effect of us taking on additional IT in consolidating, or are you talking about from product standpoint as well?

Gardner: No, mostly on your internal efforts. I know there's been a lot of product activities as well, but let's focus on how you manage your own systems first.

Simplify and reduce

Dale: One of the things that we have to do, at the scope and size that we manage, is simplify and reduce the amount of infrastructure -- really, the number of choices and configurations going on in our environment. Obviously, you won't find the complete set or suite of HP products in the portfolio that we're managing internally. We have to minimize how many different products we have.

One of the first things we had to do was simplify, so that we could scale to the size and scope that we have to manage. You have to standardize and simplify the configuration and architecture as much as possible, so that you can continue to scale out.

Gardner: Lowell, what were some of the major challenges that you faced with those older backup systems? Tell me a bit more about this consolidation journey?

Dale: That's a good question as well. Some of the new technologies that we're adopting, such as virtual tape libraries, were things that we had to figure out. What was the use-case scenario for virtual tape? It's not easy to switch from old technology to something new and go 100 percent at it. So we had to take a step-wise approach to how we adopted the virtual tape library and what we used it for.

We first started with a minimal number of use cases and, little by little, we started learning what it was really good for. We've evolved the use case even more, so that it will carry forward into our next-generation design. That's just one example.

Gardner: And that virtual tape is to replace physical tape. Is that right?

Dale: Yes, really to supplement physical tape. We're still using physical tape for certain scenarios where we need the data mobility to move applications or enable the migration of applications and/or data between disparate geographies. We'll facilitate that in some cases.

Gardner: You mentioned a little earlier on the whole issue of virtualization. You're servicing quite a bit more of that across the board, not just with applications, but storage and networks even.

Tell me a bit more about the issues of virtualization and how that provided a challenge to you, as you moved to these more consolidated and comprehensive storage and backup approaches?

Dale: One of the things with virtualization is that we saw something similar to what we did with storage and utility storage. We made it much cheaper than before and easy to bring up. It had the "If you build it, they will come" effect. So, one of the things that we may end up seeing is an increase in the number of operating systems (OSs) or virtual machines (VMs) out there. That's the opposite of the consolidation effect, where you have, say, 10 one-terabyte databases consolidated into one to reduce the overhead.

Scheduling overhead

With VMs increasing and the use cases for virtualization increasing, one of the challenges is scheduling overhead tasks. It could be anything from a backup to indexing to virus scanning, and it means finding out what the limitations and the bottlenecks are across the entire ecosystem, so that we know when to run certain overhead without impacting production.

That’s one of the things that’s evolving. We are not there yet, but obviously we have to figure out how to get the data to the data protection pool. With virtualization, it just makes it a little bit more interesting.
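[Editor's note: As a rough sketch of that scheduling problem, the example below staggers overhead tasks for VMs that share a host across maintenance windows, assuming a made-up limit on how many tasks the host can absorb at once.]

```python
# Minimal sketch of staggering overhead tasks (backup, indexing, virus scan)
# across VMs that share a host, so they don't all run at once against
# production. VM names, windows, and the concurrency limit are made up.

from collections import defaultdict

HOST_CONCURRENT_TASKS = 2            # overhead tasks a host can absorb per window
VMS = ["vm-sap-01", "vm-sql-02", "vm-web-03", "vm-exch-04", "vm-file-05"]
WINDOWS = ["22:00", "23:00", "00:00", "01:00"]

def build_schedule():
    """Round-robin VMs into maintenance windows without exceeding
    the host's tolerated concurrency per window."""
    schedule = defaultdict(list)
    for i, vm in enumerate(VMS):
        window = WINDOWS[(i // HOST_CONCURRENT_TASKS) % len(WINDOWS)]
        schedule[window].append(vm)
    return dict(schedule)

print(build_schedule())
# {'22:00': ['vm-sap-01', 'vm-sql-02'], '23:00': ['vm-web-03', 'vm-exch-04'], '00:00': ['vm-file-05']}
```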

Gardner: Lowell, given that your target is moving -- as you say, you're a fast-growing company and the data is exploding -- how do you roll out something that is comprehensive and consolidating, when at the same time your target is a moving object in terms of scale and growth?

Dale: I talked previously about how we have to standardize and simplify the architecture and the configuration, so that when it comes time to build that out, we can do it en masse.

For example, quite a few years ago, it used to take us quite a while to bring up a backup infrastructure that would facilitate that service need. Nowadays, we can bring up a fairly large-scope environment, like an entire data center, within a matter of months, if not weeks. The process from there moves toward how we facilitate setting up backup policies and schedules, and even that's evolving.

Right now, we're looking at ideas and ways to automate that, so that when a server plugs in, it will basically configure itself. We're not there yet, but we are looking at that. Some of the things that we've improved upon are how we build out quickly and then turn around and set up the configurations, as business demand is converted into backup demand, storage demand, and network demand. We've improved quite a bit on that front.
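[Editor's note: A minimal sketch of that "plug in and configure itself" idea follows, assuming a hypothetical role-to-policy mapping rather than any actual Data Protector interface.]

```python
# Minimal sketch: when a new host registers, derive its backup policy from a
# role tag instead of hand-editing schedules. Roles, policy names, and the
# registry are hypothetical placeholders.

DEFAULT_POLICIES = {
    "database":   {"schedule": "daily-full",  "retention_days": 90},
    "app-server": {"schedule": "daily-incr",  "retention_days": 30},
    "default":    {"schedule": "weekly-full", "retention_days": 14},
}

BACKUP_REGISTRY = {}  # stand-in for the backup tool's policy store

def register_server(hostname: str, role: str) -> dict:
    """Pick a policy by role and record it for the new server."""
    policy = DEFAULT_POLICIES.get(role, DEFAULT_POLICIES["default"])
    BACKUP_REGISTRY[hostname] = policy
    return policy

print(register_server("db-ora-117", "database"))
print(register_server("web-044", "app-server"))
```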

Gardner: And what version of Data Protector are you using now, and what are some of the more interesting or impactful features that are part of this latest release?

Dale: Data Protector 6.11 is the current release that we are running and deploying in our next generation. Some of the features with that release that are very helpful to us have to do with checkpoint recoveries.

For example, if the backup or resource should fail, we have the ability with automation to go out and have it pick up where it left off. This has helped us in multifold ways. If you have a bunch of data that you need to get backed up, you don’t want to start over, because it’s going to impact the next minute or the next hour of demand.

Not only that, but it's also helped us keep our backup success rates up and our ticket counts down. Instead of bringing a ticket to light for somebody to go look at, it will attempt a checkpoint recovery a few times. After so many attempts, we'll bring light to the issue so that someone can look at it.
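[Editor's note: Conceptually, the checkpoint behavior described here looks something like the sketch below: resume from the last checkpoint a few times, and only raise a ticket if the job keeps failing. The backup function is a stand-in and the ticket is just a print statement; these are not Data Protector's actual commands.]

```python
# Minimal sketch of checkpoint-and-retry: resume a failed backup from where it
# left off, and only escalate after repeated failures. All calls are placeholders.

import random

MAX_ATTEMPTS = 3

def run_backup_from(checkpoint: int) -> tuple[bool, int]:
    """Pretend to back up objects starting at `checkpoint` percent;
    return (succeeded, new_checkpoint)."""
    progressed = checkpoint + random.randint(10, 40)
    return progressed >= 100, min(progressed, 100)

def backup_with_checkpoints(job: str) -> None:
    checkpoint = 0
    for attempt in range(1, MAX_ATTEMPTS + 1):
        ok, checkpoint = run_backup_from(checkpoint)
        if ok:
            print(f"{job}: completed on attempt {attempt}")
            return
        print(f"{job}: attempt {attempt} stopped at {checkpoint}%, will resume")
    print(f"{job}: still failing after {MAX_ATTEMPTS} attempts -- opening a ticket")

backup_with_checkpoints("exchange-weekly-full")
```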

Gardner: With this emphasis on automation over the manual, tell us about the impact that's had on your labor issues, and whether you've been able to take people off of these manual processes and move them into other, perhaps more productive, efforts.

Raising service level

Dale: What it's enabled us to do is really bring our service level up. Not only that, but we're able to focus on other things that we weren't able to focus on before. One of those things is the backup success rate.

Being able to bring that backup success rate up is key. Some of the things that we’ve done with architecture and the product -- just the different ways for doing process -- has helped with that backup success rate.

The other thing that it's helped us do is that we’ve got a team now, which we didn’t have before, that’s just focused on analytics, looking at events before they become incidents.

I'll use the analogy of a car that's about to break down, and the check-engine light comes on. We're able to go and look at that before the car breaks down. So, we're getting a little bit further ahead. We're going further upstream to detect issues before they actually impact our backup success rate or SLAs. Those are just a couple of examples there.

Gardner: How many people does it take to run these petabytes of recovery and backup through your next-generation data center? Just give us a sense of the manpower.

Dale: On the backup and recovery and media-management side, we've got about 25 people total, spread between engineering and operational activities. Their focus is on backup and recovery and media management.

Gardner: Let’s look at some examples. Can you describe a time when you’ve needed to do very quick or even precise recovery, and how did this overall architectural approach and consolidation efforts help you on that?

Dale: We've had several cases where we had to recover data and go back to the data protection pool. That happens monthly, in fact. We do a certain number of restores per month. Some of those are to mitigate data loss from logical corruption or accidental deletion.

But, we also find the service being used to do database refreshes. So, we’ll have these large databases that they need to make a copy of from production. They end up getting copied over to development or test.

The current technology we're using -- the current configuration, with the virtual tape libraries and the archive logs -- has really enabled us to get the data backed up quickly and restored quickly. That's been exemplified several times with either database copying or database recoveries, when those types of events do occur.

Gardner: I should think these are some very big deals, when you can deliver the recovered data back to your constituents, to your users. That probably makes their day.

Dale: Oh yes, it does save the bacon at the end of the day.

Gardner: Perhaps you could outline, in your thinking, the top handful of important challenges that Data Protector addresses for you at HP IT. What are the really important paybacks that you're getting?

Object copy

Dale: I've mentioned checkpoint recovery. There are also some of the things that we've been able to do with object copy, which has allowed us to balance capacity between our virtual tape libraries and our physical tape libraries. In our first-generation design, we had only enough capacity on the virtual libraries to hold a subset of the total data.

Data Protector has a very powerful feature called object copy. That has allowed us to maintain our retention of data across two different products or technologies. So, object copy was another one that's been very helpful.
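[Editor's note: As a rough illustration of how object copy lets retention span two tiers, the sketch below decides where a backup object should live by age, assuming made-up thresholds for the virtual-library window and total retention; this is not Data Protector's actual interface.]

```python
# Illustrative sketch: keep recent backup objects on the virtual tape library
# for fast restores, and copy older ones to physical tape to cover the full
# retention period. Thresholds and tier names are made up.

from datetime import date, timedelta

VTL_WINDOW_DAYS = 14      # how long objects stay on the virtual library
RETENTION_DAYS = 90       # total retention across both tiers

def placement(backup_date: date, today: date) -> str:
    age = (today - backup_date).days
    if age > RETENTION_DAYS:
        return "expired"
    return "virtual-tape" if age <= VTL_WINDOW_DAYS else "physical-tape (object copy)"

today = date(2010, 6, 14)
for days_old in (3, 30, 120):
    print(days_old, "days old ->", placement(today - timedelta(days=days_old), today))
```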

There are also a couple of things around the ability to do integration backups. In the past, we were using some technology that was very expensive in terms of its use of disk space on our XPs, using split-mirror backups. Now, we're using the online integrations for Oracle and SQL, and we're also getting ready to add SharePoint and Microsoft Exchange.

Now, we're able to do online backups of these databases. Some of them are upwards of 23 terabytes. We're able to do that without any additional disk space and we're able to back that up without taking down the environment or having any downtime. That’s another thing that’s been very helpful with Data Protector.

Gardner: Lowell, before we wrap up, let's take a look into the future. Where do you see the trends pushing this now? I think we could safely say that there's going to still be more data coming down the pike. Are there any trends around cloud computing, mobile business intelligence, warehousing efforts, or real-time analysis that will have an impact on some of these products and processes?

Dale: With some of the evolving technologies and some of the things around cloud computing, at the end of the day, we'll still need to mitigate downtime, data loss, logical corruption, or anything that would jeopardize that business asset.

With cloud computing, if we're using the current technology today with peak base backup, we still have to get the data copied over to a data protection pool. There would still be the same approach of trying to get that data. To keep up with these emerging technologies, maybe we approach data protection a little bit differently and spread the load out, so that it's somewhat transparent.

Some of the things we need to see, and may start seeing in the industry, are load management and how loads from different types of technologies talk to each other. I mentioned virtualization earlier. Some of the tools with content-awareness and indexing have overhead associated with them.

I think you're going to start seeing these portfolio products talking to each other, so they can schedule when to run their overhead functions and stay out of the way of production. Those are just a couple of the challenges for us.

We're looking at new configurations and designs that consolidate our environment. We're looking at reducing our environment by 50-75 percent just by redesigning our architecture and making available more resources that were tied up before. That's one goal that we're working on right now. We're deploying that design today.

And then, there's configuration and capacity management. This stuff is still evolving, so that we can manage the service level that we have today, keep that service level up, bring the capital down, and keep the people required to manage it down as well.

Gardner: Great. I'm afraid we're out of time. We've been focusing on the challenges and progress of conducting massive and comprehensive backups of enterprise-wide data and applications and systems. We've been joined by Lowell Dale, a technical architect in HP's IT organization. Thanks so much, Lowell.

Dale: Thank you, Dana.

Gardner: And, thanks to our audience for joining us for this special BriefingsDirect podcast coming to you from the HP Software Universe 2010 Conference in Washington DC. Look for other podcasts from this HP event on the hp.com website under HP Software Universe Live podcast, as well as through the BriefingsDirect Network.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this series of HP-sponsored Software Universe live discussions. Thanks again for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast from the HP Software Universe Conference in Washington, DC on backing up a growing volume of enterprise data using HP Data Protector. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.
