Thursday, June 17, 2010

HP's Bill Veghte on Managing Complexity Amid Converging IT 'Inflection Points'

Transcript of a BriefingsDirect podcast with HP's Executive Vice President Bill Veghte on managing change in IT as virtualization, cloud and mobility gain importance.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series, coming to you from the HP Software Universe 2010 Conference in Washington D.C. We're here the week of June 14, 2010, to explore some major enterprise software and solutions trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions and I'll be your host throughout this series of HP sponsored Software Universe Live discussions.

Please join me now in welcoming Bill Veghte, Executive Vice President of the HP Software & Solutions group. Welcome to BriefingsDirect, Bill.

Bill Veghte: Great. Thanks, Dana.

Gardner: We've heard a lot here about how tough things are. We're used to hearing tough economy stories, but now we're hearing about tough management and complexity stories.

We're also hearing about inflection points. Could you describe for me what you see right now in the IT business as an inflection point, or points, and how that relates or compares to some of the past game-changing times in the history of IT?

Veghte: Dana, I spend a lot of time out there with CIOs and IT professionals, and we're at two remarkable inflection points in our industry.

The first is in terms of how businesses are delivering IT, and that's on three dimensions. The first is virtualization. There's not only a lot of conversation, but a lot of movement of workloads and application services to virtualized environments. Look at the numbers. People say that over 25 percent of x86 server workloads are now virtualized, and that number looks like it's going to accelerate over the next couple of years.

Correspondingly, there's a heck of a lot of conversation around cloud. People wrap a lot up in that word, but many of the customers tell me they think of it as just another way of delivering experiences to their end-customers. And, in cloud there's platform, applications, and private versus public, but it's another choice point for CIOs and IT folks.

The final piece in terms of IT delivery is that there are a heck of a lot of mobile devices, over a billion mobile devices, accessing the Internet. With the advent of smartphones, a very rich viewing and consuming medium, people expect to have that information.

Those things are incredible tools and opportunities, whether you characterize them on a balance sheet, as a move from capital expenditure to operating expense, or as anytime/anywhere information on your mobile device. But with that come more choice points and more complexity.

Breadth and depth

The other inflection point that I'd highlight, Dana, is the breadth and depth of data that’s being generated. You and I both know that digital information is doubling globally every 12 to 18 months. In the midst of all the digital photos or whatever, sometimes people lose track of the fact that 85 percent of that data resides in businesses. And the fastest growing part of that is in unstructured data.

Now, the most precious resource is your ability to take that data and translate it into actionable information. The companies and businesses that are able to do that have a real competitive advantage.

You can put that in the context of a specific business operation. If you're a pharmaceutical company, how quickly can you bring a drug to market? You can characterize it in a financial services organization: do you have better, quicker data on market movements?

You can characterize it in an IT organization. There's an enormous amount of IT information and data, but how do I parse out the things that should become a service desk ticket, and can I automate that so I'm not putting people in the middle?
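The event-to-ticket automation Veghte alludes to can be sketched as a simple rule-based triage step. This is purely illustrative, not HP's implementation; the event types, field names, and severity threshold are all hypothetical:

```python
# Hypothetical sketch of rule-based event triage: only events matching
# actionable patterns become service-desk tickets; the rest are filtered
# out, so no person has to sit in the middle of the stream.

ACTIONABLE = {"disk_full", "service_down", "cert_expiring"}  # assumed event types

def triage(events):
    """Partition raw monitoring events into tickets and noise."""
    tickets, noise = [], []
    for event in events:
        if event["type"] in ACTIONABLE and event.get("severity", 0) >= 3:
            tickets.append({"summary": f"{event['type']} on {event['host']}",
                            "severity": event["severity"]})
        else:
            noise.append(event)
    return tickets, noise

events = [
    {"type": "disk_full", "host": "db01", "severity": 4},
    {"type": "heartbeat", "host": "web07", "severity": 1},
]
tickets, noise = triage(events)
print(len(tickets), len(noise))  # -> 1 1
```

In practice the rules would come from a policy engine rather than a hard-coded set, but the shape of the problem, turning a flood of data into a short list of actionable items, is the same.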

When I think about it in a historic context, I'd highlight a couple of things. One is that we're going through the biggest change in IT delivery since client-server, because of the three delivery vehicle changes that I highlighted. That, in turn, is going to generate a very significant refresh in applications and services.

You don't have the time deadline in the same way that we did with Y2K, but the CIOs and IT and apps folks that I know, as the economy is recovering, are looking at their application and service portfolios and saying, "How am I going to refresh this to take advantage of these new and different delivery vehicles?"

Gardner: How does that relate to HP? You're relatively new to HP. You had a long and distinguished career at Microsoft, and you've been here for a little over a month or so. How do these inflection points and the opportunity you perceive for HP come together? Perhaps you could fill us in on what attracted you to HP.

Veghte: Sure, Dana. As I looked across the marketplace and at this inflection point, there are a couple of things that attracted me to HP. One, I think HP is uniquely positioned in the marketplace, because it has a great portfolio as a company, across not only services, but also hardware and software.

On the software side, there is a remarkable portfolio of assets within HP, across application development and quality to the operations side. Yet, given the complexity that I just characterized, there's a real opportunity to bring more of a portfolio approach to delivering those solutions to customers.

Doing remarkable things

The final piece that I would highlight is that I worked for many years with HP as a partner. Whether it be Todd Bradley, who I worked with around the Windows business, or Mark [Hurd], as the executive sponsor for the HP Partnership, when I was on the Microsoft side, they're a great group of people doing some remarkable things.

If you look at what that executive leadership team has done over the last couple of years with and for HP customers, it’s exciting to think what we can do over the next five or six years.

Gardner: Speaking of HP customers, they sure are here at Software Universe. There are thousands of what we can call hardcore HP folks. What are they telling you? What have you learned? What has surprised you in your interactions in the last few days?

Veghte: It's been a great Software Universe for us. Compared with years past, there is a degree of energy and optimism in customers that's very invigorating. I've been in back-to-back meetings. You walk in, and they are excited about the innovations that we're bringing into market.

We've had a variety of very exciting announcements, such as Business Service Management 9.0. Some of the announcements were around the ability to automate how you take a production environment and capture it in a test script.

The areas that customers are highlighting are: "You've got a great portfolio. You're heading in the right direction. Keep that pedal down. Take advantage of the fact that you've got not only fantastic best-of-breed capabilities in individual areas, but that you've got this breadth of offerings. I'm going to evaluate you against my entire solution set."

It starts with the strategy. In fact, there was a great customer meeting this morning. The customer said, "Look, I use you in a bunch of different ways, and I think you've got a great product. Now, what I need you to do is step up and make sure that from strategy, to application, to operation you're delivering that cohesion for me. I see good steps, but I want to see you keep doing it."

I think they're constructively challenging us to make sure that we have a set of tools that scale effectively into the most complex IT operating environments in the world, and that, as the additional complexity in delivery vehicles I just highlighted comes online, we continue to scale effectively to deliver for customers.

For example, at Software Universe 2010, with the Business Service Management release we announced that not only will we provide a near real-time, dynamic view of IT, but we'll do it across virtualized and cloud implementations. I just came from a session where we were demoing to 3,500 people the ability to display that information on smartphones across a variety of platforms -- from BlackBerry to iPhone to a Sprint device.

Gardner: It seems like complexity is the common foe here ... when we talk about virtualized workloads, when we have a variety of sourcing options -- on-premises, off-premises cloud, private, colo, hosting -- and, as you point out, complexity in the number of endpoints or different devices.

Perhaps customers are wondering how to stay up with this accelerating pace of complexity. How could we think about the role of IT? What does IT need to be thinking in terms of itself? How should it perceive itself in the next few years, vis-à-vis this common sense of mounting complexity?

Continuing to evolve

Veghte: Well, Dana, the thing I'd go back to is those two inflection points that I highlighted, because I think they're very important, when we think about the fact that the role of IT continues to evolve.

First, as an IT organization, I have more choices in terms of how I am delivering my application service for and with business. I increasingly become a service broker, because I'm looking across my applications and services and deciding with the business what’s the most cost effective and best way of delivering those experiences for the businesses.

Second is, and we've talked about this as an industry for a long time, the continuing blending of business and IT. A customer from a Fortune 5 company was in a meeting with me earlier this week. He's been in the industry for 25 years, a very sharp guy, and in a deep partnership with HP.

He said that this year there are more people from business operations coming to Software Universe than from IT operations. The reality, he said, is that whether you talk about it in the context of PPM or application and service requirements, those two functions are intermingling. Given the software footprint and portfolio we have, it's a wonderful opportunity, and that intermingling continues to accelerate.

The final piece that I would highlight is not a change, but a continuity. Even as IT has a broader set of choices, and its relationship with the business continues to intertwine more and more, IT is not off the hook when it comes to security, compliance, or the availability and performance of the solutions it's responsible for supporting and delivering for the business. So, it's important to factor that in, even as we look ahead.

Gardner: Seeing this relationship between business and IT shift and change, dealing with complexity across a variety of levels, looking for the right analysis and information in that sea of data -- where do you think the definition of management goes?

Are we talking about an expanded definition of management or the role of IT? If you can manage IT, does that mean you can better manage the business? Is there a coming together of managing IT and managing a business?

Veghte: This has been illustrated time and time again. The most successful businesses have figured out how to constructively apply IT to run a business.

IT tools have reached such maturity, and IT is so intermingled with the customer experience. The CIO at Delta Air Lines was talking yesterday about her use of HP technologies and some of the remarkable projects she's been through. You listen to that talk and realize that the reservation system, the way I check in, and my experience with Delta Air Lines are commingled with what you and I would characterize as the IT experience.

It was a remarkable story about that interrelationship with the business, as they were not only dealing with the broad adversity of the business climate, but also were trying to merge with Northwest Airlines.

Gardner: Perhaps we could go as far as to say that for many businesses, over time, IT is the business.

Veghte: Dana, the trick in that is that IT means many different things to many people. The thing I would highlight is that IT has the ability to keep outsourcing a variety of baseline capabilities. With that outsourcing, IT providers, as an industry, are going to be able to deliver more and more. That gives IT the ability to move up the stack to higher value-add applications and services, and then the business runs through and with IT.

Gardner: So, maybe we could expand it to say, managing the services through IT is the business -- or some combination of the service model?

Veghte: You're a smarter analyst than I am. All I know is that the intersection between the two -- and the resulting customer experience -- continues to accelerate. We look forward, as part of HP Software & Solutions, to playing a great role in helping customers deliver those solutions and those experiences.

Gardner: Well, great. Thank you. We've been talking with Bill Veghte, Executive Vice President of HP Software & Solutions. Thank you so much, Bill.

Veghte: Great. Thank you, Dana.

Gardner: And, we want to thank our audience for joining us for this special BriefingsDirect podcast, coming to you from the HP Software Universe 2010 Conference in Washington. Look for other podcasts from this HP event on the hp.com website, as well as via the BriefingsDirect Network.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this series of Software Universe Live discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast with HP's Executive Vice President Bill Veghte on managing change in IT as virtualization, cloud and mobility gain importance. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Tuesday, June 15, 2010

HP's Robin Purohit Unpacks Business Service Management 9 as Way to Address Complexity in Hybrid Data Centers

Transcript of a BriefingsDirect podcast with HP's Software Products General Manager Robin Purohit on managing software and services in an increasingly chaotic environment.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series, coming to you from the HP Software Universe 2010 Conference in Washington D.C. We're here the week of June 14, 2010, to explore some major enterprise software and solutions trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions and I'll be your host throughout this series of HP sponsored Software Universe Live discussions.

We're now joined by Robin Purohit, Vice President and General Manager of the Software Products Business Unit for HP Software & Solutions. Welcome back to BriefingsDirect, Robin.

Robin Purohit: Great to talk to you again, Dana.

Gardner: You know we're seeing a lot of changes in the market, and this whole notion of management to me seems to be exploding. There are just so many moving parts. People in these larger organizations are expected to manage software and services, manage them on- and off-premises, and also manage more of the development-to-operations lifecycle. Tell me how this complexity is being handled. How are enterprises beginning to adjust to this larger scope and definition of management?

Purohit: Our customers are dealing with the most significant combination of changes in IT technologies and paradigms that I have ever seen. There's a whole new way of developing applications, like Agile development; the real acceleration of virtualization from desktop and test environments to production workloads; and all the evaluations of where cloud and software as a service (SaaS) fit and how they support enterprise applications.

All these things and more are colliding at once. What customers are saying is, "How do we take advantage of these new technology shifts and new ways of dealing with technology to get dramatic impact in cost, but not increase the risk? How can I do things faster and cheaper, but do things right?"

What they're looking for is a way to somehow simplify and automate the use of all these technologies and their processes using management software, so they get the most they can out of these new paradigm shifts while keeping up with what the business wants them to do.

Gardner: Depending on the type of organization, within enterprises and within different units, there seems to be an emphasis on, "Let's try to use public clouds as best we can." Others are saying, "No, we really want to focus on building a private or on-premises cloud capability." Surely, hybrids are growing more common, in many different permutations, across and within these organizations. From your perspective, how important is addressing the management issue around hybrid computing?

Purohit: First of all, I'd say that, compared to a year ago, the active interest of our clients in cloud computing has just exploded. Last year it was a curiosity for many senior IT executives, something on the horizon, but this year there's really an active evaluation.

Most customers are looking initially at something a little safer, meaning a private cloud approach, where a new stack of infrastructure and applications is run for them by somebody else, either on their site or at some off-site operation. That seems to be the predominant paradigm.

Piece of a puzzle

The challenge is that that set of cloud services, that private service, is really just a piece of the puzzle in running their business operation. It's usually a slice of infrastructure or a certain class of application that's part of a larger critical business service for their company.

What they have to do is figure out how to take advantage of that: target the right workload where it's okay to take that risk, select the right partner, and then make sure that all the instrumentation verifying they're getting what they wanted out of it is actually integrated with the rest of their operation. Otherwise, it's just another thing to manage.

Gardner: And, while they can control many of the aspects of these applications and data-sets in an on-premises or private cloud, they lose some of that control when they move outside. Perhaps management and governance will be the common bridge that allows them to feel that the risk is manageable.

Purohit: That’s right. Now, the big risk is that the business is moving so fast. They read all the articles in BusinessWeek and The Economist and ask their IT guys, "Why can't we do this?" They actually want IT to move faster to public cloud computing.

So, the challenge for IT is how they enable that level of innovation at the right pace, but make sure that it's all very well governed -- simple things like getting what we pay for in terms of performance, capability, and the capacity.

We're sourcing some sort of elastic services. And, by the way, is that environment secure, so we're not putting the business at risk? Then, if I want to change to a different cloud provider or another private provider, how do I do that in a fairly nimble way, without having to re-architect everything I've done?

That whole notion of cloud governance is one of the most critical things to get right near-term, so that the IT guys can keep up with the business guys -- but all the risk is still there; IT is still going to be on the line.

Gardner: As you mentioned earlier, Robin, the pressure on cost is still very high. Do you foresee that managing these issues about control and risk will also, at some point, help define, analyze, and ultimately control and reduce the total costs? Or are these even the same types of problems?

Purohit: It's important to look at where the costs are coming from. There are two really big cost drains in IT. One is that, the majority of the time, things don't work well when applications are initially rolled out. If you think of Agile and the pace at which new application innovations are being rolled out, it really means you have to get things right the first time. So the first thing is to tackle that problem as you go to these hybrid models.

Chaotic environment

The second thing is that most companies are still trying to get a handle on the right way of simplifying and automating their operation in a very chaotic environment. A typical data center is dealing with 900 changes a month. They might get a million incidents over a couple of months, and each one of those incidents could cost up to $80 plus labor. So, you can imagine how chaotic and expensive it is just to run day-by-day.
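Taking the figures quoted above at face value, a rough back-of-the-envelope calculation shows why manual incident handling at that scale is untenable:

```python
# Back-of-the-envelope cost of incident handling, using the figures
# quoted above: a million incidents over a couple of months, at up to
# $80 each before labor.
incidents_per_two_months = 1_000_000
cost_per_incident = 80  # dollars, excluding labor

total = incidents_per_two_months * cost_per_incident
print(f"${total:,} per two months")  # -> $80,000,000 per two months
```

Even if only a fraction of incidents hit the $80 ceiling, the arithmetic makes the case for automating the day-to-day operation.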

So it's really critical that both the hand-off of applications to operations and the running of daily operations be highly automated and simplified, and all focused on the impact on the things the business wanted in the first place.

If we do that right and then make extensions into all of these same core processes to accommodate a SaaS model, a private cloud model, or even ultimately a public cloud model, without having to change all of that, you are going to be able to bridge from today to the future. You'll be getting all that benefit and actually keep reducing your cost, because you want to keep doing this innovation in a sustainable way.

Gardner: So, we’re into this environment of change and complexity and the pressure to gain control, reduce risk, and control cost. HP today has announced Business Service Management 9 (BSM9). Give us an overview of how that all shapes up and relates to this environment we have been discussing.

Purohit: Absolutely. This has been a great release, and we're incredibly proud of it. BSM 9 is our solution for end-to-end monitoring of services in the data center. It's been a great business for us, and this is a breakthrough release that we revealed to our customers this week.

It's anchored on what we call the runtime service model. A service model is basically a real-time map of everything -- from the business transactions the business is running, to all of the software that makes up the composite application behind the service, to all of the infrastructure, whether physical or virtual, on-premises or off-premises, that supports that application.

All of that together -- knowing how it's connected, what its health is, and what's changing in it, so you can make sure it's all running exactly the way the business expects -- is really critical.

If you can imagine what we've talked about with virtualization and the rate of change there -- people optimizing virtual workloads, new applications being fired up in the data center with Agile, and maybe some outsourced environments and private/public clouds -- that service model had better be real-time and up to date all the time.

That's the real breakthrough. Before, we had a service model that was linked to the configuration we thought was running. Having everything up to date in real time, with all of this increased velocity, is really critical.
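The idea of a service model can be sketched in a few lines: a dependency map from a business service down through its applications to the infrastructure underneath, with health rolled up from the leaves. This is an illustrative toy, not HP's implementation; all node names are invented:

```python
# Illustrative sketch of a "service model": a map from a business
# service, through the applications behind it, down to the (physical or
# virtual) infrastructure, with health rolled up from the bottom.

from collections import defaultdict

class ServiceModel:
    def __init__(self):
        self.deps = defaultdict(set)   # node -> nodes it depends on
        self.health = {}               # node -> "ok" / "down"

    def add_dependency(self, node, depends_on):
        self.deps[node].add(depends_on)

    def set_health(self, node, status):
        self.health[node] = status

    def rollup(self, node):
        """A node is healthy only if it and everything below it is."""
        if self.health.get(node) == "down":
            return "down"
        for dep in self.deps[node]:
            if self.rollup(dep) == "down":
                return "down"
        return "ok"

model = ServiceModel()
model.add_dependency("checkout-service", "payments-app")
model.add_dependency("payments-app", "vm-042")   # a virtualized host
model.set_health("vm-042", "down")
print(model.rollup("checkout-service"))  # -> down
```

The "runtime" qualifier in the release is precisely about keeping a map like this accurate as workloads move: when `payments-app` migrates to another host, the edge has to be rewritten automatically, not rediscovered weeks later.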

So, we've rolled that out, and it's now the backbone for all of our end-to-end monitoring. The other thing I'd stress is that once you have that, especially, in this very fast-paced environment, you can really increase the levels of automation.

What you've seen before

When you detect an event, knowing exactly what was going on at the time of the event helps people diagnose it and probably solve it, because most of the time these things are based on what you've seen before.

We've taken all of our world-class automation technology and wrapped it right into this end-to-end monitoring solution to automate everything possible. We think this can drive a dramatic reduction in the cost of operations.

The last thing I'd emphasize is that there are a lot of people involved in solving these problems and running these operations. What's important is that all of them have a very personalized UI that looks and feels like a modern application, but is all based on one version of the truth about what's going on. We've made major improvements, overhauling the way all of this is presented in a very rich Web 2.0 way, but also in a way that's targeted to the needs of every single user in operations.

Gardner: So, you've announced some software, some services, this collaboration piece, and some new partnerships developing the ecosystem as well. It sounds as if you're allowing more variability in the runtime environment but creating more commonality in the management layer, extending that outward to the environment, and then letting the automation come back in, in terms of self-management. Is that a fair assessment?

Purohit: Right. That's the trick. Again, there are a lot of people involved in running these business-critical services. You want to personalize it for them, but you also want to simplify what you provide to them and make sure it's all accurate in real time. We try to solve both problems at once: simplifying the user experience and making sure that what we're showing them, given how critical it is, is all up to date and accurate.

Gardner: What has been some of the response? What are you hearing from the customers, from the folks you are talking to? Do they seem to feel that the solution set that you're providing is aligned with their problems?

Purohit: Absolutely. We had a couple of really great customers speak to the solutions this week. Boeing is a big customer and has actually been a longtime user of BSM from us. They've gotten some massive improvements in their service-level agreements (SLAs). They were basically in condition red. Now, they're well over 98 or 99 percent on SLAs, they've saved more than $1 million in cost over the previous solution, and they've seen repair times drop from 10 hours to 1 hour.

That's with the current solution. What they've told us as part of our beta program is that this release is going to take it to a whole other level. I can't quantify the impact, but they're going to be able to take on these new technologies, and all the great gains they had with the previous release are just going to get better and better.

That was a great success, and we also had Sprint on stage with us to talk to our customers about their evaluation of the product, and they're incredibly excited. You can imagine that telcos have all sorts of pressures on both cost and agility right now in a highly competitive environment.

Customers like Sprint can have a very dynamic experience. We can run part of this for them with our SaaS offering while they monitor internally. Or, they can have a cloud provider or a business partner running part of their network, without having to change the way they operate. So we think that kind of customer validation is a huge step forward.

Gardner: These are two different types of customer: an enterprise and a service provider. As HP helps the cloud providers build out their clouds, if there's a common approach, methodology, understanding, even culture around management in both the enterprises and the cloud providers, doesn't that make the whole greater than the sum of the parts when it comes to managing the entire lifecycle?

Adapt and morph

Purohit: That's right. What we haven't touched on too much is that what we're trying to do at HP is not just worry about the data center. We're trying to help customers really adapt and morph these applications into the new world. Most customers are shifting their IT focus to innovating around the application. That means more of the people they have internally are creating new IT in the form of a new application for a salesperson or a new customer-facing portal. That's going to drive more revenue.

What we're really trying to do is help them bridge that world, which is very innovation-centric, into this new hybrid world, which has to be very operationally tight. A couple of things that we've also announced this week have gotten great feedback. One is a new capability called Application Deployment Manager, which is basically an extension to our industry-leading automation capabilities.

It really allows development, QA, and operations to coordinate hand-offs of applications in a very well-prescribed way, so that what they designed gets handed off and rolled out into the production environment in a crisp, automated way -- one that represents the best practices and everything that's been learned in the QA cycle. That was a big step forward.
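A hand-off gate of this kind can be sketched as a simple check that a release package carries what QA prescribed before operations will accept it. This is a hypothetical illustration of the pattern, not the Application Deployment Manager itself; the field names are invented:

```python
# Hypothetical sketch of a dev-to-ops hand-off gate: the release package
# carries what was learned in QA (tests passed, an approved config), and
# the deployment step refuses anything that doesn't match the prescription.

REQUIRED_KEYS = {"version", "tests_passed", "approved_config"}  # assumed fields

def ready_for_production(package):
    """Return (ok, reason) for a candidate release package."""
    missing = REQUIRED_KEYS - package.keys()
    if missing:
        return False, f"missing: {sorted(missing)}"
    if not package["tests_passed"]:
        return False, "QA tests did not pass"
    return True, "ok"

ok, reason = ready_for_production(
    {"version": "1.4.2", "tests_passed": True, "approved_config": "prod.yaml"}
)
print(ok, reason)  # -> True ok
```

The point of the pattern is that the prescription is machine-checked at the boundary, so the hand-off stays crisp no matter how fast releases arrive.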

We've also worked upstream. We've extended our Quality Management solution to tackle the requirements problem, linking business, developers, and QA together, and opened up that environment so that it's much easier to integrate with source-code management and development tools from folks like CollabNet.

CollabNet is one of the industry's leading development-tools providers. As announced, we've integrated with that new open interface, and we also support any other software out there in that environment. That's going to let us bridge all that upstream innovation, make sure it's designed and tested correctly, hand it off in an automated way into production, and run it on these new, optimized hybrid environments. So we really are addressing the whole problem, which is the thing our customers are most excited about.

Gardner: Now, Robin, you've been involved with these issues for some time. I remember, not too long ago, when just getting visibility into a distributed computing environment that you completely controlled was considered a very big deal.

How important is this to you personally -- this notion of being able to gain visibility, apply management, and then automate?

Purohit: For me, this release is an extremely proud moment. This has been our vision for some time, particularly with BSM 9: being able to bring all these points of monitoring information together in a simple, powerful way to solve those big business problems. What's changed, though, is the necessity to do that now, in this new, rapidly changing environment. All this new technology becomes even more important to our customers.

For us particularly, BSM 9 is that vision turned into reality at just the right time for our customers. That's really the most exciting thing for me about what we did this week.

Gardner: Well, thank you so much for joining us. We've been talking about HP BSM 9.0 with Robin Purohit, Vice President and General Manager of the Software Products Business Unit for HP Software & Solutions. I know you've been very busy here at the show. I appreciate your input and good luck.

Purohit: Alright, Dana, thanks again.

Gardner: And thank you to our audience for joining us for this special BriefingsDirect podcast, coming to you from the HP Software Universe 2010 Conference in Washington D.C.

Look for other podcasts in this HP event series on the hp.com website, as well as via the BriefingsDirect Network. This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this series of Software Universe Live discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast with HP's Software Products General Manager Robin Purohit on managing software and services in an increasingly chaotic environment. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

You may also be interested in:

HP's Anton Knolmar on Seeking Innovations to Enterprise IT Challenges at Software Universe Conference

Transcript of a BriefingsDirect podcast with HP's Anton Knolmar on the innovations and customer outreach from the software conference in Washington, DC.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series, coming to you from the HP Software Universe 2010 Conference in Washington D.C. We're here the week of June 14, 2010, to explore some major enterprise software and solutions trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions and I'll be your host throughout this series of HP sponsored Software Universe Live discussions.

We're now joined by Anton Knolmar, Vice President of Marketing for HP Software & Solutions. Welcome to the podcast, Anton.

Anton Knolmar: Hi, Dana.

Gardner: You're welcoming folks to the show, and there's a very big crowd here in Washington, in a new facility. Tell us a little bit about what's exciting and new for attendees and why this is a particularly big event for HP.

Knolmar: HP Software Universe has quite a history. It’s not the first time that we're running this. A big thing for this year, especially for Americans, is that we're moving out of Las Vegas, where we were for the last couple of years, to Washington.

First, we wanted to get a different staging and we wanted to attract our public-sector customers. That’s an important thing for us, because we do a lot of business here. That’s one of the reasons we've moved to Washington.

We're really excited, as we have a fully packed agenda from Tuesday until Friday with mainstage sessions and 200 track sessions. We have a dedicated Executive Track and we have a combined solution showcase, where we have our HP product experts, our service professionals, and our key partners together.

We have awards of excellence and partner summits. So, we're really trying to get the entire ecosystem that has already deployed our solutions, from a customer and a partner perspective here, but also prospects in Washington. I'm happy to tell you that all the hotel rooms are booked. So, we're fully packed in the middle of Washington to try to bring across the best of what we have from our Software and Solutions portfolio.

Gardner: And, this event is happening, I think, at a fairly auspicious time. There’s a lot going on in the industry. Many people are referring to it as an inflection point. Why do you think this is an important juncture in the evolution of IT and business?

Knolmar: As you said, there are a lot of things going on at the moment around virtualization, cloud, outsourcing, on-premise, and off-premise. I think that over the next five or six years, there will be even greater disruptions in how organizations adopt and use technology.

If you look around, there are a couple of facts which are really critical. There will be a 100 percent increase in the number of virtual machines from 2009 to 2012 and a 43 percent growth in virtualized applications. Mobile devices will grow even more.

A lot of things are happening at the moment around those different areas, around mobile devices, cloud, and virtualization, as well as around the information explosion for the foreseeable future.

Ahead of the game

Definitely, from an HP and from an HP Software & Solutions perspective, we want to be ahead of this game and provide the appropriate level of capability for our customers, so that they can be future-ready with whatever we provide them. They can deploy it better. They can deploy it more simply. They can integrate more simply. And, they can use this stuff to give them, our customers, a competitive advantage against their competitors.

Gardner: One of the things I hear a lot in the field is a concern about how to control and manage the complexity that's going on. As organizations look at various new sourcing opportunities, perhaps adopting some cloud models themselves, they're worried about the issue of governance. Is that another big part of the picture today?

Knolmar: It's definitely a big part of the picture, because technologies like virtualization and cloud, which we just discussed, represent the biggest disruption in the technology environment since client-server.

But, unlike client-server, the entire enterprise is not just going on one service delivery method or another. We believe that enterprises will have hybrid technologies, as well as a hybrid application environment.

This hybrid environment will be created by enterprises sourcing services from a variety of service delivery models. It will require a set of tools that can manage a service irrespective of where it comes from: in-house, physical, virtual, outsourced, or via the cloud.


The ability to benefit from these advances is where our customers are struggling. These new delivery models will be directly related to how they can manage and how they can automate them, irrespective of where they are sourced or where they are running.

That’s one key piece of our announcement. What we want to get across at this event in Washington is how we can help our customers speed up time-to-innovation while reducing risk. At the same time, they can get ready by building a management environment that is ready for the next big thing.

Also, we can explain to our customers how we can simplify, integrate, and automate to gain, as I mentioned before, a competitive advantage from the new technologies. What’s clear to everyone is that one size fits no one. So, enterprises will need to have multiple sourcing options for their applications.

Gardner: Of course, to that same thought, community effect is quite prominent nowadays. Events like this certainly give people opportunity to get together, do some brainstorming, compare notes, and learn from each other. So, we're certainly looking forward to these mainstage events and hearing the news from HP. Tell me, if you could, how these events tend to enliven the community itself.

Knolmar: I think what those events offer to us is the two-pronged approach that we're seeing at the moment. Definitely, it’s our biggest user gathering, and what we're trying to do is get live customers together face-to-face. But, that’s only one piece.

What we're also doing for the first time is web streaming our content to different parts of the world, so that we really can reach out much more broadly. We're also building up a kind of HP Software & Solutions community. We have other ways of doing this and are using the social media capabilities as well. [Search for conference goings-on at Twitter on #HPSWU.]

We're connecting our customer community in a better way to bring those pieces together, even across the different persona levels. Basically, we're drilling down from the CIO level, via the VP or IT manager, down to the other areas, where we have more of a practitioner level.

For this show specifically, people can even follow us on Twitter and on Facebook. It’s really a big thing for us to be investing in these kinds of new areas and reaching out as broadly as we can do here to the different target audiences and using all these new capabilities which are out there.

Gardner: Well, great. I'm certainly looking forward to hearing more about the show as it unfolds over the next several days. I want to thank you for joining us. We've been here with Anton Knolmar, Vice President of Marketing for HP Software & Solutions, learning about what to expect at the Software Universe Conference. Thank you, Anton.

Knolmar: Thank you.

Gardner: And, thanks to our audience for joining us for this special BriefingsDirect podcast, coming to you from the HP Software Universe 2010 Conference in Washington, DC. Look for other podcasts from this HP event on the hp.com website, as well as via the BriefingsDirect network.

I'm Dana Gardner; Principal Analyst at Interarbor Solutions, your host for this series of HP sponsored Software Universe Live discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast with HP's Anton Knolmar on the innovations and customer outreach from the software conference in Washington, DC. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

You may also be interested in:

HP Data Protector, a Case Study on Scale and Completeness for Total Enterprise Data Backup and Recovery

Transcript of a BriefingsDirect podcast from the HP Software Universe Conference in Washington, DC on backing up a growing volume of enterprise data using HP Data Protector.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the HP Software Universe 2010 Conference in Washington, DC. We're here the week of June 14, 2010 to explore some major enterprise software and solutions trends and innovations making news across HP's ecosystem of customers, partners, and developers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout this series of HP-sponsored Software Universe Live Discussions.

Our topic for this conversation focuses on the challenges and progress in conducting massive and comprehensive backups of enterprise live data, applications, and systems. We'll take a look at how HP Data Protector is managing and safeguarding petabytes of storage per week across HP's next-generation data centers.

The case study sheds light on how enterprises can consolidate their storage and backup efforts to improve response and recovery times, while also reducing total costs.

To learn more about high-performance enterprise scale storage and reliable backup, please join me in welcoming Lowell Dale, a technical architect in HP's IT organization. Welcome to BriefingsDirect, Lowell.

Lowell Dale: Thank you, Dana.

Gardner: Lowell, tell me a little bit about the challenges that we're now facing. It seems that we have ever more storage and requirements around compliance and regulations, as well as the need to cut cost. Maybe you could just paint a picture for me of the environment that your storage and backup efforts are involved with.

Dale: One of the things that everyone is dealing with these days is pretty common, and that's the growth of data. Although we have a lot of technologies out there that are evolving -- virtualization and the globalization effect of running business and commerce across the globe -- what we're dealing with on the backup and recovery side is an aggregate amount of data that's just growing year after year.

Some of the things that we're running into are the effects of consolidation. For example, we end up trying to back up databases that are getting larger and larger. Some of the applications and servers that consolidate end up being more of a challenge for services such as backup and recovery. It's pretty common across the industry.

In our environment, we're running about 93,000-95,000 backups per week with an aggregate data volume of about 4 petabytes of backup data and 53,000 run-time hours. That's about 17,000 servers worth of backup across 14 petabytes of storage.

Gardner: Tell me a bit about applications. Is this a comprehensive portfolio? Do you do triage and take some apps and not others? How do you manage what to do with them and when?

Slew of applications

Dale: It's pretty much every application that HP's business is run upon. It doesn’t matter if it's enterprise warehousing or data warehousing or if it's internal things like payroll or web-facing front-ends like hp.com. It's the whole slew of applications that we have to manage.

Gardner: Tell me what the majority of these applications consist of.

Dale: Some of the larger data warehouses we have are built upon SAP and Oracle. You've got SQL databases and Microsoft Exchange. There are all kinds of web front-ends, whether it's with Microsoft IIS or any type of Apache. There are things like SharePoint Portal Services, of course, that have database back-ends that we back up as well. Those are just a few that come to mind.

Gardner: What are the major storage technologies that you are focusing on that you are directing at this fairly massive and distributed problem?

Dale: The storage technologies are managed across two different teams. We have a storage-focused team that manages the storage technologies. They're currently using HP Surestore XP Disk Array and EVA as well. We have our Fibre Channel networks in front of those. In the team that I work on, we're responsible for the backup and recovery of the data on that storage infrastructure.

We're using the Virtual Library Systems that HP manufactures as well as the Enterprise System Libraries (ESL). Those are two predominant storage technologies for getting data to the data protection pool.

Gardner: One of the other trends, I suppose, nowadays is that backup and recovery cycles are happening more frequently. Do you have a policy or a certain frequency that you are focused on, and is that changing?


Dale: That's an interesting question, because oftentimes you'll see some induced behavior. For example, we back up archive logs for databases, and often we'll see a large increase in those. As the volume and transactional growth goes up, you'll see the transactional log volume and the archive-log volume backups increase, because there's only so much disk space that they can house those logs in.

You can say the same thing about any transactional type of application, whether it's messaging, which is Exchange with the database, with transactional logs, SQL, or Oracle.

So, we see an increase in backup frequency around logs, not only to mitigate disk-space constraints but also to meet our RPO -- that is, how much data we can afford to lose if something like logical corruption should occur.
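The two pressures Dale names, disk space and RPO, can be sketched as simple arithmetic. The numbers below are invented for illustration; the point is that whichever constraint is tighter sets the log-backup interval.

```python
# Sketch of the two constraints driving archive-log backup frequency.
# All figures are hypothetical examples, not HP's actual numbers.

def max_log_backup_interval_hours(log_growth_gb_per_hour: float,
                                  log_disk_capacity_gb: float,
                                  rpo_hours: float) -> float:
    """Return the longest safe interval between archive-log backups."""
    # Constraint 1: logs must be swept before the log volume fills up.
    disk_limit = log_disk_capacity_gb / log_growth_gb_per_hour
    # Constraint 2: the interval can never exceed the recovery point
    # objective, or a failure could lose more data than allowed.
    return min(disk_limit, rpo_hours)

# A busy database: 40 GB of logs per hour, a 200 GB log volume, 4-hour RPO.
interval = max_log_backup_interval_hours(40, 200, 4)
print(interval)  # 4.0 -> here the RPO, not disk space, is the binding limit
```

As transaction volume grows, the first term shrinks, which is exactly the "induced behavior" of more frequent log backups described above.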

Gardner: Let's take a step back and focus on the historical lead-up to this current situation. It's clear that HP has had a lot of mergers and acquisitions over the past 10 years or so. That must have involved a lot of different systems and a lot of distribution of redundancy. How did you start working through that to get to a more comprehensive approach that you are now using?

Dale: Well, if I understand your question, are you talking about the effect of us taking on additional IT in consolidating, or are you talking about it from a product standpoint as well?

Gardner: No, mostly on your internal efforts. I know there's been a lot of product activities as well, but let's focus on how you manage your own systems first.

Simplify and reduce

Dale: One of the things that we have to do, at the scope and size that we manage, is simplify and reduce the amount of infrastructure -- really, the number of choices and configurations in our environment. Obviously, you won't find the complete suite of HP products in the portfolio that we're managing internally. We have to minimize how many different products we have.

One of the first things we had to do was simplify, so that we could scale to the size and scope that we have to manage. You have to standardize and simplify configuration and architecture as much as possible, so that you can continue to grow at scale.

Gardner: Lowell, what were some of the major challenges that you faced with those older backup systems? Tell me a bit more about this consolidation journey?

Dale: That's a good question as well. Some of the newer technologies we're adopting, such as virtual tape libraries, were among the things that we had to figure out. What was the use-case scenario for virtual tape? It's not easy to switch from old technology to something new and go 100 percent at it. So, we had to take a step-wise approach to how we adopted the virtual tape library and what we used it for.

We first started with a minimal number of use cases and, little by little, we started learning what it was really good for. We've evolved the use case even more, so that it will carry forward in our next-generation design. That's just one example.


Gardner: And that virtual tape is to replace physical tape. Is that right?

Dale: Yes, really to supplement physical tape. We're still using physical tape for certain scenarios where we need the data mobility to move applications or enable the migration of applications and/or data between disparate geographies. We'll facilitate that in some cases.

Gardner: You mentioned a little earlier on the whole issue of virtualization. You're servicing quite a bit more of that across the board, not just with applications, but storage and networks even.

Tell me a bit more about the issues of virtualization and how that provided a challenge to you, as you moved to these more consolidated and comprehensive storage and backup approaches?

Dale: One of the things with virtualization is that we saw something similar to what we did with storage and utility storage. We made it so that it was much cheaper than before and easy to bring up. It had the "If you build it, they will come" effect. So, one of the things that we may end up seeing is an increase in the number of operating systems (OSs) or virtual machines (VMs) out there. That's the opposite of the consolidation effect, where you have, say, 10 one-terabyte databases consolidated into one to reduce the overhead.

Scheduling overhead

With VMs increasing and the use case for virtualization increasing, one of the challenges is scheduling overhead tasks. It could be anything from a backup to indexing to virus scanning, and trying to find out what the limitations and the bottlenecks are across the entire ecosystem, so you know when to run certain overhead and not impact production.

That’s one of the things that’s evolving. We are not there yet, but obviously we have to figure out how to get the data to the data protection pool. With virtualization, it just makes it a little bit more interesting.

Gardner: Lowell, given that your target is moving -- as you say, you're a fast-growing company and the data is exploding -- how do you roll out something that is comprehensive and consolidating, when your target is a moving object in terms of scale and growth?

Dale: I talked previously about how we have to standardize and simplify the architecture and the configuration, so that when it comes time to build that out, we can do it in mass.

For example, quite a few years ago, it used to take us quite a while to bring up a backup infrastructure to meet a service need. Nowadays, we can bring up a fairly large environment, like an entire data center, within a matter of months, if not weeks. From there, the process moves toward how we facilitate setting up backup policies and schedules, and even that's evolving.


Right now, we're looking at ideas and ways to automate that, so that when a server plugs in, it basically configures itself. We're not there yet, but we are looking at that. Some of the things that we've improved upon are how we build out quickly and then set up the configurations, as business demand is converted into backup demand, storage demand, and network demand. We've improved quite a bit on that front.

Gardner: And what version of Data Protector are you using now, and what are some of the more interesting or impactful features that are part of this latest release?

Dale: Data Protector 6.11 is the current release that we are running and deploying in our next generation. Some of the features with that release that are very helpful to us have to do with checkpoint recoveries.

For example, if a backup or restore should fail, we have the ability with automation to go out and have it pick up where it left off. This has helped us in multiple ways. If you have a lot of data that you need to get backed up, you don't want to start over, because that's going to impact the next minute or the next hour of demand.

Not only that, but it's also helped us keep our backup success rates up and our ticket counts down. Instead of opening a ticket for somebody to go look at, it will attempt a checkpoint recovery a few times. Only after so many attempts do we bring light to the issue, so that someone has to look at it.
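The retry-then-escalate behavior Dale describes can be sketched in a few lines. This is a conceptual illustration, not Data Protector's actual interface; the exception class and function names are invented.

```python
# Conceptual sketch of checkpoint recovery: resume a failed backup from
# its last checkpoint a few times before escalating to a human.

class BackupInterrupted(IOError):
    """Raised by a backup job that failed partway; carries the resume point."""
    def __init__(self, checkpoint):
        super().__init__(f"interrupted at {checkpoint} bytes")
        self.checkpoint = checkpoint

def run_with_checkpoint_retries(backup_job, max_attempts=3):
    """Run backup_job(resume_from=...), resuming from the last checkpoint
    on each failure; only after max_attempts failures is a ticket raised."""
    checkpoint = 0
    for _ in range(max_attempts):
        try:
            return backup_job(resume_from=checkpoint)  # total bytes on success
        except BackupInterrupted as err:
            # The checkpoint survives the failure, so the next attempt
            # picks up where the last one left off instead of starting over.
            checkpoint = err.checkpoint
    raise RuntimeError(f"backup failed {max_attempts} times; opening a ticket")
```

A job that fails twice and then completes never re-copies the bytes already written, which is why success rates go up while ticket counts go down.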

Gardner: With this emphasis on automation over the manual, tell us about the impact that’s had on your labor issues, and if you’ve been able to take people off of these manual processes and move them into some, perhaps more productive efforts.

Raising service level

Dale: What it's enabled us to do is really bring our service level up. Not only that, but we're able to focus on other things that we weren't able to focus on before. One of those things is backup success.

Being able to bring that backup success rate up is key. Some of the things that we've done with the architecture and the product -- just the different ways of doing the process -- have helped with that backup success rate.

The other thing it's helped us do is build a team, which we didn't have before, that's focused on analytics, looking at events before they become incidents.

I'll use the analogy of a car that's about to break down, where the check-engine light comes on. We're able to go and look at that before the car breaks down. So, we're getting a little bit further ahead. We're going further upstream to detect issues before they actually impact our backup success rate or SLAs. Those are just a couple of examples.


Gardner: How many people does it take to run these petabytes of recovery and backup through your next-generation data center? Just give us a sense of the manpower.

Dale: On the backup-and-recovery and media-management side, we've got about 25 people in total, spread between engineering and operational activities.

Gardner: Let’s look at some examples. Can you describe a time when you’ve needed to do very quick or even precise recovery, and how did this overall architectural approach and consolidation efforts help you on that?

Dale: We've had several cases where we had to recover data by going back to the data protection pool. That happens monthly, in fact. We have a certain rate of restores that we do per month. Some of those are to mitigate data loss from logical corruption or accidental deletion.

But, we also find the service being used to do database refreshes. So, we’ll have these large databases that they need to make a copy of from production. They end up getting copied over to development or test.

The current technology we're using -- the current configuration, with the virtual tape libraries and the archive logs -- has really enabled us to get the data backed up quickly and restored quickly. That's been exemplified several times with database copies or database recoveries, when those types of events occur.

Gardner: I should think these are some very big deals, when you can deliver the recovered data back to your constituents, to your users. That probably makes their day.

Dale: Oh yes, it does save the bacon at the end of the day.

Gardner: Perhaps you could outline, in your thinking, the top handful of important challenges that Data Protector addresses for you at HP IT. What are the really important paybacks that you're getting?

Object copy

Dale: I've mentioned checkpoint recovery. There are also some things that we've been able to do with object copy that have allowed us to balance capacity between our virtual tape libraries and our physical tape libraries. In our first-generation design, we had enough capacity on the virtual libraries to hold only a subset of the total data.

Data Protector has a very powerful feature called object copy. That allowed us to maintain our retention of data across two different products or technologies. So, object copy was another one that was very powerful.
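The idea behind object copy, as described here, can be sketched with a toy planner. The retention periods, function name, and data model below are invented for illustration; they are not Data Protector's actual defaults or API.

```python
# Rough sketch of the object-copy pattern: every backup object gets a
# long-retention copy on physical tape, so the virtual tape library (VTL)
# only needs to hold a recent subset for fast restores.

from datetime import date, timedelta

VTL_RETENTION = timedelta(days=14)    # fast-restore window on the VTL (hypothetical)
TAPE_RETENTION = timedelta(days=365)  # long-term copy on physical tape (hypothetical)

def plan_object_copies(backup_objects, today):
    """For (name, backup_date) pairs, return the tape-copy jobs to schedule
    and the objects whose VTL copy can now expire, because the tape copy
    keeps the overall retention promise."""
    copy_jobs = [(name, "tape", backed_up + TAPE_RETENTION)
                 for name, backed_up in backup_objects]
    expirable = [name for name, backed_up in backup_objects
                 if today - backed_up > VTL_RETENTION]
    return copy_jobs, expirable
```

This is what lets two different storage technologies jointly honor one retention policy: recent data stays on disk for speed, while the tape copy carries the long tail.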

There are also a couple of things around the ability to do integration backups. In the past, we were using some technology that was very expensive in terms of disk space on our XPs, using split-mirror backups. Now, we're using the online integrations for Oracle and SQL, and we're also getting ready to add SharePoint and Microsoft Exchange.

Now, we're able to do online backups of these databases. Some of them are upwards of 23 terabytes. We're able to do that without any additional disk space and we're able to back that up without taking down the environment or having any downtime. That’s another thing that’s been very helpful with Data Protector.

Gardner: Lowell, before we wrap up, let's take a look into the future. Where do you see the trends pushing this now? I think we could safely say that there's going to still be more data coming down the pike. Are there any trends around cloud computing, mobile business intelligence, warehousing efforts, or real-time analysis that will have an impact on some of these products and processes?


Dale: With some of the evolving technologies and some of the things around cloud computing, at the end of the day, we'll still need to mitigate downtime, data loss, logical corruption, or anything that would jeopardize that business asset.

With cloud computing, if we're using the current technology today with peak base backup, we have to get the data copied over to a data protection pool. There would still be the same approach of trying to get at that data. To keep up with these emerging technologies, for example, maybe we'd approach data protection a little differently and spread the load out, so that it's somewhat transparent.

Some of the things we need to see, and may start seeing in the industry, are load management and how loads from different types of technologies talk to each other. I mentioned virtualization earlier. Some of the tools, with content-awareness and indexing, have overhead associated with them.

I think you're going to start seeing these portfolio products talking to each other. They can schedule when to run their overhead function, so that they stay out of the way of production. It’s just a couple of challenges for us.
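The coordination Dale anticipates, products scheduling their own overhead to stay out of production's way, can be illustrated with a greedy packer. All names and the window figures are invented for this sketch.

```python
# Illustrative sketch: overhead tasks (backup, indexing, virus scan) are
# packed into the window when production load is low; anything that will
# not fit is deferred rather than allowed to collide with production.

def schedule_overhead(tasks, quiet_start, quiet_end):
    """Pack (name, duration_hours) tasks into [quiet_start, quiet_end);
    returns scheduled (name, start, end) tuples plus the deferred names."""
    scheduled, deferred, cursor = [], [], quiet_start
    # Longest tasks first, so big jobs are not squeezed out by small ones.
    for name, hours in sorted(tasks, key=lambda t: -t[1]):
        if cursor + hours <= quiet_end:
            scheduled.append((name, cursor, cursor + hours))
            cursor += hours
        else:
            deferred.append(name)
    return scheduled, deferred

# Three overhead jobs competing for a 10 p.m. to 6 a.m. quiet window
# (hours 22 through 30 on a continuous clock).
done, waiting = schedule_overhead(
    [("backup", 4), ("virus-scan", 2), ("indexing", 3)], 22, 30)
```

In this toy run, the 4-hour backup and 3-hour indexing job fit the 8-hour window, and the virus scan is deferred to the next window instead of running into production hours.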

We're looking at new configurations and designs that consolidate our environment. We're looking at reducing our environment by 50 to 75 percent just by redesigning our architecture and freeing up resources that were tied up before. That's one goal we're working on right now. We're deploying that design today.

And then, there's configuration and capacity management. This is still evolving, so that we can maintain the service level we have today, keep that service level up, bring capital costs down, and keep down the number of people required to manage it.

Gardner: Great. I'm afraid we're out of time. We've been focusing on the challenges and progress of conducting massive and comprehensive backups of enterprise-wide data and applications and systems. We've been joined by Lowell Dale, a technical architect in HP's IT organization. Thanks so much, Lowell.

Dale: Thank you, Dana.

Gardner: And, thanks to our audience for joining us for this special BriefingsDirect podcast coming to you from the HP Software Universe 2010 Conference in Washington DC. Look for other podcasts from this HP event on the hp.com website under HP Software Universe Live podcast, as well as through the BriefingsDirect Network.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this series of HP-sponsored Software Universe live discussions. Thanks again for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast from the HP Software Universe Conference in Washington, DC on backing up a growing volume of enterprise data using HP Data Protector. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

You may also be interested in:

Delta Air Lines Improves Customer Self-Service Apps Quickly Using Quality Assurance Tools

Transcript of a BriefingsDirect podcast with Delta Air Lines development leaders on gaining visibility into application testing to improve customer self-service experience.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series, coming to you from the HP Software Universe 2010 Conference in Washington, D.C. We're here the week of June 14, 2010, to explore some major enterprise software and solutions trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions and I'll be your host throughout this series of HP sponsored Software Universe Live discussions.

Our customer case study today focuses on Delta Air Lines and the use of HP quality assurance products for requirements management as well as mapping the test cases and moving into full production. We are here with David Moses, Manager of Quality Assurance for Delta.com and its self service efforts. Thanks for joining us, David.

David Moses: Thank you, very much. Glad to be here.

Gardner: We're also here with John Bell, a Senior Test Engineer at Delta. Welcome John.

John Bell: Thank you.

Gardner: Tell me about the market drivers. What is the problem set when it comes to managing the development process around requirements and then quality and test out through your production? What are the problems that you're generally facing these days?

Moses: Generally, the airline industry, along with a lot of other industries I'm sure, is highly competitive. We have a very, very quick, fast-to-market type of environment, where we've got to get products out to our customers. We have a lot of innovation being worked on in the industry, and a lot of competing channels outside the airline industry that would also like to get at the same customer set. So, it's very important to be able to deliver the best products you can as quickly as possible. "Speed Wins" is our motto.

Gardner: What is it about the use of some of the quality assurance products that helps you pull off that dual trick of speed, but also reliability and high quality?

Moses: The one thing I really like about the HP Quality Center suite especially is that your entire software development cycle can live within that tool. Whenever you're using different tools to do different things, it becomes a little bit more difficult to get the data from one point to another. It becomes a little bit more difficult to pull reports and figure out where you can improve.

Data in one place

What you really want to do is get all your data in one place, and Quality Center allows you to do that. We put our requirements in at the beginning. By having those in the system, we can then map to those with our test cases, after we build those in the testing phase.

Not only do we have the QA engineers working in Quality Center, we also have the business analysts working in it when they're doing the requirements. That also helps the two groups work together a bit more closely.

Gardner: Do you have anything to add to that, John?

Bell: The one thing that's been very helpful is the way that the Quality Center tabs are set up. It allows us to follow a specific process, looking at the release level all the way down to the actual cycles, and that allows us to manage it.

It's very nice that Quality Center has it all tied into one unit. So, as we go through our processes, we're able to go from tab to tab and we know that all of that information is interconnected. We can ultimately trace a defect back to a specific cycle or a specific test case, all the way back to our requirement. So, the tool is very helpful in keeping all of the information in one area, while still maintaining the consistent process.
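The traceability chain John describes, from a defect back through its test case to the original requirement, can be pictured as a simple linked data model. This is an illustrative sketch only; the class and field names are hypothetical and do not reflect the actual Quality Center schema or API:

```python
from dataclasses import dataclass


@dataclass
class Requirement:
    req_id: str
    title: str


@dataclass
class TestCase:
    case_id: str
    name: str
    covers: list  # the Requirements this test case maps to


@dataclass
class Defect:
    defect_id: str
    summary: str
    found_in: TestCase  # the test case that uncovered the defect


def trace_to_requirements(defect):
    """Walk a defect back to the requirements its test case covers."""
    return [r.req_id for r in defect.found_in.covers]


req = Requirement("REQ-101", "Kiosk check-in supports seat changes")
case = TestCase("TC-17", "Change seat at kiosk", covers=[req])
bug = Defect("DEF-9", "Seat map fails to load", found_in=case)

print(trace_to_requirements(bug))  # ['REQ-101']
```

The payoff of keeping everything in one tool is exactly this kind of walk: any defect can be traced directly to the business requirement it puts at risk.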

Gardner: Can you give us a sense of how much activity you process or how many applications there are -- the size of the workload you’ve got these days?

Bell: There is a lot. I look back to metrics we pulled for 2008. We were doing fewer than 70 projects. By 2009, after we had fully integrated Quality Center, we did over 129 projects. That also included a lot of extra work, which you may have heard about us doing related to a merger.

Gardner: With that increase in the number of applications that you're managing and dealing with, did you have any metrics in terms of the quality that you were able to manage, even though that volume increased so dramatically?

Moses: We were able to do that. That's one of the nice things. You can use your dashboard in Quality Center to pull those metrics up and see those reports. You can point out the projects that were your most troublesome children and look at the projects where you did really well.

Best-case scenario

You can go back and do a best-case scenario, and see what you did great and what you could improve. Having that view into it really helps. It’s also beneficial, whenever you have another project similar to one that was such an issue. You can have a heads up to say, "Okay, we need to treat this one differently this time."

Gardner: It’s the visibility to have repeatability when things go well, and, I suppose, visibility to avoid repeatability when things didn't go well.

Moses: Exactly.

Gardner: Let’s take a look at some of the innovation you've done. Tell me a bit about what you've done with Quality Center in terms of your own integration or tweaking.

Bell: One thing that we've been able to do with Quality Center is connect it with Quick Test Pro, and we do have Quality Center 10, as well as Quick Test Pro 10. We've been able to build our automation and store those in the Test Plan tab of Quality Center.

This has really been beneficial for us when we go into our test labs and build our test sets. We're able to take all of these automated pieces and combine them into one test set. What this has allowed us to do is run all of our automation as one test set. We've been able to run those on a remote box. It's taken our regression test time from one person for five days down to zero people and approximately an hour and 45 minutes.

Also, with the Test Lab tab, we're able to schedule these test sets to run during off hours. A lot of times, our automation for things such as regression or sanity checks can run during off hours. We schedule those to run at perhaps 6 o'clock in the morning. Then, when we come in at 8 o'clock in the morning, all of those tests have already run.

That frees up our testers to be doing more of the manual functional testing and that allows us to know that we have complete coverage with the automation, as well as our sanity pieces. So, that's a unique way that we've used Quality Center to help manage that and to reduce our testing times by over 50 percent.
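The workflow John outlines, grouping automated scripts into a single test set and running it unattended, can be sketched roughly as follows. The test names here are hypothetical stand-ins; in practice, Quality Center's Test Lab tab drives the scheduling and remote execution:

```python
import time


# Hypothetical stand-ins for automated scripts stored in the Test Plan tab.
def test_checkin_flow():
    return "pass"


def test_seat_change():
    return "pass"


def run_test_set(tests):
    """Run every automated test in one set, unattended, and collect results."""
    results = {}
    start = time.time()
    for test in tests:
        results[test.__name__] = test()
    elapsed = time.time() - start
    return results, elapsed


# Combine the automated pieces into one regression test set and run it.
regression_set = [test_checkin_flow, test_seat_change]
results, elapsed = run_test_set(regression_set)
print(results)  # {'test_checkin_flow': 'pass', 'test_seat_change': 'pass'}
```

The point of the pattern is that nobody sits and watches: the whole set runs as one unit, off hours, and the results are waiting in the morning.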

Gardner: Thank you, John. David, there have been some ways in which your larger goals as a business have been either improved upon or perhaps better aligned with the whole development process. I guess I'm looking for whether there is some payback here in terms of your larger business goals?

Moses: It definitely is. It goes back to speed to market with new functionality and making the customer's experience better. In all of our self-service products, it's very important that we test from the customers’ point of view.

We deliver those products that make it easier for them to use our services. That's one of the things that always sticks in my mind, when I'm at an airport, and I'm watching people use the kiosk. That's one of the things we do. We bring our people out to the airports and we watch our customers use our products, so we get that inside view of what's going on with them.

A lot on the line

I'll see people hesitantly reaching out to hit a button. Their hand may be shaking. It could be an elderly person. It could be a person with a lot on the line. Say it’s somebody taking their family on vacation. It's the only vacation they can afford to go on, and they’ve got a lot of investment into that flight to get there and also to get back home. Really there's a lot on the line for them.

A lot of people don’t know a lot about the airline industry and they don’t realize that it's okay if they hit the wrong button. It's really easy to start over. But, sometimes they would be literally shaking, when they reach out to hit the button. We want to make sure that they have a good comfort level. We want to make sure they have the best experience they could possibly have. And, the faster we can deliver products to them, that make that experience real for them, the better.

Gardner: I should think the whole notion of self-service is hugely important. It's important for the customer to be able to move through and do things their way, and I suppose there are some great cost savings and efficiencies on your end as well.

Dave, could you highlight a little bit how the whole notion of self-service gets embedded into applications, and how some of the quality assurance tools and processes have helped there?

Moses: I go back to any time you have to give up when you're having an issue with a product while you're online. You're on a website, and you have to call customer service. I think most people just sort of feel defeated at that point. People like to handle things themselves. You need a channel there for the customer to go to, if they need additional help.

So many clients and customers these days are so tech savvy. They know the industry they are in, and they know the tools they're working with, especially frequent flyers. I'd venture to say that most frequent flyers can hit the airport, check in, get through security, and get to their plane really quickly. They just know their airports and they know everything they need to know about their flight, because this is where they live part of their lives.

You don't want to make them wait in line. You don't want to make them wait on a phone tree, when they make a phone call. You want them to be able to walk into the airport, hit a couple of buttons, get through security, and get to their gate.

By offering these types of products to the customers, you give them the best of both worlds. You give them a fast pass to check in. You give them a fast pass to book. But you can also give the less-experienced customer an easy-to-understand path to do what they need as well.

Gardner: And, to get those business benefits, those customer loyalty benefits, is really a function of good software development overall, isn't it?

Moses: Exactly. You have to give the customer the right tools that they want to get the job done for them.

Gardner: For other enterprises that are perhaps going to be working toward a higher degree of quality in their software, but are probably also interested in reducing the time to develop and time to value, do you have any suggestions, now that you’ve gone through this, that you might offer to them?

Incremental approach

Bell: In using Quality Center, we've taken an incremental approach. Initially, we just used the Defects tab of Quality Center. Then, we slowly began to add the Requirements piece, and then Test Cases, and ultimately the Releases and Cycles.

One thing that we've found to be very beneficial with Quality Center is that it shows the development organization that this just isn't a QA tool that a QA team uses. What we've been able to do by bringing the requirements piece into it and by bringing the defects and other parts of it together, is bring the whole team on board to using a common tool.

In the past, a lot of people have always thought of Quality Center as just a little tool that the QA people use in the corner and nobody else needs to be aware of. Now, we have our business analysts, project managers, and developers, as well as the QA team and even managers, because each person can get a different view of different information.

From the Dashboard, your managers can look at your trends and see what kind of overall development lifecycle is coming through. Your project managers can be very involved in pulling the number of defects and seeing which ones are still outstanding and how critical they are. The developers can be involved by entering information on defects when those issues have been resolved.

We've found that Quality Center is actually a tool that has drawn together all of the teams. They're all using a common interface, and they all start to recognize the importance of tying all of this together, so that everyone can get a view as to what's going on throughout the whole lifecycle.

Moses: John hits on a really good point there. You have to realize the importance of it, and we did a long time ago. We've realized the importance of automating and we've realized the importance of having multiple groups using the same tool.

In all honesty, we were just miserable in our own history of trying to get those to work. You really take certain shots at it. For the past eight years, if we can go back that far, we've been using Quality Center tools, going back to Test Director, just trying to get things automated, using the tools we had at the time.

The one thing that we never actually did was dedicate the resources. It's not just a tool. There are people there too. There are processes. There are concepts you're going to have to get in your head to get this to work, but you have to be willing to buy in by having the people resources dedicated to building the test scripts. Then, you're not done. You've got to maintain them. That's where most people fall short, and that's where we fell short for quite some time.

Once we were able to finally dedicate the people to the maintenance of these scripts to keep them active and running, that's where we got a win. If you look at a web site these days, it's following one of two models. You either have a release schedule, that’s a more static site, or you have a highly dynamic site that's always changing and always throwing out improvements.

We fit into that "Speed Wins" model, where we get the product out to the customers and improve the experience as often as possible. So, we're a highly dynamic site. We'll break up to 20 percent of all of our test scripts, all of our automated test scripts, every week. That's a lot of maintenance, even though we're using a lot of reusable code. You have to have those resources dedicated to keep that going.
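One common way teams contain that kind of weekly script breakage, in line with David's point about reusable code, is the page-object pattern: each screen's locators live in one class, so a UI change gets fixed in one place rather than across dozens of scripts. A minimal sketch follows; the class names and locator IDs are illustrative, not Delta's actual framework:

```python
class KioskCheckInPage:
    """Page object: all locators for the check-in screen live here.

    When the UI changes, only this class is updated; the test
    scripts that use it keep working unchanged.
    """
    CONFIRM_BUTTON = "btn_confirm"  # hypothetical locator IDs
    SEAT_MAP = "div_seat_map"

    def __init__(self, driver):
        self.driver = driver

    def confirm(self):
        return self.driver.click(self.CONFIRM_BUTTON)


class FakeDriver:
    """Minimal stand-in for a UI automation driver."""
    def click(self, locator):
        return f"clicked {locator}"


page = KioskCheckInPage(FakeDriver())
print(page.confirm())  # clicked btn_confirm
```

A test script only ever calls `page.confirm()`, so when the confirm button's locator changes, one constant is edited instead of every script that touches the screen.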

Gardner: Well, I appreciate your time. We've been talking about the quality assurance process and the use of some HP tools. We've been learning about experiences from Delta Air Lines development executives. I want to thank our guests today, David Moses, Manager of Quality Assurance for Delta.com in the self-service function there. Thank you, David.

Moses: Thank you, very much.

Gardner: We've also been joined by John Bell, Senior Test Engineer there at Delta Air Lines. Thanks to you too, John.

Bell: It's been a pleasure.

Gardner: And, thanks to our audience for joining us for this special BriefingsDirect podcast coming to you from the HP Software Universe 2010 conference in Washington, DC.

Look for other podcasts from this HP event on the hp.com website, as well as via the BriefingsDirect Network.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this series of Software Universe Live Discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast with Delta Air Lines development leaders on gaining visibility into application testing to improve customer self-service experience. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

You may also be interested in: