
Tuesday, June 28, 2011

Discover Case Study: Health Care Giant McKesson Harnesses HP ALM for Data Center Transformation and Dev-Ops Performance Improvement

Transcript of a BriefingsDirect podcast from HP Discover 2011 on how McKesson has migrated data centers into fewer locations, while improving overall metrics of applications performance.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the HP Discover 2011 conference in Las Vegas. We're here on the Discover show floor the week of June 6 to explore some major enterprise IT solutions, trends, and innovations making news across HP’s ecosystem of customers, partners, and developers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout this series of HP-sponsored Discover live discussions.

We're now going to focus on McKesson Corp., and how they're improving their operations and reducing their mean time to resolution. We'll also explore applications quality assurance, test, and development, and how they're progressing on modernizing those efforts as well.

We might even get into a bit of how these come together for an application lifecycle management and dev-ops benefit. Here to help us understand these issues better -- and their experience and success -- is Andy Smith, Vice President of Application Hosting Services at McKesson. Welcome, Andy.

Andy Smith: Thank you.

Gardner: We're also here with Doug Smith, Vice President of Data Center Transformation at McKesson. Welcome, Doug.

Doug Smith: Thank you, Dana.

Gardner: First, we might want to get people familiar, if they are not already, with McKesson. Andy Smith, tell us a little bit about McKesson, the type of organization you are, and the extent. It’s quite a large organization you have for IT activities there as well.

Andy Smith: McKesson is a Fortune 15 healthcare company operating primarily in three areas: nurse call centers, medical and pharmaceutical distribution, and healthcare software development.

Gardner: And, you have a very large and distributed IT organization. I've heard about it before, but let’s go through that a little bit again if you don’t mind.

Andy Smith: It’s a very federated model. Each business unit has its own IT department responsible for the applications, and in some cases, their own individual data centers. Through Doug’s data center transformation program, we've been migrating those data centers into fewer corporate locations, and I'm responsible for running the infrastructure in those corporate locations.

Gardner: Andy, tell us about what you've been doing in order to get to faster time to market for your services, meeting your service level agreement (SLA) obligations internally, and how you reduce your mean time to resolution. What’s been the story there?

Improving processes

Andy Smith: What we've been doing over a little more than two years is improving our processes in line with ITIL v3. We focused heavily on change management, event management, and configuration management. At the same time, in parallel, we introduced the HP tool suite for monitoring, configuration management, asset management, and automation.

What we've seen through the improvement in the processes and the improvement in the tools has been a marked improvement in all of our metrics. We've seen a drop in our Tier 1 outages of 54 percent during the last couple of years, as we implemented these tools. We've got three years' worth of metrics now, and every year the numbers have come down compared to the prior year. We've also seen an 86 percent drop in the breaches of those Tier 1 SLAs.
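[Editor's note: to make the arithmetic behind figures like these concrete, here is a minimal Python sketch. The yearly counts are made up, chosen only so that the resulting percentages resemble the ones quoted above; they are not McKesson's actual numbers.]

```python
def percent_reduction(before: int, after: int) -> float:
    """Return the percentage drop from a baseline count to a later count."""
    return (before - after) / before * 100.0

# Hypothetical yearly counts, oldest year first; illustrative only.
tier1_outages = {"year 1": 120, "year 2": 88, "year 3": 55}
sla_breaches = {"year 1": 50, "year 2": 21, "year 3": 7}

for label, counts in (("Tier 1 outages", tier1_outages),
                      ("Tier 1 SLA breaches", sla_breaches)):
    years = list(counts)  # insertion order: oldest to newest
    drop = percent_reduction(counts[years[0]], counts[years[-1]])
    print(f"{label}: {counts[years[0]]} -> {counts[years[-1]]} ({drop:.0f}% reduction)")
```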

Gardner: That’s very impressive. Doug Smith, tell us what you've been doing with data center transformation and how you're working toward a higher level of quality with the test development and the upfront stages of applications?

Doug Smith: Well, Dana, we've been on this road of transformation now for about three and a half years. In the beginning, we focused on our production environments, which generally consist of fairly predictable workloads across multiple business units, and as Andy mentioned, quite a variety of models, actually. In the past, the business units had a great deal of autonomy in how they managed their infrastructure.

The first thing was to pull together the infrastructure and go through a consolidation exercise, as well as an optimization of that infrastructure. There we focused heavily on virtualization, as well as optimization of our storage environment, and to Andy’s point around process, heavily invested in process improvement.

A couple of years into this, we began to look at our development environment. McKesson has several thousand developers globally, and these developers are spread across multiple product sets in multiple countries.

If you think about our objectives around security, quality, and agility, we look to continue to take advantage, both from an infrastructure perspective as well as a tools perspective, in how we can facilitate our developers through a more rapid development cycle, more securely, and with higher quality outcomes for our customers.

Gardner: So, it sounds as if both of you have relied increasingly on automation, integration, and federation for many of the products that support these activities. Is there anything in particular, at a philosophical level, about why managing and governing across multiple products with common governance and management capabilities is so important? Let’s start with you, Andy.

Andy Smith: When we first started looking at new tools, we recognized that we had a lot of point solutions that may have been best-in-breed, but they were a standalone solution. So, we weren’t getting the full benefits of the integration. As we looked at the next generation of tools, we wanted a tool suite that was fully integrated, so that the whole was better than the sum of the parts is probably the best way to put it.

We felt HP had progressed the farthest of all the competition in generating that full suite of tools to manage a data center environment. And, we believe we're seeing the benefits of that, because all these tools are working together to help improve our SLAs and shorten those mean times to restore.

Gardner: Doug Smith, any thoughts on that same level of the whole greater than the sum of the parts?

Governance in place

Doug Smith: Absolutely. It's not unique, but in a large business like McKesson, run as a federation, we have businesses that retain their autonomy and their decision-making. The key is to have that governance in place to highlight the opportunity at an enterprise level to say that if we make the investments, if we coordinate our activities, and if we pull together, we actually can achieve outcomes greater than we could individually.

Gardner: Doug Smith, you've been using the application development function as a first step toward a larger data center transformation effort, and you've been an early adopter for that set of applications.

At the same time, Andy Smith has been involved with trying to make operations run more smoothly. Do these come together? Is there a better ability to create an end-to-end process for development and operations and perhaps provide a feedback loop among and between them?

This is sort of a dev-ops question. Andy Smith, how does that strike you? Is there something even greater, maybe perhaps a greater whole among the sum of even more parts?

Andy Smith: I believe so, because for the products that McKesson develops and sells to the healthcare industry, in many cases, we're also hosting them within our data centers as an application service provider.

And the bigger sum of the whole, to me, is the fact that I can take the testing scripts that were used to develop the products and use those in the BAC Suite to test and monitor the application as it runs in production. So, we're able to share that testing data and testing schemas in the production world to monitor the live product.
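[Editor's note: a minimal Python sketch of the idea Andy describes, reusing a functional test as a synthetic monitor against the hosted production application. The URL, the test steps, and the SLA threshold are all hypothetical, and this is not the HP BAC scripting interface, only an illustration of the technique.]

```python
# Reuse a pre-release functional check as a production synthetic monitor.
# Endpoint and threshold below are hypothetical placeholders.
import time
import urllib.request

PROD_URL = "https://hosted-app.example.com/login"   # hypothetical hosted application
SLA_SECONDS = 3.0                                    # hypothetical response-time SLA

def login_page_test(base_url: str) -> float:
    """The same check QA might run before release: fetch the login page and time it."""
    start = time.monotonic()
    with urllib.request.urlopen(base_url, timeout=10) as resp:
        assert resp.status == 200, f"unexpected status {resp.status}"
        body = resp.read()
        assert b"login" in body.lower(), "login form not found"
    return time.monotonic() - start

if __name__ == "__main__":
    elapsed = login_page_test(PROD_URL)
    status = "OK" if elapsed <= SLA_SECONDS else "SLA BREACH"
    print(f"login page responded in {elapsed:.2f}s [{status}]")
```

A monitoring suite would typically run a script like this on a schedule and alert on repeated breaches; the point is that the test logic itself is shared between development and operations.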

Gardner: Doug Smith, thoughts on the same dev-ops benefit? How does that strike you?

Doug Smith: As you look across product groups and our ability to scale this, and with Andy’s capability that he is developing and delivering on, you really see an opportunity for a company like McKesson to continue to deliver on its mission to improve the health of the businesses that we serve in healthcare. And, we can all relate to the benefits of driving out cost and increasing efficiency in healthcare.

So, at the highest level, if we can facilitate a faster and more agile development process for the folks who are delivering software and services in our organization, and also help them build a foundation and a layer where they can talk to each other and build additional value-added services for our customers on top of it, then we have something that really can have an impact for all of us.

Gardner: Well, very good. Thank you for sharing that. I want to thank our guests. We've been here talking about the benefits of better tools for operations, as well as application development and hosting, and sharing their experience has been Andy Smith. He is the Vice President of Application Hosting Services at McKesson. Thanks so much, Andy.

Andy Smith: Thank you.

Gardner: And also Doug Smith, Vice President of Data Center Transformation at McKesson. Thank you, Doug.

Doug Smith: Thank you, Dana.

Gardner: And thanks to our audience for joining this special BriefingsDirect podcast coming to you from the HP Discover 2011 Conference in Las Vegas. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this series of user experience discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast from HP Discover 2011 on how McKesson has migrated data centers into fewer locations, while improving overall metrics of applications performance. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.


Thursday, January 06, 2011

Case Study: How McKesson Develops Software Faster and Better with Innovative Use of New HP ALM 11 Suite

Transcript of a sponsored BriefingsDirect podcast, part of a series on application lifecycle management and HP ALM 11 from the recent HP Software Universe 2010 conference in Barcelona.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series, coming to you in conjunction with the HP Software Universe 2010 Conference last month in Barcelona.

We're here to explore some major enterprise software and solutions, trends and innovations, making news across HP’s ecosystem of customers, partners, and developers. [See more on HP's new ALM 11 offerings.]

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host throughout this series of Software Universe Live discussions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Our customer case study today focuses on McKesson and how their business has benefited from advanced application lifecycle management (ALM). To learn more about McKesson's innovative use of ALM and its early experience with HP's new ALM 11 release, I'm here with Todd Eaton, Director of ALM Tools and Services at McKesson. Welcome, Todd.

Todd Eaton: Thanks, Dana.

Gardner: I know you've been involved with ALM for quite some time, but what is it about ALM now in your business that makes it so important and beneficial?

Eaton: In our business at McKesson, we have various groups that develop software, not only for internal use, but also for external use by our customers, as well as software that we sell. We have various groups within McKesson that use the centralized tools, and the ALM tools are pretty much their lifeblood. As they go through the process to develop the software, they rely heavily on our centralized tools to help them make better software faster.

Gardner: Is ALM something you use within the groups -- and then also to bind those groups; that is to say, there is a tactical ... and then even strategic benefit as well?

Eaton: Yes. The ALM suite that HP came out with is definitely giving us a bigger view. We've got QA managers that are in the development groups for multiple products, and as they test their software and go through that whole process, they're able to see holistically across their product lines with this.

We've set up projects with the same templates. With that, they have some cohesion and they can see how their different applications are going in an apples-to-apples comparison, instead of like the old days, when they had to manually adjust the data to try to figure out what their world was all about.

Gardner: At this point, are there any concrete benefits, either in terms of business benefits, or in the IT application development side of the business that you can point to that these ALM innovations have supported?

Better status

Eaton: There are a couple of them. When HP came up with ALM 11, they took Quality Center and Performance Center and brought them together. That's the very first thing, because it was difficult for us and for the QA managers to see all of the testing activities. With ALM, they're able to see all of it and better gauge where they are in the process. So, they can give their management or their teams a better status of where we are in the testing process and where we are in the delivery process.
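[Editor's note: as a rough illustration of that consolidated view, here is a small Python sketch that merges functional (Quality Center-style) and performance (Performance Center-style) results into one per-project summary. The project names and counts are hypothetical, and this is not the ALM 11 API.]

```python
# Merge functional and performance test results into one status summary per project.
# All data below is made up for illustration.
from collections import defaultdict

functional_runs = [
    {"project": "Billing", "passed": 180, "failed": 12},
    {"project": "Claims",  "passed": 95,  "failed": 3},
]
performance_runs = [
    {"project": "Billing", "passed": 8, "failed": 2},
    {"project": "Claims",  "passed": 5, "failed": 0},
]

summary = defaultdict(lambda: {"passed": 0, "failed": 0})
for run in functional_runs + performance_runs:
    summary[run["project"]]["passed"] += run["passed"]
    summary[run["project"]]["failed"] += run["failed"]

for project, counts in summary.items():
    total = counts["passed"] + counts["failed"]
    print(f"{project}: {counts['passed']}/{total} tests passing "
          f"({counts['passed'] / total:.0%})")
```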

The other really cool thing that we found was the Sprinter function. We haven't used it as much within McKesson, because we have very specific testing procedures and processes. Sprinter is used more as you're doing ad hoc testing. It will record that so you can go back and repeat those.

How we see that being used is by extending that to our customers. When our customers are installing our products and are doing their exploratory testing, which is what they normally do, we can give them a mechanism to record what they are doing. Then, we can go back and repeat that. Those are a couple of pretty powerful things in the new release that we plan to leverage.
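[Editor's note: Sprinter captures UI actions directly; the generic Python sketch below only illustrates the record-and-replay idea, logging named actions during an exploratory session so the same sequence can be replayed later. All action names and screens are hypothetical.]

```python
# Generic record-and-replay sketch for exploratory testing (not the Sprinter API).
import json
from typing import Callable, Dict, List

class ExploratorySession:
    def __init__(self) -> None:
        self.recording: List[Dict] = []

    def do(self, action: str, **params) -> None:
        """Perform (or stand in for) an action and record it for later replay."""
        self.recording.append({"action": action, "params": params})
        print(f"performed {action} {params}")

    def save(self, path: str) -> None:
        with open(path, "w") as f:
            json.dump(self.recording, f, indent=2)

def replay(path: str, handlers: Dict[str, Callable]) -> None:
    """Re-run a recorded session, e.g. back at R&D to reproduce a customer issue."""
    with open(path) as f:
        for step in json.load(f):
            handlers[step["action"]](**step["params"])

# A customer explores a hypothetical billing screen; the steps are captured.
session = ExploratorySession()
session.do("open_screen", name="billing")
session.do("search_patient", patient_id="12345")
session.save("exploratory_session.json")

replay("exploratory_session.json",
       {"open_screen": lambda name: print("replay open", name),
        "search_patient": lambda patient_id: print("replay search", patient_id)})
```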

Gardner: How would you describe the problem that we need to solve here? Is this a problem of communication, of measurement, perhaps workflow management, or all the above? How would you characterize what's wrong with how application development has been done? I don't mean to point to you as falling short on this at all. This is a general issue, but what is the problem that you think ALM is really addressing?

Eaton: That's a good point. When we're meeting at various conferences and such, there's a common theme that we hear. One is workflow. That's a big piece. ALM goes a long way toward conquering the various workflows. Within an organization, there will be various workflows in use, but you're still able to bring up those measurements, the other point you raised, and have a fairly decent comparison.

With the various workflows in the past, there used to be a real disparate way of looking at how software is being developed. But with ALM 11, they're starting to bring that together more.

The other piece of it is the communication, and having the testers communicate directly to those development groups. There is a bit of "defect ping-pong," if you will, where QA will find a defect and development will say that it's not a defect. It will go back and forth, until they get an agreement on it.

ALM is starting to close that gap. We're able to push out the use of ALM to the development groups, and so they can see that. They use a lot of the functions within ALM 11 in their development process. So, they can find those defects earlier, verify that those are defects, and there is less of that communication disconnect between the groups.

Gardner: It sounds like it’s beginning to quicken the pace of how you go about these things, but in addition to that, are you exploiting agile development practices, and is this something that's helping you if you are?

Eaton: We have several groups within our organization that use agile development practices. What we're finding is that the way they're doing work can integrate with ALM 11. The testing groups still want to have an area where they can put their test cases, do their test labs, run through their automation, and see that holistic approach, but they need it to work with the other agile tools that are out there.

It's integrating well with it so far, and we're finding that it lends itself to that story of how those things are being done, even in the agile development process.

Gardner: You're a large organization, a large healthcare services provider. Maybe you could tell us a little bit about McKesson, where you're based, and the size and extent of your application development organization.

Company profile

Eaton: McKesson is a Fortune 15 company. It is the largest healthcare services company in the U.S. We have quite a few R&D organizations, and they span our two major divisions, McKesson Distribution and McKesson Technology Solutions.

In our Quality Center, we have about 200 projects with a couple of thousand registered users. We're averaging probably about 500 concurrent users every minute of the day, following the sun, as we develop. We have development teams not only in the U.S., but nearshore and offshore as well.

We're a fairly large organization, very mature in our development processes. In some groups, we have new development, legacy, maintenance, and such. So, we run the gamut of all the different types of development that you could find.

Gardner: Well, that's interesting, because I wanted to explore the size of the organization. It sounded a moment ago as if you were able to support different styles, different cultures, different maturity levels, as you have mentioned, among and between these different parts of your development cycle, all using the same increasingly centralized ALM approach. Is that fair?

Eaton: Yeah, that's fair. That's what we strive for. In my group, we provide the centralized R&D tools. ALM 11 is just one of the various tools that we use, and we always look for tools that will fit multiple development processes.

We also make sure that it covers the various technology stacks. You could have Microsoft, Java, Flex, Google Web Toolkit, that type of thing, and the tools have to fit that. You also talked about maturity and the various maturity models, be it CMMI or ITIL, and, when you get into our world, we also have to take FDA regulations into consideration.

When we look at tools, we look at those three and at deployment. Is this going to be used internally, is this going to be hosted and used by an external customer, or are we going to package this up and send it out for sale?

We need tools that span those four different dimensions and can adapt to each one of them. If I'm a Microsoft shop that's doing agile for internally developed software, and I'm CMMI, that's one combination. But I may have a group right next door that's doing waterfall development on Java, is more ITIL-based, and deploys to a hosted environment.

They have to adapt to all that, and we needed to have tools that do that, and ALM 11 fits that bill.
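[Editor's note: a small Python sketch of the dimensions Todd lists, technology stack, process, governance model, and deployment, captured as a project profile and checked against what a tool supports. The field values and the fit check are illustrative, not McKesson's actual evaluation criteria.]

```python
# Illustrative project-profile model for judging whether a centralized tool fits a group.
from dataclasses import dataclass

@dataclass
class ProjectProfile:
    stack: str        # e.g. "Microsoft", "Java", "Flex", "Google Web Toolkit"
    process: str      # e.g. "agile", "waterfall"
    governance: str   # e.g. "CMMI", "ITIL", "FDA-regulated"
    deployment: str   # e.g. "internal", "hosted", "packaged for sale"

def tool_fits(profile: ProjectProfile, supported: dict) -> bool:
    """A tool fits only if it can adapt to every dimension of the profile."""
    return all(getattr(profile, dim) in values for dim, values in supported.items())

# The two neighboring groups from the example above.
internal_ms = ProjectProfile("Microsoft", "agile", "CMMI", "internal")
hosted_java = ProjectProfile("Java", "waterfall", "ITIL", "hosted")

alm_support = {
    "stack": {"Microsoft", "Java", "Flex", "Google Web Toolkit"},
    "process": {"agile", "waterfall"},
    "governance": {"CMMI", "ITIL", "FDA-regulated"},
    "deployment": {"internal", "hosted", "packaged for sale"},
}

for p in (internal_ms, hosted_java):
    print(p, "->", "fits" if tool_fits(p, alm_support) else "does not fit")
```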

Gardner: So, it's the benefits of decentralized and the benefits of centralized in terms of the system-of-record approach, having at least a metaview of what's going on, even though there is still flexibility at the edge.

Eaton: Correct. ALM 11 had a good foundation. The test cases, the test set, the automated testing, whether functional or performance, the source of truth for that is in the ALM 11 product suite. And, it's fairly well-known and recognized throughout the company. So, that is a good point. You have to have a source of truth for certain aspects of your development cycle.

Gardner: Of course, your industry has a significant level of regulation and compliance issues. Is ALM 11 something that's been a benefit in that regard?

Partner tools

Eaton: It has been a benefit. There are partner tools that go along with ALM 11 that help us meet those various regulations. Something that we're always mindful of, as we develop software, is not only watching out for the benefit of our customers and for our shareholders, but also we understand the regulations. New ones are coming out practically every day, it seems. We try to keep that in mind, and the ALM 11 tool is able to adapt to that fairly easily.

Gardner: You've been an early adopter. You've implemented certain portions of ALM 11, and you have a great deal of experience with ALM as a function. Looking back on your experience, what would you offer as advice to someone who might just be getting their feet wet in regard to either ALM or specifically ALM 11?

Eaton: When I talk to other groups about ALM 11 and what they should be watching out for, I tell them to have an idea of how their world is. Whether you're a real small shop or a large organization like us, there are characteristics that you have to understand. Just as I identify those different stacks of things to watch out for, they need to keep in mind the pieces of their organization that the tool has to adapt to. As long as they understand that, they should be able to adapt the tool to their processes and to their stacks.

Most of the time, when I see people struggling, it's because they couldn’t easily identify, "This is what we are, and this is what we are dealing with." They usually make midstream corrections that are pretty painful.

Gardner: And your title is interesting to me, Todd: Director of ALM Tools and Services. This is an organizational question, I suppose. Do you think it is a good policy, now that you have had experience in this, to actually devote an individual or maybe a team just to overseeing the ALM tools, which in effect oversee the ALM process?

Eaton: That's an interesting point, and something that we've done at McKesson that appears to work out really well. When I deal with various R&D vice presidents and directors, and testing managers and directors as well, the thing that they always come back to is that they have a job to do. And one of the things they don't want to have to deal with is trying to manage a tool.

They've got things that they want to accomplish and that they're driven by: performance reviews, revenue, and that type of thing. So, they look to us to be able to offload that, and to have a team to do that.

McKesson, as I said, is fairly large, thousands of developers and testers throughout the company. So, it makes sense to have a fairly robust team like us managing those tools. But, even in a smaller shop, having a group that does that -- that manages the tools -- can offload that responsibility from the groups that need to concentrate on creating code and products.

Gardner: Well, great. Thank you for sharing your experiences. We've been hearing about ALM best practices and the use of HP's new ALM 11 by an early adopter and his experience, Todd Eaton, Director of ALM Tools and Services at McKesson. Thank you, Todd.

Eaton: You're welcome, Dana. It was nice talking to you.

Dana Gardner: I want to thank also our listeners for joining the special BriefingsDirect podcast, coming to you in conjunction with the HP Software Universe 2010 Conference.

Look for other podcasts from this event on the hp.com website, as well as via the BriefingsDirect network.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this series of Software Universe Live discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a sponsored BriefingsDirect podcast, part of a series on application lifecycle management and HP ALM 11 from the HP Software Universe 2010 conference in Barcelona, Spain. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


Tuesday, June 15, 2010

McKesson Shows Bringing Testing Tools on the Road Improves Speed to Market and Customer Satisfaction

Transcript of a BriefingsDirect podcast from the HP Software Universe 2010 Conference in Washington, DC on field-testing software installations using HP Performance Center products.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series, coming to you from the HP Software Universe 2010 Conference in Washington, D.C. We're here the week of June 14, 2010, to explore some major enterprise software and solutions trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout this series of HP sponsored Software Universe Live discussions.

Our customer case-study today focuses on McKesson Corp., a provider of certified healthcare information technology, including electronic health records, medical billing, and claims management software. McKesson is a user of HP’s project-based performance testing products used to make sure that applications perform in the field as intended throughout their lifecycle.

To learn more about McKesson’s innovative use of quality assurance software, please join me in welcoming Todd Eaton, Director of Application Lifecycle Management Tools in the CTO’s office at McKesson. Welcome to the show, Todd.

Todd Eaton: Thank you.

Gardner: Todd, tell me a little bit about what's going on in the market that is making the performance-based testing, particularly onsite, such an important issue for you.

Eaton: Well, looking at McKesson’s businesses, one of the things that we do is provide software for sale to various healthcare providers. With the current federal regulations that are coming out and some of the newer initiatives planned by the federal government, these providers are looking for tools to help them deliver better healthcare throughout their enterprises.

With that in mind, they're looking to add functionality, they're looking to add systems, and they look to McKesson, as the leader in healthcare, to provide those solutions for them. With that in mind, our group works with the various R&D organizations within McKesson, to help them develop software for the needs of those customers.

Gardner: And what is it about performance-based testing that is so important now? We've certainly had lots of opportunity to trial things in labs and create testbeds. What is it about the real-world delivery that's important?

Eaton: It's one thing to test within McKesson. It's another thing when you test out at the customer site, and that's a main driver of this new innovation that we’re partnering with HP on.

When we build an application and sell that to our customers, they can take that application, bring it into their own ecosystem, into their own data center and install it onto their own hardware.

Controlled testing

The testing that we do in our labs is a little more controlled. We have access to HP and other vendors with their state-of-the-art equipment. We come up with our own set of standards, but when the applications go out to the site and get put into those hospitals, we want to ensure that they run at the same speed and with the same performance at the customer's site that we experience in our controlled environment. So, being able to test on their equipment is very important for us.

Gardner: And it's I suppose difficult for you to anticipate exactly what you're going to encounter, until you're actually in that data center?

Eaton: Exactly. Just knowing how many different healthcare providers there are out there, you could imagine all the different hardware platforms, different infrastructures, and the needs or infrastructure items that they may have in their data centers.

Gardner: This isn’t just a function of getting set up, but there's a whole life-cycle of updates, patches, improvements, and increased functionality across the application set. Is this something that you can do over a period of time?

Eaton: Yes, and another very important thing is using their data. The hospitals themselves will have copies of their production data sets that they keep control of. There are strict regulations; that kind of data cannot leave their premises. Being able to test using the large volume of data that they have onsite is crucial to testing our applications.
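[Editor's note: a minimal Python sketch of an on-site load test in the spirit of what Todd describes, reading test inputs from a local file that stands in for customer data that never leaves the premises, firing concurrent requests, and reporting latency figures. The endpoint and file name are hypothetical, and this is not HP Performance Center or LoadRunner.]

```python
# Minimal on-site load-test sketch; inputs stay on the customer's premises.
import csv
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://onsite-app.hospital.example/patient/"   # hypothetical endpoint

def timed_request(patient_id: str) -> float:
    start = time.monotonic()
    with urllib.request.urlopen(TARGET + patient_id, timeout=30):
        pass
    return time.monotonic() - start

def run_load_test(data_file: str, concurrency: int = 20) -> None:
    with open(data_file) as f:                      # customer data read locally, never copied out
        ids = [row["patient_id"] for row in csv.DictReader(f)]
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_request, ids))
    p95 = latencies[int(len(latencies) * 0.95) - 1]
    print(f"{len(latencies)} requests, median {statistics.median(latencies):.2f}s, "
          f"95th percentile {p95:.2f}s")

if __name__ == "__main__":
    run_load_test("onsite_patient_ids.csv")         # hypothetical on-site data file
```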

Gardner: Todd, tell me the story behind gaining this capability of that performance-based testing onsite -- how did you approach it, how long has it been in the making, and maybe a little bit about what you’re encountering?

Eaton: When we started out, we had some discussion with some of the R&D groups internally about our performance testing. My group actually provides a performance-testing service. We go out to the various groups, and we’re doing the testing.

We always look to find out what we can do better. We're always doing lessons learned and things like that and talking with these various groups. We found that, even though we did a very good job of performance testing internally, we were still finding defects and performance issues out at the site, when we brought that software out and installed it in the customer’s data center.

After further investigation, it became apparent to us that we weren’t able to replicate all those different environments in our data center. It’s just too big of a task.

The next logical thing to do was to take the testing capabilities that we had and bring them out on the road. We have these different services teams that go out to install software. We could go along with them, bring the powerful tools that we use with HP into those data centers, do the exact same testing that we do internally, and make sure that our applications were running as expected in their environments.

Gardner: Getting it right the first time is always one of the most important things for any business activity. Any kind of failure along the way is always going to cost more and perhaps even jeopardize the relationship with the customer.

Speed to market

Eaton: Yeah, it jeopardizes the relationship with the customer, but one of the things that we also drive is speed to market. We want to make sure that our solutions get out there as fast as possible, so that we can help those providers and those healthcare entities in giving the best patient care that they can.

Gardner: What was the biggest hurdle in being able to, as you say, bring the testing capability out to the field. What were some of the hang-ups in order to accomplish that?

Eaton: Well, the tool that we use primarily within McKesson is Performance Center, and Performance Center is an enterprise-based application. It’s usually kept where we have multiple controllers, and we have multiple groups using those, but it resides within our network.

So, the biggest hurdle was how to take that powerful tool and bring it out to these sites. We went back to our HP rep and said, "Here’s our challenge. This is what we’ve got. We don’t really see anything where you have an offering in that space. What can you do for us?"

Gardner: How far and wide have you been able to accomplish this? Are you doing it in terms of numbers of facilities, in what kind of organizations?

Eaton: Right now, we have it across the board in multiple applications. McKesson develops numerous applications in the healthcare space, and we've used this approach across the board. Currently, we have two engagements going on simultaneously with two different hospitals, testing two different groups of applications, and even the applications themselves differ.

I've got one site that's using it for 26 different applications and another that's using it for five. We've got two teams going out there, one from my group and one from one of the internal R&D groups, assisting the customer and testing the applications on their equipment.

Gardner: From these experiences so far, are there metrics of success, paybacks, not only for you and McKesson, but also for the providers that you service?

Eaton: The first couple of times we did this, we found that we were able to reduce the performance defects dramatically. We're talking something like 40-50 percent right off the bat. Some of the timing that we had experienced internally seemed to be fine, well within SLAs. But as soon as we got out to a site and onto different hardware configurations, it took some application tuning to get response times down. We were seeing 90 percent improvements with the help of continual testing and performance tweaks.

Items like that are just so powerful when you bring them out to the various customers and can say, "If you engage us and we do this testing for you, we can make sure that those applications will run the way that you want them to."

Gardner: How about for your development efficiency? Are you learning some lessons on the road that you wouldn't have had before, which you can now bring into the next rev? Is there a feedback loop of sorts?

Powerful feedback

Eaton: Yes. It's a pretty powerful one back to our R&D groups because, getting back to that data scenario, the volume and types of data that the customers have can be unexpected. Sometimes the way customers use systems, while it works perfectly fine, is not one of the use cases normally anticipated in some applications, and you get different results.

So, finding them out in the field and then being able to bring those back to our R&D groups and say, "This is what we’re seeing out in the field and this is how people are using it," gives them a better insight and makes them able to modify their code to fit those use cases better.

Gardner: Todd, is there any advice that you would give to those considering doing this, that is to say, taking their performance testing out on the road, closer to the actual site where these applications are going to reside?

Eaton: The main one is to work with your HP rep on what they have available for this. We took a product that everybody is familiar with, LoadRunner, and tweaked it so it became portable. The HP reps know a lot more about how they packaged that up and what’s best for different customers based on their needs. Working with a rep would be a big help in trying to roll this out to various groups.

Gardner: Okay, great. We’ve been learning about how McKesson is bringing performance-based testing products out to their customers’ locations and gaining a feedback capability as well as reducing time to market and making the quality of those applications near 100 percent right from the start.

I want to thank our guest. We’ve been joined by Todd Eaton, Director of Application Lifecycle Management Tools in the CTO’s office at McKesson. Thank you so much, Todd.

Eaton: You’re welcome. Nice talking to you.

Gardner: And, thanks to our audience for joining us for this special BriefingsDirect podcast, coming to you from the HP Software Universe 2010 Conference in Washington, DC.

Look for other podcasts from this HP event on the hp.com website under HP Software Universe Live podcast, as well as through the BriefingsDirect Network.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this series of HP-sponsored Software Universe Live Discussions. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.


Transcript of a BriefingsDirect podcast from the HP Software Universe 2010 Conference in Washington, DC on field-testing software installations using HP Performance Center products. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.
