Wednesday, December 01, 2010

HP Software GM Jonathan Rende on How ALM Enables IT to Modernize Businesses Faster

Transcript of a sponsored BriefingsDirect podcast, part of a series on application lifecycle management and HP ALM 11 from the HP Software Universe 2010 conference in Barcelona.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series, coming to you from the HP Software Universe 2010 Conference in Barcelona.

We're here the week of November 29, 2010 to explore some major enterprise software and solutions, trends and innovations, making news across HP’s ecosystem of customers, partners, and developers. [See more on HP's new ALM 11 offerings.]

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host throughout this series of Software Universe Live discussions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

To learn more about HP’s big application lifecycle management (ALM) news, the release of ALM 11, and its impact on customers, please join me now in welcoming Jonathan Rende, Vice President and General Manager for Applications Business at HP Software. Welcome, Jonathan.

Jonathan Rende: Hey Dana. How are you doing?

Gardner: I'm doing well. I don’t think it’s an exaggeration to say that applications are more important than ever, and they're probably going to become even more important. What’s more, we're looking at a significant new wave of applications refresh. So it strikes me that we're at a unique time, almost an inflection point in the history of software. Am I overstating the case?

Rende: No, not at all, Dana. Over the last 25 years that I've been in the business, I've seen two or three such waves happen. Every seven to 10 years, the right combination of process and technology changes comes along, and it becomes economically the right thing to do for an IT organization to take a fresh look at their application portfolio.

What's different now than in the previous couple of cycles is that, as you said, there is no lack of business applications out there. With those kinds of impacts, requirements, and responsibilities on the business, the agility and innovation of the business is now synonymous with the agility and innovation of the applications themselves.

Gardner: It seems like we're also at a point where we need to speed up the process. The legacy, the traditional means of application development, the sequential process, perhaps even the siloed organizational approach -- are all conspiring to hold us back. What needs to happen to break that logjam?

Rende: It’s not really the case that the people building, provisioning, testing, and defining the applications are lacking or don’t know what they're doing. It’s mostly that the practices and processes they're engaged in are antiquated.

What I mean by that is that today, acquiring or delivering applications in a much more agile manner requires a ton more collaboration and transparency between the teams. Most processes and systems supporting those processes just aren’t set up to do that. We're asking people to do things that they don’t have the tools or wherewithal to complete.

Gardner: The more I hear about ALM 11, it seems to me that not only are you trying to bring together the disparate parts of the application process, you're also extending it. An analogy might be an umbilical cord or cords into other parts of the business, so that they aren’t isolated. Does that hold true? Are we looking at both unification and an extension into the large organization?

Lifecycle roles

Rende: Exactly. Not only are we bringing together -- through collaboration, transparency, linking, and traceability -- the core app lifecycle roles of business analysts, quality, performance, and security professionals, and developers, but we're extending that upstream to the program management office and project managers. We're extending it upstream to architects. Those are very important constituents upstream who are establishing the standards and the stacks and the technologies that will be used across the organization.

Likewise, downstream, we're extending this to the areas of service management and the service managers who sit on help desks and need to connect. Their lifeblood is the connection with defects. Similarly, people in operations who monitor applications today need to be linked into all the information coming upstream, along with those dealing with change and the new releases happening all the time.

So, yes, it extends upstream much further to a whole group of people -- and also downstream to a whole group of audiences.

Gardner: What are the businesses looking for? What do they need? We've defined the problem -- and clearly there is a lot of room for improvement. What do enterprises and governments then do about it?

Rende: Number one, they need to be able to share important information. There's so much change that happens from the time an application project or program begins to the time that it gets delivered. There are a lot of changing requirements, new learnings from a development perspective, and problems that are found that need to be corrected.

All of that needs to be very flexible and iterative. You need those teams to be able to work together in very short cycles, so that they can effectively deliver, not only on time, but many times even more quickly than they did in the past. That’s what’s needed in an organization.

On top of that, there isn’t a single IT organization in the world that doesn’t have a mixed environment, from a technology perspective. Most organizations don’t choose just Visual Studio to write their applications in -- or just Java. Many have a combination of either of those, or both of those, along with packaged applications off-the-shelf.

So, one of the big requirements is heterogeneity for those applications, and the management of those applications from a lifecycle approach should be accommodating of any environment. That’s a big part of what we do.

Gardner: It sounds as if you need to be inclusive in terms of the technologies that you relate to, but at the same time -- based on what we spoke about a minute ago -- you need to also be more of a single system of record, pulling it all together. How can we conceptualize this, being agnostic, but also being unified?

Rende: You have to be able to maintain and manage all of the information in one place, so that it can be linked, and so you can draw the right, important information in understanding how one activity affects another.

But that process, that information that you link, has to be independent of specific technology stacks. We believe that, over the past few years, not only have we created that in our quality solutions, in our performance solutions, but now we have added to that with our ALM 11 release -- the same concepts but in a much broader sense.

Integrating to other environments

By bringing together those core roles that I mentioned before, we've been able to do that from a requirements perspective, independent of [deployment] stack -- and from a development environment. We integrate to other environments, whether it's a Microsoft platform, a Java platform, or CollabNet. The use cases that we've supported work in all of those environments very tightly -- between requirements and tests -- and pull that information all together in one place.

Gardner: Jonathan, this really strikes me as a maturity inflection point for application lifecycle development to deployment, and it reminds me a little bit of what happened in data several years ago. The emphasis became more on the management of the metadata about the data, letting the data reside where it may.

Is there an analogy or similarity between what you are talking about in terms of ALM metadata, if you will, over the applications process, while at the same time allowing the process to exist in a variety of different technologies, or even vendor supported platforms?

Rende: It's very similar, if you think about different activities and the work that's done in those different activities. A business analyst or a subject matter expert who is generating requirements captures all that information from what he hears of what's needed -- the business processes that need to be built, the application, and the way it should work. He captures all of that information, and it needs to reside in one single place. However, if I'm a developer, I need to work off of a list of a set of tasks that build to those requirements.

It’s important that I have a link to that. It’s important that my priorities that I put in place then map to the business needs of those requirements. At the same time, if I'm in quality-, performance-, and security-assurance, I also need to understand the priority of those.

So, while those requirements will fit in one place, they'll change and they'll evolve. I need to be able to understand how that impacts my test plans that I am building.

Maybe the last example is a developer who is building toward all these priorities, toward what he is given as requirements. Those, in turn, need to link, as changes, to everything that's happening in the quality, performance, and security areas. Although the information is distinct, it has to be related, and that can only be done if you store it in one place.
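To make the idea of a single, linked store concrete, here is a minimal, hypothetical sketch in Python -- the record types and names are assumptions for illustration, not HP's actual ALM schema -- showing requirements, tests, and defects held together and traced from one to the other.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Defect:
        defect_id: str
        summary: str
        status: str = "open"

    @dataclass
    class Test:
        test_id: str
        name: str
        defect_ids: List[str] = field(default_factory=list)   # defects this test uncovered

    @dataclass
    class Requirement:
        req_id: str
        description: str
        priority: int                                          # 1 = highest business priority
        test_ids: List[str] = field(default_factory=list)      # tests that cover it

    # One shared repository instead of per-team silos (made-up sample data).
    requirements = {"R1": Requirement("R1", "Check customer credit", 1, ["T1"])}
    tests = {"T1": Test("T1", "Credit check end-to-end", ["D1"])}
    defects = {"D1": Defect("D1", "Timeout calling credit service")}

    def impact_of(req_id):
        """Trace a requirement to its tests and any open defects behind them."""
        linked_tests = [tests[t] for t in requirements[req_id].test_ids]
        open_defects = [defects[d] for t in linked_tests for d in t.defect_ids
                        if defects[d].status == "open"]
        return linked_tests, open_defects

    print(impact_of("R1"))   # one covering test, one open defect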

Gardner: So we're unifying, managing, and governing -- but we're still able to adapt and be flexible given the different environments -- the different products -- in a variety of different types of organizations, as well as across departments within those organizations -- a great deal of heterogeneity.

So, if you do this right, what sort of paybacks do you get? I'm hearing some pretty interesting things about delivery and defects and even managerial or operational benefits?

Rende: Huge benefits. If you look at some of the statistics that are thrown around from third parties that do this research on an annual basis, almost two-thirds of application projects today still fail. Then, you look at what benefits can be put in place, if you put together the right kind of an approach, system, and automation that supports that approach.

With ALM 11, we're already seeing returns where organizations are able to cut the delivery time, the time from the inception of the project to the actual release of that project, by 50 percent.

Cutting cost of delivery

We're seeing organizations similarly cut the cost of releasing an application, that whole delivery process -- cut the cost of delivery in half. And that's not to mention side benefits that have a far more reaching impact later on: identifying and eliminating, at the point of creation, up to 80 percent of the defects that would typically be found in production.

As a lot of folks who are close to this will know, finding a defect in production can be up to 500 times more expensive to fix than if you address it when it’s created during the development and the test process. Some really huge benefits and metrics are already coming from our customers who are using ALM 11.
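As a purely back-of-the-envelope illustration of why that matters -- the defect counts and per-defect cost below are assumptions; only the "up to 500 times" multiplier comes from the point above -- the arithmetic looks roughly like this:

    defects_introduced = 100        # hypothetical defects created during a release
    cost_at_creation = 100          # assumed cost to fix a defect when it is created ($)
    production_multiplier = 500     # "up to 500 times more expensive" once in production

    # Everything escaping to production versus 80 percent caught at creation.
    all_escape = defects_introduced * cost_at_creation * production_multiplier
    caught_early = int(defects_introduced * 0.8)
    mostly_caught = (caught_early * cost_at_creation +
                     (defects_introduced - caught_early) * cost_at_creation * production_multiplier)

    print(all_escape, mostly_caught)   # 5000000 vs. 1008000 in this toy example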

Gardner: That, of course, points up that those organizations that do this well, that make this a core competency, should have a significant competitive advantage.

Rende: A big advantage. Again, if you go back to the very beginning topic that we discussed, there isn’t a business, there isn’t a business activity, there isn’t a single action within corporate America that doesn’t rely on applications. Those applications -- the performance, the security, and the reliability of those systems -- are synonymous with that of the business itself.

If that’s the case, allowing organizations to deploy business critical processes in half the time, at half the cost, at a much higher level of quality, with a much reduced risk only reflects well on the business, and it’s a necessity, if you are going to be a leader in any industry.

Gardner: This cuts across the globe. This isn’t just for advanced economies or developing emerging economies. It’s pretty much across the board?

Rende: Across the board, in a couple of directions or vectors. One, from small organizations to large organizations: ALM 11 allows small project teams to take advantage of this and get the same benefits as large Fortune 10 enterprises that have hundreds of projects, which get linked together into a single release, with those projects being built in unison around the globe.

It really scales from the smallest to the largest organization, and from a single geography to multiple geographies, so they can collaborate, because, as we know, development can happen in many locations today. In the final equation, you have to make sure that the [applications] you're releasing are reflective of the organization, no matter where those activities take place.

Gardner: And as far as that goes for all types of organizations, we have enterprises, small and medium size businesses, we are also talking about governments, and we're also talking about now the variety of different hosting organizations, whether it’s telecom, cloud, mobile, or what have you.

Rende: Exactly. There are so many different options for how people can deploy or choose to operate and run an application -- and those options are also available in the creation of those applications themselves. ALM 11 runs on premises, or also through our software as a service (SaaS), so it allows that flexibility.

Gardner: We've heard a lot about how important software is to HP as a larger organization across the company and its strategy. Is it fair to say that ALM 11 is a strategic initiative for HP? How does it fit into the bigger HP direction?

Deep software DNA

Rende: As you said, software and our software business are increasingly important. If you look at the leadership within the company today, our new CEO has a very deep software DNA. Bill Veghte, who came in from Microsoft, has 20 plus years. The rest of the leadership team here also has 20 plus years in enterprise software.

Aside from the business metrics that are so beneficial in software versus other businesses, there is just a real focus on making enterprise software one of the premier businesses within all of HP. You're starting to see that with investments and acquisitions, but also, more importantly, with the investment in organic development and what's coming out.

So, it’s clearly top of list and top of mind when it comes to HP. Our new CEO, Leo Apotheker, has been very clear on that since he came in.

Gardner: Super. We've heard a lot about ALM 11 here in Barcelona, and I expect we're going to be hearing more about how this relates to that larger software equation. I'm looking forward to that.

I want to thank you, Jonathan Rende, Vice President and General Manager for Applications Business in HP's Software & Solutions organization. I hope you're having a good show. I appreciate your time.

Rende: Thanks very much, Dana. Hopefully, everybody can get out there and learn a little bit more about ALM and how it fits into some of the larger initiatives, like applications transformation, that are really changing the entire industry. So good luck, everybody.

Gardner: Great. I want to thank also our listeners for joining the special BriefingsDirect podcast, coming to you from the HP Software Universe 2010 Conference in Barcelona.

Look for other podcasts from this event on the hp.com website, as well as via the BriefingsDirect network.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this series of Software Universe Live discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a sponsored BriefingsDirect podcast on application lifecycle management and HP ALM 11 from the HP Software Universe 2010 conference in Barcelona, Spain. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


Tuesday, November 30, 2010

HP's New ALM 11 Guides IT Through Shifting Landscape of Application Development and Service Requirements

Transcript of a sponsored BriefingsDirect podcast on application lifecycle management and HP ALM 11 from the HP Software Universe 2010 conference in Barcelona.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series, coming to you from the HP Software Universe 2010 Conference in Barcelona.

We're here the week of November 29, 2010 to explore some major enterprise software and solutions, trends and innovations, making news across HP’s ecosystem of customers, partners, and developers. [See more on HP's new ALM 11 offerings.]

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout this series of HP-sponsored Software Universe Live discussions. To learn more about HP's application lifecycle management (ALM) news and its customer impact from the conference here, please join me now in welcoming Mark Sarbiewski, Vice President of Product Marketing for HP Applications. Welcome, Mark.

Mark Sarbiewski: Thank you, Dana. Good to be with you.

Gardner: Good to be with you. We've seen, over the past several years, sort of a shifting landscape of how applications are delivered and deployed. It seems as if the traditional way of doing this just isn’t working, and there seems to be complexity, slowness, and quality issues. First, why is that, and then second, what is HP doing about it?

Sarbiewski: It’s a question that we talk to our customers about all the time. It boils down to the same old changes that we see sort of every 10 years. A new technology comes into play with all its great opportunity and problems, and we revisit how we do this. In the last several years, it’s been about how do I get a global team going, focused on potentially a brand-new process and approach.

You’ve got changes in how you are organized. You’ve got changes in the approach that people are taking. And, you’ve got brand-new technology in the mix and new ways of actually constructing applications. All of these hold great promise, but great challenges too. That's clashing with the legacy approach that people in the past took in building software.

Gardner: What is HP going to do about this? We've got kind of an inflection point, a generational shift. Now, what's the response?

Sarbiewski: The short answer is that that legacy approach is not going to be the right path for delivering modern applications. As far as the core problems that I just mentioned, we’ve been hard at work for a couple of years now, recasting and re-inventing our portfolio to match that modern approach to software, going through them one-by-one.

What are the new technologies that everybody is employing? We've got rich Internet technologies and Web 2.0 approaches, and our technology is there. For composite applications, we've built a variety of capabilities that help people understand how to make the performance right with those technologies, and keep the security and the quality high, while keeping the speed up.

Moving to Agile

So it’s everything from how do we do performance testing in that environment to testing things that don’t have interfaces, and how do we understand the impact of change on the systems like that. We’ve built capabilities that help people move to Agile as a process approach, things like fundamentally changing how they can do exploratory testing, and how they can bring in automation much sooner in the process of performance, quality, and security.

Lastly, we’ve been very focused on creating a single, unified system that scales to tens of thousands of users. And, it’s a web-based system, so that wherever the team members are located, even if they don’t work for you, they can become a harmonious part of the overall team, 24-hour cycles around the globe. It speeds everything up, but it also keeps everyone on the same page. It’s that kind of anytime, anywhere access that’s just required in this modern approach to software.

Gardner: As I'm hearing the news here at the show being rolled out, it occurs to me that we're bringing together aspects of this whole lifecycle that for decades have been very distinct and different, usually from different vendors, and with wholly different platforms beneath them. So, why is it important that ALM 11 pretty much has an integrated system, with all the stakeholders, all the team members, focused in the same direction or at least integrated at some level? [See more on HP's new ALM 11 offerings.]

Sarbiewski: When I talk to customers, I ask them how they're supporting software. If we talk about software delivery, it's fundamentally a team sport. There isn't a single stakeholder that does it all. They all have to play and do their part.

When they tell me they've got requirements management in Word, Excel, or maybe even a requirements tool, and they have a bug database for this, test management for that, and this tool here, on the surface it looks like they've fitted everybody with a tool and it must be good. Right?

The problem is that the work is not isolated. You might be helping each individual stakeholder out a little bit, but you're not helping the team. The team's work relates to each other. When requirements get created or changed, there's a ripple effect. What tests have to be modified or newly created? What code then has to be modified? When that code gets checked in, what tests have to be run? It's that ripple effect of the work that we talk about as workflow automation. It's also the insight to know exactly where you are.

When the real question -- how far am I on this project, what quality level am I at, am I ready to release -- needs to be answered in the context of everyone's work, I have to understand how many requirements are tested, and whether my highest-priority stuff is working, against what code.

So, you see the team aspects of it. There is so much latency in a traditional approach. Even if each player has their own tool, it's about getting that latency out, along with the finger-pointing and the miscommunication that also result. We take all that out of the process and, lo and behold, we see our customers cutting their delivery times in half, dropping their defect rates by 80 percent or more, and actually doing this more cheaply with fewer people.
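One small, hypothetical example of the kind of roll-up question Sarbiewski is describing -- "what share of my highest-priority requirements have a passing test?" -- sketched in Python with made-up data, not ALM's actual reporting interface:

    # Illustrative release-readiness check across linked requirements and tests.
    requirements = [
        {"id": "R1", "priority": 1, "tests": [{"id": "T1", "status": "passed"}]},
        {"id": "R2", "priority": 1, "tests": []},                          # not yet covered
        {"id": "R3", "priority": 3, "tests": [{"id": "T2", "status": "failed"}]},
    ]

    high_priority = [r for r in requirements if r["priority"] == 1]
    covered = [r for r in high_priority
               if any(t["status"] == "passed" for t in r["tests"])]

    print(f"High-priority requirements with a passing test: {len(covered)}/{len(high_priority)}")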

Gardner: So clearly HP ALM 11 is not going to sacrifice the overall process for individual tool choices and benefits, but let's get into the actual parts here. We have elements that are updated around requirements, development, and quality. Tell me a little bit about the constituent parts of this overall umbrella.

Sarbiewski: In requirements management, one of the big new things that we've done is allow the import of business process models (BPMs) into the system. Now, we've got the whole business process flow pulled right into the system. It can be pulled right from systems like ARIS, or from anything that outputs standard business process model notation (BPMN).

Actual business processes

Now, everyone who accesses ALM 11 can see the actual business process. We can start articulating that this is the highest priority flow. This step of the business process, maybe it's check credit or something like that, is an external thing but it's super-important. So, we’ve got to make sure we really test the heck out of that thing.

Everyone is aligned around what we're doing, and all the requirements can be articulated in that same priority. The beautiful thing now about having all this in one place is that the work connects to everything else. It connects to the tests I set up, the tests I run, the defects I find, and I can even link it back to the code, because we work with the major development tools like Visual Studio, Eclipse, and CollabNet.
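As a rough sketch of what pulling a process model into requirements amounts to, here is an illustrative Python fragment that reads a BPMN 2.0 export and lists its task names as candidate requirements to prioritize; the file name is assumed, and this is not how ALM's importer is implemented.

    import xml.etree.ElementTree as ET

    BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

    # Assumed file: a process model exported from a modeling tool.
    tree = ET.parse("order_to_cash.bpmn")

    candidate_requirements = [
        task.get("name")
        for task in tree.getroot().iter(f"{{{BPMN_NS}}}task")
        if task.get("name")
    ]

    for i, step in enumerate(candidate_requirements, start=1):
        print(f"REQ-{i:03d}: verify process step '{step}'")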

Gardner: So what are the parts we have? We've got this really interesting requirements manager that's integrated with BPM, and I want to get back to that in a moment. The second part is a Performance Center update, and then we've got a new LoadRunner, right?

Sarbiewski: That’s exactly right. You mentioned Development Manager a minute ago. It's hugely important that we connect into the world of developers. They're already comfortable with their tools. We just want to integrate with that work, and that’s really what we’ve done. They become part of the workflow process. They become part of the traceability we have.

You mentioned performance testing. We have the industry-leading solution here and major market share there. What we hear from our customers is that the coolest new technology they want to work with is also the most problematic from a performance standpoint.

We went back to the drawing board and reinvented how well we can understand these great new Web 2.0 technologies, in particular Ajax, which is really pervasive out there. We now can script from within the browser itself. The big breakthrough there is that if the browser can understand it, we can understand it. Before, we were sort of on the outside looking in, trying to figure out what a slider bar really did, and what it meant when a slider bar was moved.

Now, we can generate a very readable script. I challenge anybody. Even a businessperson can understand, when they're clicking through an application, what gets created for the performance testing script.

We parameterize it. We can script logic there. We can suggest alternate steps. The bottom line is that the coolest new Web 2.0 front ends can now be very easily performance tested. So we don't end up in that situation where it's great, you did a beautiful, rich job, and it's such a compelling interface, but it only works when 10 people are hitting the application. We've got to fix that problem.

It speeds everything up, because it's so readable and quick. And it just works seamlessly. We've tested against the top 40 websites, which are out there using all this great new technology, and it's working flawlessly.
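The script format itself isn't reproduced in the transcript, but the underlying idea -- a short list of readable, named steps plus a parameter table, replayed for each virtual user -- can be sketched generically in Python; everything below is illustrative and is not the product's actual scripting interface.

    # A "readable" performance script: named user steps plus parameter data.
    script = [
        ("Open home page",     "https://example.com/"),
        ("Search for product", "search: {product}"),
        ("Add to cart",        "add: {product}"),
        ("Check out",          "pay with card {card}"),
    ]

    parameters = [
        {"product": "laptop", "card": "4111-xxxx"},
        {"product": "camera", "card": "5500-xxxx"},
    ]

    def run_virtual_user(user_id, data):
        for step_name, template in script:
            action = template.format(**data)
            # A real load tool would drive the browser here and record response times.
            print(f"vuser {user_id}: {step_name} -> {action}")

    for uid, row in enumerate(parameters, start=1):
        run_virtual_user(uid, row)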

Gardner: So, we've had some significant improvements and upgrades. We’ve got better integration. We're looking at this at the process level and we brought in BPM. But, I also heard from the main stage presentation here in Barcelona about a couple of new things. We have Unified Functional Testing 11 and HP Sprinter. Could you help me understand a bit more about those?

Lots of pieces

Sarbiewski: Absolutely. If you think about a composite application, it's really made up of lots of pieces. There are application services or components. The idea is that if I've got something that works really well, and I can reuse it and combine it with maybe a few other things or a couple of new pieces to get new capability, I've saved money. I've moved faster, I'm delivering innovation to the business in a much better, quicker way, and it should be rock-solid, because I can trust these components.

The challenge is, I'm now making up software out of lots of bits and pieces. I need to test every individual aspect of it. I need to test how they communicate together, and I need to do end-to-end testing.

If I try to create composite apps and reuse all this technology, but it takes me ten times longer to test, I haven't achieved my ultimate goal, which was cheaper, faster, and still high quality. So Unified Functional Testing is addressing that very challenge.

We've got Service Test, which is actually an incredible visual canvas for testing things that don't have an interface. One of the big challenges with something that doesn't have an interface is that I can't test it manually, because there are no buttons to push. It's all kind of under the covers. But we have a wonderful, easy, brand-new, reinvented tool here called Service Test that takes care of all that.

That's connected and integrated with our functional testing product, which allows you to test everything end-to-end at the GUI level. The beautiful thing about our approach is that you get to do that end-to-end, GUI-level type of testing and the non-GUI stuff all from one solution, and you report out all the testing that you get done.
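To give a feel for what testing something with no user interface boils down to -- calling the service directly and asserting on the business result rather than on a screen -- here is a generic, hypothetical Python example against an assumed REST endpoint; it is a sketch of the concept, not Service Test itself.

    import json
    import urllib.request

    def test_credit_check_service():
        # Assumed endpoint for an interface-less "check credit" service.
        url = "https://example.com/api/credit-check?customer=12345"
        with urllib.request.urlopen(url) as resp:
            assert resp.status == 200
            body = json.loads(resp.read())
        # Assert on the business rule, not on any button or screen.
        assert body["decision"] in ("approved", "declined")

    if __name__ == "__main__":
        test_credit_check_service()
        print("service-level check passed")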

So again, bring in a lot of automation to speed it up, keep the quality high and the time down low, and you get to see it all come together in one place.

Gardner: Right. I was going to say we’ve heard a lot about Instant-On here as well. I am assuming that Sprinter might have something to offer there.

Sarbiewski: Absolutely. Sprinter is not even a reinvention. It's brand-new thinking about how we can do manual testing in an Agile world. Think of that Instant-On world. It's such a big change when people move to an Agile delivery approach. Everyone on the team now plays kind of a derivative role of what they used to do. Developers take a part of testing, and quality folks have to jump in super-early. It's just a huge change.

What Sprinter brings is a toolset for that tester, for that person who is jumping in, getting right after the code to give immediate feedback. It's a toolset that allows that tester to drop data automatically into the screens the test is supposed to go through, instead of typing it in. I don't have to type it anymore. I can just use an Excel spreadsheet and start ripping through screens and tests really fast, because I'm not testing whether it can take the input. I'm testing whether it processes it right.

A bunch of cool tools

And when I come across an error, there's a tool that allows me to capture those screens, annotate them, and send that back to the developer. What's our goal when we find a defect? The goal is to explain exactly what was done to create the defect and exactly where it is. There are a whole bunch of cool tools around that.
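Sprinter's own mechanics aren't shown in the transcript, but the data-injection idea -- pull rows from a spreadsheet, push them into the screens instead of typing, and capture the screen when something goes wrong -- can be sketched with generic, off-the-shelf libraries (openpyxl and Selenium here, against a hypothetical form); this illustrates the concept and is not Sprinter's API.

    from openpyxl import load_workbook
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Rows of test data from a spreadsheet instead of hand-typed input.
    rows = load_workbook("test_data.xlsx")["Sheet1"].iter_rows(min_row=2, values_only=True)

    driver = webdriver.Chrome()
    for i, (customer, amount) in enumerate(rows, start=1):
        driver.get("https://example.com/new-order")              # assumed application form
        driver.find_element(By.NAME, "customer").send_keys(str(customer))
        driver.find_element(By.NAME, "amount").send_keys(str(amount))
        driver.find_element(By.NAME, "submit").click()
        if "error" in driver.page_source.lower():
            # Capture the screen so the defect report shows exactly what happened.
            driver.save_screenshot(f"defect_row_{i}.png")
    driver.quit()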

The last point I'd make about this is called Mirror Testing. It's super-important. It's imperative that things like websites actually work across a variety of browsers, operating environments, and operating systems, but testing all those combinations is very painful.

Mirror Testing allows the system to work in the background. While someone is testing on, say, XP and Internet Explorer, five other systems -- different combinations -- will be driven through the exact same test. I'm sitting in front of it, doing my testing, and in the background, Safari is being tested, or Firefox.

If there is an error on that system, I see it, I mark it, and I send it right away, essentially turning one tester into six. It's really great breakthrough thinking on the part of R&D here and a huge productivity bump.
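The mechanics of Mirror Testing aren't spelled out here, but the concept -- every step performed once is replayed against several other browser and OS combinations in the background -- can be sketched as a simple loop; this is illustrative Python with a placeholder driver, not the actual feature.

    recorded_steps = ["open login page", "enter credentials", "open dashboard"]

    mirrors = [
        ("Windows XP", "Internet Explorer 8"),
        ("Windows 7",  "Firefox 3.6"),
        ("Mac OS X",   "Safari 5"),
    ]

    def drive_browser(os_name, browser, step):
        # Placeholder for a real cross-browser driver; always "passes" here.
        return True

    def replay(steps, os_name, browser):
        return [step for step in steps if not drive_browser(os_name, browser, step)]

    for os_name, browser in mirrors:
        failures = replay(recorded_steps, os_name, browser)
        status = "failed at " + ", ".join(failures) if failures else "passed"
        print(f"{browser} on {os_name}: {status}")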

Gardner: Now, we have some major shifts impacting developers and basically the entire lifecycle around apps. There’s more emphasis on mobile suddenly, and there’s a need to integrate with the business process side of things. There's the need to do things faster. We're also seeing an emphasis on mixed sourcing or hybrid computing. Then, just to make things more interesting, there's an emphasis on bringing these things out in a way that helps the business faster, and in a more declarative way. That is to say, it hits the bottom-line in a good way.

How is it that this overall set of capabilities you've described fits into these trends? It seems to me that you're trying to strike the balance between being inclusive and integrated, but also agnostic and open to all of these different aspects. Tell me how that works.

Sarbiewski: What we hear from our customers is that they really do want their lives to be simplified, and the conclusion that they have come to in many cases is Post-It Notes, emails, and Word docs. It seems simpler at first and then it quickly falls apart at scale. Conversely, if you have tools that you can only work with in one particular environment, and most enterprises have a lot of those, you end up with a complex mess.

Companies have said, "I have a set of development tools. I probably have some SAP, maybe some Oracle. I build in .NET, with Microsoft. I do some Eclipse stuff and I do Java. I've got those, but if you can work with those, and if you can help me get a common approach to requirements, to managing tests, functional performance, and security, to managing my overall project, and integrate with those tools, you've made my life easier."

When we talk about being environment agnostic, that’s what we mean. Our goal is to support better than anyone else in the market the variety of environments that enterprises have. The developers are happy where they are. We want them as part of the process, but we don’t want to yank them out of their environment to participate. So our goal again is to support those environments and connect into that world without disrupting the developer.

And, the other piece that you mentioned is just as important. Most customers aren't taking one uniform approach to software. They know they've got different types of projects. I've got some big infrastructure software projects that I'm not going to do all the time and not going to release every 30 days, and a waterfall or sequential approach is perfect for those.

Rock solid

I want to make sure it's rock solid, that I can afford to take that type of an approach, and that it's the right approach. For a whole host of other projects, I want to be much more agile. I want to do 60-day releases or 90-day releases, or even more, and it makes sense for those projects. What I don't want, they tell us, is every team inventing its own approach for Waterfall, Agile, or custom approaches. I want to be able to help the teams follow a best-practice approach.

As far as the workflow, they can customize it. They can have an Agile best practice, a Waterfall best practice, and even another one if they want. The system helps the team do the right thing and get a common language, a common approach, all that stuff. That's the kind of process-agnostic belief we have.

Gardner: Last, Mark, tell me how you get started. When are these going to be available, and are there any changes in licensing or pricing in terms of trying to make it simpler for people to acquire these?

Sarbiewski: They're available now. The great news is that today you can download all the solutions that we’ve talked about for trials. We have some online demos that you can check out as well. There are a lot of white papers and other things. You can literally pull the software 30 minutes from now and see what I'm talking about.

On the licensing side, we believe that the simplest approach is a concurrent license, which we have on most of the products that we’ve got here. For all the modules that we’ve been talking about, if you have a concurrent license to the system, you can get any of the modules. And, it’s a nice floating license. You don’t have to count up everybody in your shop and figure out exactly who is going to be using what module.

The concurrent license model is a very flexible, nice approach. It's one we've had in the past. We're carrying it forward, and we'll look to continue to simplify and make it easier for customers to understand all the great capabilities and how to license them simply, so that they can get their teams to the modules and the capability they need.

Gardner: Thanks to Mark Sarbiewski, Vice President of Marketing for HP Applications, for giving us the deep-dive on HP's Application Lifecycle Management news and its customer impact from the conference.

Sarbiewski: Thank you, Dana. I appreciate the time.

Gardner: And thanks to you for joining us for this special BriefingsDirect podcast, coming to you from the HP Software Universe 2010 Conference in Barcelona, Spain.

Look for other podcasts from this HP event on the HP.com website, as well as via the BriefingsDirect network.

I'm Dana Gardner, principal analyst at Interarbor Solutions, your host for this series of Software Universe Live discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a sponsored BriefingsDirect podcast on application lifecycle management and HP ALM 11 from the HP Software Universe 2010 conference in Barcelona, Spain. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


Friday, November 26, 2010

How to Automate Application Lifecycle Management: Conclusions From New HP Book on Gaining Improved Business Applications

Transcript of a sponsored BriefingsDirect podcast, the third in a series discussing a new book on ALM and its goal of helping businesses become change ready.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

For more information on Application Lifecycle Management and how to gain an advantage from application modernization, please click here.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Thanks for joining this sponsored podcast discussion that examines a new book on application lifecycle management (ALM) best practices, one that offers some new methods for overall business services delivery improvement. Complexity, silos of technology and culture, as well as the shifting landscape of application delivery options, have all conspired to reduce the effectiveness of traditional application approaches in large organizations.

In the book, called The Applications Handbook: A Guide to Mastering the Modern Application Lifecycle, the authors pursue the role and impact of automation and management over applications, as well as delving into the need to gain control over applications through a holistic lifecycle perspective.

In this podcast, the last in a series of three, we'll underscore the conclusions from the book and explain how organizations can begin now to change how they deliver and maintain applications in a fast-changing world.

In our first podcast, we focused on the role and impact of automation and management of applications, and emphasized the need to gain control over applications through a holistic lifecycle perspective.

The second discussion in our series looked at how an enterprise, Delta Air Lines, moved successfully to improve its applications’ quality, and gain the ability to deliver better business results from those applications.

Finally, here we'll discover how to assess how well you can develop applications as an essential lifecycle core competency and begin to chart a course toward improvement. That's just in time, because the topic of ALM will be a big one at next week's HP Software Universe conference in Barcelona.

But we're here now with the book’s authors to explore their conclusions. Please join me in welcoming Mark Sarbiewski, Vice President of Marketing for HP Applications, and Brad Hipps, Senior Manager of Solution Marketing for HP Applications. Welcome to you both.

Mark Sarbiewski: Thank you.

Brad Hipps: Thank you.

Gardner: We're now at the point where organizations recognize that they need to do something differently. They have a very complex application situation, and they certainly have a fast-changing set of business requirements. The stakes are very high.

How then do companies know where they are in the app spectrum? Obviously, there’s going to be variability from company to company. Yet how do you know as an individual organization where you stand in terms of application lifecycle competencies? Let’s start with you, Mark.

ALM maturity

Sarbiewski: Companies are truly interested to understand where they rank, what they do well, where their gaps are, and where they fall against their competition, their colleagues, or other folks in their industry, and even against best practice in other industries. So we built out a model for ALM maturity, and it’s in the book.

We wanted to take a slightly different approach to how we thought about maturity models. There are lots of them in the industry, not so much around ALM, but in sub-disciplines or in different areas. Our focus was the business outcomes that you see at different levels.

If you can understand the results that you are seeing, that ought to help you figure out where you are in terms of where you could be. What we've seen is a progression across the spectrum of companies, from those that are really just getting started, with fairly immature processes across the lifecycle of an application, all the way up to the very advanced.

One thing I would mention, before I go further, is that the life of an application is generally the same for all companies. There is a spark of an idea: "We need this. We need software to help us do something in the business."

We make an investment decision somehow. We may do this ad hoc. We may do it based on who screams the loudest. But, somehow a decision gets made. We build something somehow. We spec it, build it, release it, run it, poorly or not, and hopefully, although certainly not always, eventually we replace it, retire it, and so forth.

So, our idea around maturity and tying it to outcomes is the results that we see. For example, what’s our batting average for how many times we actually make the right kind of investment decisions? How many times do we execute against a good investment decision? How many times do we run it well and meet our SLAs in production and so on?

We see people just getting started, and they have relatively ad hoc, narrow point tools, with lots of manual work. It doesn't mean they're never successful, but results vary highly. They're very mixed. Some project teams are great, it all depends on the project team, and the next one may stink.

As you move up the curve, you start to see a maturity in the functional disciplines. We see them get better at requirements management. We see them get better at testing, designing software, or handing off, releasing into production. You see the functional competence begin to evolve. That has to happen first, before you can start to tie these functions together and begin to get cross-functional excellence.

There is a huge benefit in getting good at your functions. And, there is another big jump in return on investment (ROI) of getting better at having my functions and departments work well together. At the highest level, you start to be able to execute very complex programs, with lots of projects, across lots of functions every time. We talk about a level of portfolio excellence there.

So, it all comes back to the results. What kind of results am I seeing? If you look at the model in the book, it’s pretty easy to peg yourself as to where you are and the kinds of benefits you'd see from moving up that maturity curve.

Gardner: Brad Hipps, do you have anything to offer further on knowing where you are so that you can know where you need to go?

More of a scorecard

Hipps: As Mark has said, we configured this model, trying deliberately not to be ultra-prescriptive. There are many heavy-duty models that do exist, and people can dig into those to their heart’s content. This is as much a maturity scorecard as anything.

One of the examples that you might see, or one of the ways you might begin to gauge yourself, is something like defect leakage. Defect leakage refers to the number of defects that you discover live in the application that you could have caught earlier.

We have some figures that show that the average is in the neighborhood of 40 percent of application defects that leak into production and are discovered live. They could have been caught earlier. It may be a little higher than 40 percent, which is a fairly shocking number. Obviously, that's a rough average. So you've got to expect, if you're lower in maturity, that you may be seeing even more than that.

But on the high end, the world-class customers we've worked with see less than 5 percent of defects working their way into production. So right off the bat, you're talking about an 80 percent-plus drop in the number of defects that you're experiencing in a live environment, with all the attendant cost savings, brand improvement, and goodwill in the business that you would expect.
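The metric itself is simple arithmetic; a quick, hypothetical illustration with made-up counts:

    # Defect leakage = defects discovered in production / all defects discovered.
    found_before_release = 950
    found_in_production = 50           # hypothetical counts
    leakage = found_in_production / (found_before_release + found_in_production)
    print(f"Defect leakage: {leakage:.0%}")   # 5% -- the world-class level cited above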

That’s one example of the kind of thing that you can look at, tease out, and begin to get a sense of where might I sit maturity wise. From that, you can potentially take a cue as to where is it that I want to start, where is it that I want to make the biggest investment, as I look to make myself more mature?

Gardner: Brad, I suppose while it’s important to know who you are in order to chart where you are going to go, it would be nice to know how well you are doing along the way. Are there measurements of success here in your book that you can point to of how people can take score of how well they are progressing and then reinforce or move even further forward?

Hipps: I'll give a simple one. At least, I hope it's a simple one. I can't speak for every enterprise, but this is one that I have used in my own history, and it's no more complex than customer satisfaction.

In this case, your customers may be end users, who are harder sometimes to survey. But, more often than not, your customers are some business units, somebody within the business.

When I was running application teams, we were undertaking initiatives to improve ourselves, which is probably a nonstop undertaking within IT. Sometimes, you go through peaks and valleys, but that became one of my key checkpoints, as you might imagine. There are hosts of sophisticated KPIs we can design for ourselves, but one of the key ones was, "I want to know what the business thinks of us, and whether we are trending in the right direction."

Trumping the frustration

The reason that's a good one is that no amount of being a good guy, being nice to people, or being friendly in meetings is going to trump the frustration a business person feels if the application is not doing what they need it to do. Either it's got too many defects, it takes too long to enhance, or it's too painful to get anything done, etc. There are a host of things.

So, we designed a relatively simple customer survey. It was something we executed, probably biannually, and that became one of the ways we tracked how we were trending. Are we going in the right direction? There are endless, complex KPIs, but that’s a simple one I would pluck out as being a way of simply tracking, "Are we getting better or worse, or are we just sort of treading water?"

Gardner: And, Mark, when we look at how progress has been made, we need not only look at the end-user perceptions and results from surveys, but perhaps we also need to look at the development team, the ops team, and the actual practitioners here. So, is there a way of gauging success based on what the team does and how well they're able to let go of the legacy mechanisms they've had over the years?

Sarbiewski: We talk about this a lot. We see pressure from the business to change how we do things and the technologies we use. From the business side, you see it in a variety of ways. You see, "Oh, it’s the consumerization of IT, and what I see in my consumer world I want in IT. I see this all moving fast and I don’t feel my business moving." You see that pressure.

But, you absolutely see pressure to change from the bottom up, from the teams themselves. We want to work in a different way. We want to be able to execute faster. The whole move to agile has been in large part, if not primarily, driven from the development and delivery teams up. So, there is a huge motivation there.

And they're going to look at a variety of things. They're going to look at things, as Brad said, like customer satisfaction as part of that. How quickly does a change request get turned around?

That's a pretty easy metric, because the changes come into systems like the service desk. There's a request. When did that thing get requested, and when did it actually get executed? You can start to look at some things like that, and as you see improvement, not only in the responsiveness but as the number of issues goes down, those are things that the team should be looking at as great measurements.
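That turnaround metric is easy to compute from service-desk records; a minimal, hypothetical sketch:

    from datetime import datetime

    # Assumed extract from the service desk: (requested, delivered) per change request.
    change_requests = [
        (datetime(2010, 9, 1),  datetime(2010, 9, 20)),
        (datetime(2010, 9, 5),  datetime(2010, 10, 1)),
        (datetime(2010, 9, 12), datetime(2010, 9, 30)),
    ]

    turnaround_days = [(delivered - requested).days for requested, delivered in change_requests]
    print(f"Mean request-to-delivery time: {sum(turnaround_days) / len(turnaround_days):.1f} days")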

I often counsel clients to set up some MBOs and some rewards structures around that too, because this is something that the business is going to feel. It’s not just what’s there at release. What did we find here and how is it? It’s really that first 90 or 180 days of use in the business. I'm going to take a snapshot, and if it’s good, if we are constantly improving on that and hitting our targets, that’s where we get our bonuses.

It’s that result. And it shifts it from, I hit my date, I threw it over the wall to ops, and I washed my hands of it. No. We're all in business here. We're not in IT. We're in business, and business means this thing is running like it’s supposed to. That means apps and ops combining and taking a measure at 180 days.

There’s a lot of pride when you see the metrics go in the right way. The feedback that I've seen for our clients that do this really well is where the business comes back and says, "Oh my God. The responsiveness is incredible. Even if I'm not getting the massive stuff that I used to get once every two years, I'm seeing movement on a regular basis, and I love it." And lot of clients that we talk to are really fired up about that.

Those are the kinds of things to strive for, look for, and really have a great feedback loop for on your delivery teams.

Important points


Hipps: There's an important point there. As people know, there is an endless number of KPIs available to you, and all sorts of people who recommend which ones are the best. We probably didn't make it this explicit in this version of the book, and maybe this goes in the next revision, but when it comes to how you're going to measure your success, I'd look at a few things in terms of the kinds of measurements you want to track.

First of all, I don't know that I would pick more than three or four. You find yourself with six, seven, or eight different things you're trying to stay on top of and measure, and it becomes its own game. I would keep it as simple as humanly possible. Three is a nice number of measurements.

The second thing is, I would want it to be pretty doggone intuitive what the business value is, if we are doing well in the measurement. I wouldn’t want to have to go through too many mathematical steps to get back to what the value is to the business, as I look at whatever measurement I have chosen to evaluate myself by.

The third aspect gets to what Mark was just saying. In an ideal world, at least one of the measurements, if not all of them, would speak to how well we're working pan-IT. It's not just how well we're building the application or how quickly we're getting it pushed into operations' hands, but how well we're working together as teams, as developers, as folks on the operations side, as planners, and as enterprise architects.

Mark was talking about looking at mean time from change request to production. Well, that's an example of the entire IT supply chain right there. Presumably, if I set some relatively easy target and start trending in that direction, then I can have at least some sense of satisfaction. We must be getting better at working together in ways we didn't before.

For more information on Application Lifecycle Management and how to gain an advantage from application modernization, please click here.

Gardner: Thanks, Brad. Back to your conclusion in your book, you not only try to explain how people can assess their own progress, but you also look to what you consider world-class delivery organizations and try to learn from what they do well.

You have four traits that you point out: predictability, repeatability, quality, and change readiness. Mark, maybe you could drill into these and explain why these proved to be so important for these top players?

Sarbiewski: We've done numerous surveys, and there are lots of other surveys out there, about what the business is asking of these application teams and of IT in general. For the last couple of years, it's been pretty consistent. Surprisingly, to some degree, agility and innovation are right at the top of the list. Cost is up there in third place in our surveys. So, it's hugely important, but it's not the first thing, and that's almost a little counter-intuitive.

What we hear from our clients is that things are hyper-competitive and that technology, in particular software and applications, is a huge competitive advantage. So, our ability to move fast and beat our competitors to the punch with capability is enormously important.

You turn that around. Suppose I'm an application executive and I own this problem. How am I going to deliver that to the business? I've done all kinds of things to try to make that happen. I've brought in automation. I've brought in management. I'm outsourcing to drive cost down. I'm adopting new technologies for this rich experience. I'm introducing a whole host of change to meet those business objectives.

Have to deliver

But, at the end of the day, I have to be able to deliver every time. I've got to be able to know when I'm going to deliver. That's absolutely critical to delivering agility. What it means to me as an app owner is that I'm ready to make change. And that's a big statement.

So the change readiness comes in. Have I architected for change? It's not just that my people are ready for it and my processes are good for that; my software itself has to be changeable, and I need automation where I can make a change and know if I've broken something else.

I'm trying to deliver that innovation and agility for the business. I've introduced a whole host of things to deliver against that, and I have to manage this in an extraordinary way, in a different way than I've done in the past. What's going to help me is being able to predict where I'm going to land, repeat this for every project I get, and be change ready without sacrificing quality.

I have to do all that and keep quality high. Those become the North Star principles that I want to keep my team focused on, thinking about how things like being change ready facilitate the agility that the business wants.

Gardner: Brad, change ready really resonates, nowadays. We've got cloud computing in many people's minds as something important for them to be focused on. We've got mobile computing and how that impacts enterprises and their processes. And we're looking more at sort of the social business with collaboration and rich sharing of data and information.

So, change readiness seems to be the norm, or perhaps am I overstating that?

Hipps: No, I think that's right. Speaking from the application domain, our friends in the agile communities have been the leading champions of this notion in applications for a long time. Our default stance was one of being change averse.

By that, I mean that there was this whole contractual relationship with business. You tell us what you need, and we're going to document it as best as we can, down to having all the semicolons in the right place.

We're going to break out the quill pens and ink our signatures. Forever shall it be, and if you change anything here, we're going to hit you with a request for change, and it will go through a cycle of six weeks, and maybe we'll agree to it, etc., etc. For the longest time, that was the mindset. You can look at that and say it's awful, but when I had far fewer applications, and they took far longer to build, it was just the way of the world.

The recognition today, for all of the reasons we've talked about in this podcast and others, is that our applications are everywhere. They're always on. There is nothing I can do in a business that isn't going to touch an application. That fundamentally means we need to sweep that notion of being change averse off the table. Instead, we need to be in a position of embracing change. We do need to be change ready.

It's not that the business is going to sit back and say, "You're right. We're sorry. We won't ask for so many changes." That isn't going to happen. From an IT and application perspective, we need to be oriented and positioned so that change isn't something we fear or protect ourselves from. It needs to be something we embrace as a fact of life.

The leading traits

As Mark said, we need to be architected and engineered, from a people-process-technology perspective, to put ourselves in a position to be that way. In the book, we talk a bit about some of the principles we think come into play for change-ready organizations. That's why it's one of the leading traits, one of the leading principles, in world-class organizations.

Gardner: Okay, we've talked about an awful lot, and this book encompasses an awful lot. It might be difficult for people to get a handle on where to start, but you've addressed that as well. You've conceptualized this along three lines: think big, start small, and then scale quickly and adapt. Let's go through these. Let’s start with you Brad. Think big -- what does that mean?

Hipps: It could be a mantra of sorts: think big, start small, scale quickly. The basic idea of think big is that you want to spend some time making sure you've all got a shared vision of where you want to be, and we talk a bit about that, whether it's a maturity model or these principles of predictability and repeatability.

Hopefully we've set out at least some suggested guidelines for constructing what your end state might look like. But the point about thinking big is that, as we all know, certainly in IT but probably anywhere, it's very easy to fall into a state of analysis paralysis. We've got to figure out exactly the right metrics to decide exactly what we're going to be. We've got to figure out precisely what our timeline is.

We can borrow from our friends in agile, who have said that you've got to understand the perimeter of what it is you want to accomplish, but it's bound to change. Those perimeters are bound to shift. You're bound to discover things about yourself, your organization, what's feasible, and what's not, in the process of actually trying to get there.

So, it's important to set yourself an objective and make sure it's a shared objective. It's just as critical to get going to not fall into a trap of endless planning and reconsideration of plans.

If you then pluck the low-hanging fruit, the easy things we could do starting this week or even tomorrow to advance us at least generally toward that end objective, that's great. Then it becomes a matter of just continuing to move, scale, and adapt.

Somewhere in the book, we make the point that, as a member of an application team, I cared a lot more about measurable progress, seeing things actually advancing and getting better, than I did about how shiningly brilliant the end state was going to be or exactly how we were going to get there.

I was far more interested in generally getting a sense of what our North Star was, and then getting going, and actually seeing progress. So that, in a nutshell, is what we mean when we say, think big, start small, scale quickly and adapt.

Gardner: Mark, any further thoughts on this philosophical approach to the application lifecycle?

Unconscious sabotage

Sarbiewski: Absolutely. In a former life, I spent a number of years doing process change for companies. There were some trade secrets in the firm I worked with. They recognized an unchanging fact: people can consciously or unconsciously sabotage the greatest plans, any process you want, or any kind of change.

You have to start with people. It does involve people, process, and technology, in that order, but the people considerations come first. Do we have that shared vision? Who are the skeptics? Where do we think this could go wrong? Are we committed to getting there?

There were some questions we'd ask as we were embarking on making a change like this. First of all, we asked: what project or pilot, if we made these changes on it, would lead people in the organization to say, "If it works for that project, it will work for us as an organization"?

So, find that visible pilot project, not one that's an exception. Don't pick one where there are four developers and they're all in the same room. If you try something new there, people can say, "Well, of course it worked for that, but that's so atypical." So, find the right project.

Beyond that, find a champion who is really respected in the organization, but skeptical of the change. We would go looking for one or two people who were open-minded enough to really give it a go, but maybe steeped in how we've always done it, and who have been very successful doing it that way. Then people can say, "That's the kind of project we do, so you need to be able to make it work there. If Joe or Mary, or whoever it is, buys into it and it works for them, I believe."

The one other thing I'd say is to start thinking about those types of metrics, those cross-silo and lifecycle-oriented goals and metrics. We talked about one just a bit ago, where we reward our delivery teams after six months of being live. Maybe we jointly reward the operations and development teams if they've met the customer satisfaction goals, the service level agreements (SLAs), and low counts of defects in production. You start to create a different dynamic when you think more about lifecycle goals and cross-team goals.

Gardner: Now, I know books like this involve a tremendous amount of work, and it's something you really have to pour your heart into. Brad, the last question goes to you. What do you hope happens as a result of this book?

Hipps: The spirit of this book, and probably of a lot of books like it, is that somebody picks it up and maybe doesn't read it cover to cover. That's okay. They pick and choose their places, but they take away one idea that's actually implementable. If I have one hope, it's that we haven't been so pie-in-the-sky in our thinking that somebody reads this and says, "Yeah, nice idea, but it will never happen here."

So, that would be my hope: that somebody takes away one idea that's implementable in the near term within their organization.

Gardner: And in fairness I should offer the same question to you, Mark. What do you hope happens as a result of the book?

Sarbiewski: You mean besides making The New York Times bestseller list? I can't hope for that.

Gardner: Regardless of its reach.

Software is important

Sarbiewski: What I'm hoping is that, in these hundred or so pages, the executives in the enterprises we're talking to take just a couple of hours and get the chance to think about how important software is, and what the true life of an application is.

Once you start to go down that path, you start to say, wait a minute, 10 or 15 years of evolving this capability, what does that mean? When things are live and I've got a hot request from the business to make a change, what needs to happen? How much money will I spend on that?

The one "aha" moment is seeing that those 10 to 15 years matter, when I'm delivering value to the business and innovating for it. To be successful during those 10 to 15 years, I will make different decisions when I build this thing. I will focus on process.

I will build the automation to a different level, because I’ve stopped thinking that my job is done when I go live. If that’s truly the job, you’ll make a lot of shortcut decisions to get to go live. But, if you think bigger, you think about the full life of an application and what it delivers to the business. All of a sudden, it makes a whole lot more sense to do things a bit differently, to set myself up for 10 years or 15 years of success with the business, as opposed to a moment when I can say, "Yup, I achieved a milestone."

Gardner: Very good, but we have to leave it there. We've been examining how our shifting applications and IT landscape provides a huge opening for improving how applications are built, consumed, and managed, using new application lifecycle management methods and concepts.

I want to thank our guests, the authors of the book we've been discussing. We've been joined by Mark Sarbiewski, Vice President of Marketing for HP Applications. Thanks so much, Mark.

Sarbiewski: Thank you.

Gardner: And also Brad Hipps, Product Marketing Manager for HP Applications. Thanks to you, Brad.

Hipps: Thanks, Dana.

Gardner: This is the last in a series of three podcasts on ALM, in which we've been examining a new book on the subject, The Applications Handbook: A Guide to Mastering the Modern Application Lifecycle. It offers some powerful methods for attaining overall business services delivery improvement. Thanks for joining our series, and we hope you have a chance to get the book and examine it in more detail.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

For more information on Application Lifecycle Management and how to gain an advantage from application modernization, please click here.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a sponsored BriefingsDirect podcast, the third in a series discussing a new book on ALM and its goal of helping businesses become change ready. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.
