Monday, October 16, 2006

Transcript of Dana Gardner's BriefingsDirect Podcast on Application Development Quality

Edited transcript of BriefingsDirect[TM] podcast with Dana Gardner, recorded Oct. 2, 2006. Podcast sponsor: Borland Software.

Listen to the podcast here.


Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, a sponsored podcast discussion about IT quality, and the importance of quality as an essential ingredient in any application development activity -- not as an afterthought or an after-effect, but quality throughout, from inception and requirements, right through production and the rest of the lifecycle. Joining us in this discussion we have a representative from Borland Software, Brad Johnson, who is the director of product marketing. Welcome to the show, Brad.

Brad Johnson:
Thanks, Dana.

Gardner: Also joining us is a practitioner of quality in the application development process, Chris Meystrik, the vice president of software engineering at Jewelry Television in Knoxville, Tenn. Welcome to the show, Chris.

Chris Meystrik:
Good to be with you Dana and Brad.

Gardner: Okay, we’ve heard a lot over the years about software development, and we’ve seen study after study that shows abysmal rates of on-time performance, of being over-budget, of requirements that don’t seem to make sense once the application gets into production. Obviously, not the best track record, and yet increasingly -- because companies are online with their marketing, they are online with the way they actually produce and deliver services, they are online with the way they acquire their resources through a supply chain -- it seems that application development is more important than ever. What do you think we need to do here, Brad, to make this a high-quality process from start to finish? What's been missing?

Johnson:
Well, I think what's been missing is a real focus on quality from day one, where organizations are thinking about building out complete requirements that include attributes of performance and functionality from the very beginning, when they start thinking about a project. As they move through the project lifecycle, they need to think about architecture and development. The testing organization, which very often is responsible for verifying quality at the tail end of a project, needs to be involved earlier in the cycle. We need projects that essentially capture what the business users want, and also capture quality attributes, so that as the development process flows across the whole software delivery lifecycle, quality is considered a priority from the very beginning.

Gardner:
So we’re not really just talking about the quality of the code itself. We’re talking about the quality of the process, the people, the organization, and the tools -- everything that lines up before the code. Is that the way to think about it?

Johnson:
That’s absolutely important. Thinking about the process, Borland looks at it in four dimensions, from managing the whole process and planning everything, through the requirements of the project, to verification and validation, which, as most people understand, is much bigger than just testing. It’s exactly as you stated. It’s looking at the process, it’s putting in gates and checks that include peer reviews, and making changes to the process -- iteratively if necessary. So you’re right, it’s looking at the holistic view of a project as it’s delivered, where quality is essentially built-in, but not independent of testing.

Gardner: I suppose when you had an environment where all of your developers were within a stone’s throw of one another, and there was a single code base to work from, with centralized check-in and check-out, and you could manage component-by-component activities in a tight-knit group, injecting quality into that process was one kind of problem.

But we have a much different environment today, where we’re seeing development teams that are dispersed geographically. We’re seeing much tighter timelines where code is getting checked in and checked out, and there are implications for how that impacts architecture and infrastructure earlier in the process.

We’re also seeing distributed computing environments where we’re going to have a heterogeneous runtime environment, most likely. Now that we’ve involved ourselves with the complexity of these distributed environments, is there a team approach to this? Should we think about it as check-in and checkout from a decentralized geographic standpoint, or do we still need to have a sort of a centralized approach? Is this a monolithic quality process for a distributed development process?

Johnson:
A centralized kind of quality control center is a great idea. In practice and reality, I think we're far away from that. But you hit on what's absolutely a significant pressure on organizations today. That is, we’ve actually queried prospects, customers, and partners over the last few months, and distributed development is not just coming, it’s here, and it’s one of the biggest challenges faced by development teams today.

So, what you really need is a centralization point, where everybody is getting the same view of quality throughout the whole process -- whether code is being checked in by teams over which we have very little control, because they are outsourced or off-shored, or by teams that are part of the organization but distributed geographically and across time zones. You need to be able to understand, from a development standpoint, exactly what is being checked in, and what type of quality gates have been put in place, so that when code gets checked in, we know that it’s free of security risks.

We need to know, for example, that there is no open-source code in there that is going to cause problems later. Hopefully, we’ve done the right level of JUnit or NUnit testing, or even functional automation, on that early-stage code, so that at least we know that when code is added to the build, the build is not at risk. So, if we’re looking at that from a worldwide perspective, it's very, very challenging.
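The kind of unit-level quality gate Johnson describes -- exercising checked-in code before it reaches the build -- can be sketched with Python's unittest module, a close analogue of the JUnit and NUnit tools he mentions. The order_total function and its tests are invented for illustration; a real gate would run each team's own suite.

```python
import unittest

# Hypothetical unit under test: pricing logic checked in by a remote team.
def order_total(unit_price, quantity, discount=0.0):
    """Return the extended price after a fractional discount."""
    if quantity < 0 or not (0.0 <= discount <= 1.0):
        raise ValueError("invalid quantity or discount")
    return round(unit_price * quantity * (1.0 - discount), 2)

class OrderTotalGate(unittest.TestCase):
    """The suite a build gate would run before accepting checked-in code."""

    def test_basic_total(self):
        self.assertEqual(order_total(19.99, 3), 59.97)

    def test_discount_applied(self):
        self.assertEqual(order_total(100.0, 2, discount=0.25), 150.0)

    def test_rejects_bad_discount(self):
        with self.assertRaises(ValueError):
            order_total(10.0, 1, discount=1.5)

# A continuous-integration gate would run `python -m unittest` over suites
# like this one and block the build on any failing (non-zero) exit code.
```

The point is not the arithmetic but the gate: the check-in is rejected automatically, regardless of which time zone it came from.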

Gardner: So a brittle sort of top-down quality approach doesn’t seem to be the right fit for today’s development environment. I suppose we need something that’s got multiple levels, multiple components with an ability to integrate, but also to pass the baton, if you will, from one aspect of development to another. I think we all refer to this sort of multifaceted, flexible, and yet controlled environment for production as Lifecycle Quality Management (LQM). Or am I missing the point of what LQM is about?

Johnson:
No, you haven’t missed the point, you’ve nailed it. Lifecycle Quality Management is about exactly that: looking across the software delivery lifecycle, and thinking about quality throughout the whole process. And that includes institutionalizing methods of assuring that requirements are correct, the code is complete, and defects have been minimized as much as possible. Again, it’s looking across the lifecycle, and not just at one specific place in time, or one specific organization that’s contributing to any of those aspects.

Gardner:
All right. That sounds like it makes tremendous sense. Of course these things are easier said than done. Let’s talk to the practitioner here. Chris, tell us a little bit about Jewelry Television and what your role is there as vice president of software engineering?

Meystrik:
Jewelry Television is based in Knoxville, Tenn. We are a multi-channel sales vehicle. Our biggest sales channel is our cable television programming. It’s in more than 50 million homes across the United States. We have an ecommerce-based Web sales channel also. The vast majority of our calls come into our call center here in Knoxville, and we offload a good percentage of that to the ecommerce Website, and through interactive voice response systems here at Jewelry Television.

When I came in here, my role was basically to rebuild the enterprise infrastructure, specifically around the business that Jewelry Television is in. The company has looked closely at its competitors -- some of them it has bought, and others are still out there in the market -- and believes it needs a customized, service-oriented solution to tackle the marketplace, and basically give Jewelry Television a strategic and tactical advantage, to be better than anybody else in our industry. My job right now is to build that infrastructure for Jewelry Television.

Gardner:
So, IT is integral to your organization, not just for the ability to take in orders and deliver them, but you also acquire. And, if I understand it correctly, you’re also in the jewelry production business in terms of buying jewelry around the world and adding custom elements to these pieces. So, you’ve got a supply chain, and then you’ve got a distribution channel you've got to manage. Is that right?

Meystrik:
That’s correct. And we have an inventory warehouse that we have to manage in the middle there, as well as customer relationships from an ecommerce perspective, a call center perspective, customer service, merchant services, the whole business. One of our biggest strengths is our buying channels. Some of the best buyers in this industry are here at Jewelry Television. Some of the best show hosts that know how to sell jewelry on the television are here at Jewelry Television. It’s a world-class environment; we need world-class systems that help this company keep growing.

Gardner: So you’ve got a jewelry lifecycle quality management problem, right?

Meystrik:
It occurred to me, that while Brad and you were conversing about the code, and you made the comment that it really isn’t about the code -- to me it isn’t at all about the code. All the code is doing is saying, "Our buyer’s made some decision," or is potentially going to make a decision on some piece of jewelry or gemstone. And somehow that piece of jewelry, or that bulk-load jewelry, has to make it to Jewelry Television.

When it comes, I’ve got to account for it. When we account for it, we’ve got to put it away. When we put it away, we’ve got to be able to sell it, and tag it and picture it. And then we have to be able to manage it when somebody actually buys it, which might not be for another year, depending on what we bought it for. Maybe it's instantaneous, and we’re buying it out of bulk. All that stuff has to be managed, and the code just does that. In order for me to test the quality of my software, I have to prove that that process works; I have to do a real live end-to-end test.

Gardner: Give us a sense of the scale here. How many pieces of jewelry do you guys move in a typical month, or maybe even in the fast months, like just preceding the holiday shopping season?

Meystrik: Preceding the holiday seasons, we will do on the order of 40,000 orders a day. And those are previous numbers. It may get bigger. We’re anticipating a good holiday season.

Gardner:
Give our listeners a sense of the type of concern we have here. What are your annual revenues at this point?

Meystrik: We are a private company, but our annual revenues are somewhere around $500 million.

Gardner: So, how do you keep the trains running on time in your application development, and deployment IT infrastructure?

Meystrik:
People turn out to be the key here: having high-quality engineers who really understand the landscape, can think out a couple of years, and understand where we’re trying to get to. We tried several approaches when we first got here. The company didn’t have a lot of processes in place, and we went with a waterfall approach. We really wanted the business to define for us what it was they were trying to build.

If those projects were small, they were fairly successful. If they were medium-size, we could do them pretty well. In some of those bigger projects, time just ran on, and time ran on, and by the time we got to the end of them, we weren’t dealing with the same problem anymore.

When that happens, the software isn’t necessarily solving the problem you intended it to solve. So, what we’ve done is move to a very agile, iterative development process, where quality has to be part of it. At the very beginning of this process, from requirements and even in pre-discovery, we have QA engineers and QA managers onboard with the project to get an understanding of what the impacts are going to be. That way we can get the business thinking about quality at the very beginning, with our product managers and project managers getting a bird’s-eye view of what a real-life project schedule might look like. From there on, our QA is heavily involved in the agile process, all the way to the end, measuring the quality of the product. It has to be that way.

Gardner:
What do you look for in a vendor in the Application Lifecycle Management (ALM) space? What is it that you need to make these multi-dimensional problems manageable?

Meystrik: We need the vendors to supply us with products that are open, products that will communicate with one another at every phase in the lifecycle of our product development. We have requirements engineers, product managers, and project managers -- both in the initial stages of the project together with the project charter -- trying to allocate resources, and then putting initial requirements together.

When the engineers finally get that, they’re not dealing with the same set of tools. The requirements engineer’s world is one of documentation and traceability, and being able to make sure that every requirement they’re writing is unambiguous and can be QAed at the end of the day. That’s their job at the beginning.

When that gets pushed off into engineering, they’re using their source code management (SCM) system, and their bug and issue tracking systems, and they’re using their IDEs. We don’t need them to get into other tools. All these tools need to coexist in one ALM framework that allows all these tools to communicate.

So, for example, within Eclipse, which is very, very popular here, you’re getting a glimpse of what those requirements look like right down at the engineers’ desktop, without having to open up some other tool, which we know nobody ever does. Without that, you have a barrier to entry that you just want to avoid, and the communication overhead gets heavier.

When it comes to traceability, you want traceability all the way down to the source-code level, from those requirements into Subversion, which is the tool we’re using. The same goes all the way down to generating test plans out of requirements: our QA engineers are not using the requirements tool; they are using automated regression testing tools and automated performance testing tools. They want to write their test plans and have bidirectional input in and out of the requirements tools, so they can maintain their traceability. So, all across, it has to be communicating and open.

Gardner:
Now, it sounds to me as though you’re asking for a lot here. You want openness, you want interchangeability, but you also want full visibility, soup-to-nuts. You want traceability, and in a sense you want both integration and openness. Let’s bounce it back to Brad here. When your customers say that to you, how do you fulfill that sort of a need?

Johnson:
Well, it’s not an easy request, but the reality is that we live in a world today where development organizations have to put together a set of very heterogeneous tools. The core of LQM is our test management framework, which was developed to be very open and to support many types of technologies.

Specifically, we realize that there are a lot of test automation tools out there, and many companies have made investments with other vendors, as well as open source vendors. So our objective around test management, therefore, is really to be able to support other vendors' test automation as well as our own.

We’ve brought in the Silk products from Segue, and certainly the integration between our test management and test automation tools with SilkTest and SilkPerformer is the best. But we realize again that there are many other types of testing out there. So test management needs to be the core harness for however organizations are doing their testing.

We’ve realized one thing that’s very important: That 80 to 95 percent of testing is still manual. So we’re spending a lot of time working on how we better enable and make more efficient the manual testing process, as well as the test automation process. From an openness standpoint, we are very focused on creating a platform that supports whatever customers are doing around testing.

Expanding that further, we also realize that for requirements management and source-code management, and so forth, there are other solutions in the market as well. So our integration strategy around those products is to support what’s out there, and what’s leading in the marketplace. But we really make sure our holistic platform includes requirements management, SCM, and test management, and is seamlessly and deeply integrated from the beginning to the end.

Gardner: You’ve just come out with some announcements in early October -- the Borland Lifecycle Quality Management Solution -- and it’s a framework approach where you have interchangeability, but you’re also, of course, trying to toot your own horn about what you think are your best-of-breed components within that framework. Give us a rundown, if you would, of what this Lifecycle Quality Management Solution consists of, and how it helps folks like Chris at Jewelry Television manage their complexity, while also giving choice?

Johnson:
Sure. First I want to reiterate that recognizing that there’s a need for a lifecycle approach to quality is really the first step. Getting the right management buy-in across the organization, developers as well as business leaders, needs to be the first thing you do before you really think about changing the way you’ve approached this. The other thing is, as companies make decisions about going Agile, these issues need to be considered.

The process I’ve already defined is very important. We understand that, and we understand how to get there from where we are today.

From a technology standpoint, the platform that Borland is delivering allows business users, and business analysts to capture a requirement correctly from the very beginning. We use a workflow-based definition and elicitation tool within our LQM Solution that lets users do that very, very well -- and lets them write in the quality attributes of that requirement. And then they can even take that basic requirement and develop and generate -- automatically generate -- test cases to put into our test management harness. We also deliver a full enterprise-class requirements management framework that is now deeply integrated with our test management framework.

So, as Chris was mentioning, we’ve got bidirectional traceability now between a business requirement that’s driving the project and the testing requirements for the whole QA process in the organization. And then we’re integrated on the test management side with our SCM solution, so that all of the test assets created during the testing process can be versioned and source-controlled, as well as providing a defect-management capability -- so defects are rooted out of the application throughout the whole process.

So, the technology stack is really a deeply integrated platform for traceability from the very beginning to the very end of the testing process.
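The bidirectional traceability Johnson describes -- requirements generating test cases, with links maintained in both directions -- can be illustrated with a toy data model. This is not how CaliberRM or the Silk tools actually store things; the record shapes and IDs below are invented for illustration.

```python
# Toy model of bidirectional requirement-to-test traceability.
# The dict shapes ("id", "acceptance_criteria", etc.) are hypothetical.

def generate_test_cases(requirement):
    """Create skeleton test cases, each carrying a back-link to its requirement."""
    cases = []
    for i, criterion in enumerate(requirement["acceptance_criteria"], start=1):
        cases.append({
            "id": f"{requirement['id']}-TC{i}",
            "requirement_id": requirement["id"],  # backward trace: test -> requirement
            "description": criterion,
            "status": "not run",
        })
    # Forward trace: requirement -> tests, so a changed requirement can
    # immediately surface every test case that needs a re-run.
    requirement["test_case_ids"] = [case["id"] for case in cases]
    return cases

def impact_of_change(requirement, cases):
    """When a requirement changes, list every linked test case for re-execution."""
    return [c["id"] for c in cases if c["requirement_id"] == requirement["id"]]
```

Because the links run both ways, a wrong high-level requirement can be traced down to the affected tests, and a failing test back up to the requirement it verifies.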

Gardner: It sounds as if there is actually a continuum of suites here, right?

Johnson: There is. We have the Silk suite from the Segue acquisition, which is growing and blending into the Caliber suite for requirements definition and management, as well as our StarTeam suite for software configuration management. Our strategy moving forward is to continue making the integrations more seamless and more valuable.

Gardner: Okay, back to Chris in Knoxville. When you hear this -- and this has just come out, so I know you don’t necessarily have it in place -- does this sound like a good fit? I know you use some of the Borland brands and products now. Does this give you a big impetus to say, "I’ve got to have this framework in addition to these individual suites and products?"

Meystrik: It gives me an impetus to go check it out. I am not a very big monolithic purchaser or consumer of technology goods. I like very open frameworks, and to that end we bought Segue before we even thought Borland was in the picture. We thought Segue was the best tool on the market. We did a long evaluation on automated test suites, and now it’s a part of Borland.

We purchased a different requirements management tool that did not work out for us, because it didn’t have the openness that Brad’s talking about. It didn’t have the integration capability of the CaliberRM product. It just turned out Caliber was Number 2 on our list, so we didn’t do another evaluation; we picked the Borland product.

We had already purchased Together for IDE integration with Eclipse for all of our architects. So we are a fairly large Borland customer. But for all of the listeners out there saying, "Well, this guy, he’s into Borland, so of course he is on the call" -- we kind of stumbled into the whole deal by making some really good decisions. And we believe that the integration between CaliberRM and the Silk tools is going to be outstanding. We’ve seen it in action, and it keeps getting better. We’re installing the Caliber products very, very soon to replace the product that we had in-house, and we’ve had the Silk tools for over a year and a half now, and we love them.

Gardner:
And what about the use of this framework in conjunction with non-Borland products, I assume that you’ve got some of those too?

Meystrik: We do, and I’m hoping that really works out. We’re very big users of open-source products. We have Subversion as our source code management system, and we have made some open-source modifications to the Bugzilla tool for tracking what we call action requests -- things that we do to our system. We have heavy traceability between Subversion and the Bugzilla implementation here at Jewelry Television, and we hope to move that traceability all the way up into CaliberRM, and thus all the way over into Silk.

When somebody -- when some senior executive -- comes to me and says, "We’ve got this high-level requirement wrong," I want to understand the impact of that all the way down to my source code, and how much work the QA team has already put into verifying SLAs and things like that within the code. How much rework is it going to take? I think they’ve got a solution that’s going to do it.
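The Subversion-to-Bugzilla traceability Meystrik describes is often enforced at check-in time with a pre-commit hook. Below is a minimal sketch in Python; the AR-1234 "action request" ID format is an assumption based on his wording, and the hook wiring (svnlook reading the in-flight transaction) follows standard Subversion practice rather than Jewelry Television's actual setup.

```python
"""Sketch of a Subversion pre-commit hook enforcing bug-tracker traceability.

Assumes commit messages must cite a Bugzilla "action request" ID such as
AR-1234; that ID format, like the rest of this script, is hypothetical.
"""
import re
import subprocess
import sys

AR_PATTERN = re.compile(r"\bAR-\d+\b")

def commit_message(repo, txn):
    # svnlook reads the log message of the in-flight commit transaction.
    return subprocess.check_output(["svnlook", "log", repo, "-t", txn], text=True)

def check_message(message):
    """Return the cited action-request IDs, or None if the commit lacks one."""
    ids = AR_PATTERN.findall(message)
    return ids or None

def main():
    repo, txn = sys.argv[1], sys.argv[2]
    if check_message(commit_message(repo, txn)) is None:
        sys.stderr.write("Commit rejected: cite an action request, e.g. AR-1234\n")
        return 1  # non-zero exit makes Subversion abort the commit
    return 0

if __name__ == "__main__" and len(sys.argv) >= 3:
    sys.exit(main())
```

With every revision tied to an action request, the chain from a changed requirement down to the individual commits becomes a lookup rather than an archaeology project.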

Gardner:
Now, within Jewelry Television what group is your biggest problem? And I don’t mean that in a negative way, but I mean, if Monday morning comes around, and you go in the office, who are the people you don’t want to see? Who is your most troublesome internal constituency, and I don’t need names, but sort of a department level or something like that?

Meystrik:
I think not in terms of trouble, because we just value our customers across Jewelry Television. This is an amazing place to work, and all the people here are brilliant.

But when I come in and hear, for example, that we had a huge Saturday and that the call center had to go on paper tickets for an hour, what I know is that I have 300 or so people who did not have a very fun hour on Saturday. I take it personally, and I want to fix that problem. That’s a big impact.

There are other areas too. You could have some backups in inventory that affect your customers. All this stuff is very customer-facing, and I would say that right now, that’s the area we’re really, really focused on. We’ve got a new order management system going out within the next 30 days.

Gardner:
And come Monday morning, you can’t go to those people who are doing the paper tickets, and explain to them that you didn’t have sufficient visibility into the requirements process due to a lack of integration between two disparate tools vendors, can you?

Meystrik: No, no. In fact, I’m not even sure they would know what that meant, just because they don’t know the IT world. They'll still say, "We still ran paper tickets."

Gardner:
They just want to know that it works, right?

Meystrik:
Exactly. They want to know that it works, and they want to know the process is going to get better. They want to know their talk times and their queues are going to go down, and that their customers are going to be significantly happier. We’ve got a lot of repeat customers here, and we want to know we can take care of them quickly, and they can do more shopping on the phone because the systems are more efficient, because we’ve thought about quality from the very beginning. We understand the SLAs that have to happen.

Gardner: Let's look to the future a little bit. You mentioned early on, Chris, when you were describing what you do, that services orientation was important to you. It strikes me that quality, while important in a distributed environment, is absolutely critical in a service-oriented architecture (SOA), or distributed services environment. Do you concur with that, and where do you see the importance of quality going as you move more into these independently created services?

Meystrik:
I absolutely concur with that. When you put out a SOA and you start doing B2C, and B2B, and business-to-vendor communications on a broad scale -- you basically open up the kimono and say, "You want to communicate with Jewelry Television? This is the service framework that you can use to talk with us. Have fun."

That means that any one of those components that you don’t even know about could be the centrally hit hot spot in the system. You’ve got to be able to uncover, first, what that hot spot is going to be. And if it is a hot spot, what kind of SLAs do you have to put around that thing? You have to know as you scale up -- whether it’s a downstream call, or an object, or a method that’s being called by more web services than you thought -- you’ve got to be able to uncover what that hot spot is and test it, and you’ve got to have SLAs around it.

Brad mentioned a little while ago that too many people are doing manual testing, and I concur. We do too much of it around here, too. If we’re doing an Agile method in an iterative development cycle, we can’t replay manual tests through every iteration. It doesn’t work. Our QA step has to become iterative also; they have to be able to run the first pass manually, and then load that into a regression test in the Silk tool.

This quality becomes just paramount when you go to SOA, mainly because there is non-determinism involved. You don’t necessarily know what your customers or vendors or other businesses are going to think is the most valuable service until you deploy them and they use them.
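Meystrik's two points -- replaying a once-manual test automatically in each iteration, and pinning an SLA on a potential hot spot -- can be combined in a small sketch. Jewelry Television's actual tools are the Silk products; this Python stand-in, with its fake checkout service and hypothetical 200-millisecond SLA, only illustrates the shape of such a test.

```python
import time

SLA_SECONDS = 0.200  # hypothetical service-level agreement per call

def checkout_service(cart):
    """Stand-in for the real order-management service under test."""
    time.sleep(0.01)  # simulate a fast downstream call
    return {"status": "accepted", "items": len(cart), "total": sum(cart)}

def regression_checkout():
    """Steps a tester once performed by hand, now replayable every iteration."""
    cart = [19.99, 5.00]
    start = time.perf_counter()
    response = checkout_service(cart)
    elapsed = time.perf_counter() - start

    # Functional checks mirror what the manual tester verified on screen.
    assert response["status"] == "accepted"
    assert response["items"] == 2
    assert abs(response["total"] - 24.99) < 1e-9
    # The SLA check guards the hot spot as load and scope grow.
    assert elapsed <= SLA_SECONDS, f"SLA breach: {elapsed:.3f}s > {SLA_SECONDS}s"
    return elapsed
```

Once a flow is captured this way, every iteration can re-run it unattended, and an SLA breach on a newly popular service surfaces as a test failure rather than a Saturday on paper tickets.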

Gardner: Okay, well, super, I think we’re about out of time. We’ve been talking about the importance of quality in application development, and about the necessity for a framework that allows interchangeability, but also coordination, management, and visibility. Helping us along on this discussion journey has been Chris Meystrik, the vice president of software engineering at Jewelry Television. Thanks for joining us, Chris.

Meystrik: Thanks Dana, I appreciate it.

Gardner: And also joining us from Borland Software: Brad Johnson, the director of product marketing, and he’s been talking about the new Borland Lifecycle Quality Management Solution, and why this is relevant now. But it sounds like something that will be increasingly relevant in the near future. I want to thank you also, Brad.

Johnson: Thank you, Dana. It was my pleasure to be here.

Gardner:
This is Dana Gardner. I am principal analyst at Interarbor Solutions. You’ve been listening to BriefingsDirect. Thanks for joining us.

Podcast Sponsor: Borland Software.

Listen to the podcast here.

Transcript of Dana Gardner’s BriefingsDirect podcast on application development lifecycle quality. Copyright Interarbor Solutions, LLC, 2005-2006. All rights reserved.
